Search results for: USP4 method
13455 The Effect of Reaction Time on the Morphology and Phase of Quaternary Ferrite Nanoparticles (FeCoCrO₄) Synthesised from a Single Source Precursor
Authors: Khadijat Olabisi Abdulwahab, Mohammad Azad Malik, Paul O'Brien, Grigore Timco, Floriana Tuna
Abstract:
The synthesis of spinel ferrite nanoparticles with a narrow size distribution is crucial for their numerous applications, including information storage, hyperthermia treatment, drug delivery, contrast agents in magnetic resonance imaging, catalysis, sensors, and environmental remediation. Ferrites have the general formula MFe₂O₄ (M = Fe, Co, Mn, Ni, Zn, etc.) and possess remarkable electrical and magnetic properties, which depend on the cations, the method of preparation, the particle size, and the site occupancies. To the best of our knowledge, there are no reports on the use of a single source precursor to synthesise quaternary ferrite nanoparticles. Herein, we demonstrate the use of the trimetallic iron pivalate cluster [CrCoFeO(O₂CᵗBu)₆(HO₂CᵗBu)₃] as a single source precursor to synthesise monodisperse cobalt chromium ferrite (FeCoCrO₄) nanoparticles by the hot injection thermolysis method. The precursor was thermolysed in oleylamine and oleic acid, with diphenyl ether as solvent, at 260 °C. The effect of reaction time on the stoichiometry, phase, and morphology of the nanoparticles was studied. The p-XRD pattern of the nanoparticles obtained after one hour showed a pure cubic phase of iron cobalt chromium ferrite (FeCoCrO₄). TEM showed that more monodisperse spherical ferrite nanoparticles were obtained after one hour. Magnetic measurements revealed that the ferrite particles are superparamagnetic at room temperature. The nanoparticles were characterised by powder X-ray diffraction (p-XRD), transmission electron microscopy (TEM), energy dispersive spectroscopy (EDS), and superconducting quantum interference device (SQUID) magnetometry.
Keywords: cobalt chromium ferrite, colloidal, hot injection thermolysis, monodisperse, reaction time, single source precursor, quaternary ferrite nanoparticles
Procedia PDF Downloads 315
13454 The Construction Technology of a Dryer Silo for Grains Made from Woven Bamboo: A Drying Technology Solution to Empower Farmers in Yogyakarta, Indonesia
Authors: Nursigit Bintoro, Abadi Barus, Catur Setyo Dedi Pamungkas
Abstract:
Indonesia is an agrarian country in which most of the population works as farmers. Among the popular agricultural commodities in Indonesia are paddy and corn. Production of paddy and corn has increased, but this has not been matched by the development of appropriate technology for farmers. Farmers still dry their grain in the sun. Drying by this method has some drawbacks, such as uneven moisture content of the corn grains, a drying time of around 3 days, and lower quality of the products obtained. Moreover, sun drying cannot be done when the rainy season arrives, and product dried in that season is of lower quality. One solution to the above problems is to create a dryer with simple technology: a drying silo made from woven bamboo and wood. This technology is applicable to farmers' groups, and building it is quite cheap. The experimental material used in this research was corn grain. The equipment used comprises a woven-bamboo silo with a height of 3 meters and a capacity of up to 900 kg, a gas burner, a blower, a bucket elevator, thermocouples, and an Arduino 2560 microcontroller. This setup automatically records all temperature and relative humidity data. During drying, nine samples were taken every 30 minutes for moisture content measurement with a moisture meter. By using this technology, farmers can save time, energy, and cost in drying their agricultural products. In addition, the grain dried with this technology has good moisture content and a longer shelf life, because the temperature during the heating process is controlled. This technology is therefore applicable to the public, because the materials used to make the dryer are easy to find and cheap, and the construction is simple, with good resulting quality.
Keywords: grains, dryer, moisture content, appropriate technology
Procedia PDF Downloads 358
13453 Dairy Wastewater Treatment by Electrochemical and Catalytic Method
Authors: Basanti Ekka, Talis Juhna
Abstract:
Dairy industrial effluents originating from typical processing activities are composed of various organic and inorganic constituents, including proteins, fats, inorganic salts, antibiotics, detergents, sanitizers, pathogenic viruses, bacteria, etc. These contaminants are harmful not only to human beings but also to aquatic flora and fauna. Because the effluents contain such large classes of contaminants, the specific targeted removal methods available in the literature are not viable solutions on the industrial scale. Therefore, in this on-going research, a series of coagulation, electrochemical, and catalytic methods will be employed. Bulk coagulation and electrochemical methods can remove most of the contaminants, but some harmful chemicals may slip through; therefore, catalysts designed and synthesized for the purpose will be employed for the removal of the targeted chemicals. In the context of Latvian dairy industries, work is presently in progress on the characterization of dairy effluents by total organic carbon (TOC), inductively coupled plasma mass spectrometry (ICP-MS)/inductively coupled plasma optical emission spectrometry (ICP-OES), high-performance liquid chromatography (HPLC), gas chromatography-mass spectrometry (GC-MS), and mass spectrometry. After careful evaluation of the dairy effluents, a cost-effective natural coagulant will be employed prior to advanced electrochemical technologies such as electrocoagulation and electro-oxidation as a secondary treatment process. Finally, graphene oxide (GO) based hybrid materials will be used for post-treatment of the dairy wastewater, as graphene oxide has been widely applied in fields such as environmental remediation and energy production owing to its various oxygen-containing groups. Modified GO will be used as a catalyst for the removal of the contaminants remaining after the electrochemical process.
Keywords: catalysis, dairy wastewater, electrochemical method, graphene oxide
Procedia PDF Downloads 144
13452 Spatial Data Mining: Unsupervised Classification of Geographic Data
Authors: Chahrazed Zouaoui
Abstract:
In recent years, the volume of geospatial information has been increasing due to the evolution of communication and information technologies; this information is often handled by geographic information systems (GIS) and stored in spatial databases (SDB). Classical data mining has shown a weakness in extracting knowledge from these enormous amounts of data, due to the particularity of spatial entities, which are characterized by interdependence between them (the first law of geography). This gave rise to spatial data mining. Spatial data mining is a process of analyzing geographic data that allows the extraction of knowledge and spatial relationships from geospatial data; among the methods of this process we distinguish the monothematic and the thematic. Geo-clustering is one of the main tasks of spatial data mining and belongs to the monothematic methods. It groups similar geospatial entities into the same class and assigns more dissimilar ones to different classes; in other words, it maximizes intra-class similarity and minimizes inter-class similarity, taking account of the particularity of geospatial data. Two approaches to geo-clustering exist: dynamic processing of the data, which applies algorithms designed for the direct treatment of spatial data; and an approach based on pre-processing of the spatial data, which applies classic clustering algorithms to pre-processed data (with spatial relationships integrated).
This pre-processing approach is quite complex in different cases, so the search for approximate solutions involves the use of approximation algorithms, including the algorithms we are interested in: dedicated approaches (partitioning and density-based clustering methods) and bee algorithms (a biomimetic approach). Our study proposes a contribution to this problem, using different algorithms for automatically detecting geospatial neighborhoods in order to implement geo-clustering by pre-processing, and applying the bee algorithm to this problem for the first time in the geospatial field.
Keywords: mining, GIS, geo-clustering, neighborhood
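The pre-processing approach described above (first detect each entity's spatial neighborhood by a distance threshold, then run a classic density-based clustering over those neighborhoods) can be sketched as follows. This is a minimal illustrative sketch, not the authors' implementation: the parameters `eps` and `min_pts` and the toy coordinates are invented for illustration.

```python
import math

def neighbors(points, i, eps):
    """Indices of points within distance eps of point i (its spatial neighborhood)."""
    px, py = points[i]
    return [j for j, (qx, qy) in enumerate(points)
            if j != i and math.hypot(px - qx, py - qy) <= eps]

def dbscan(points, eps=1.0, min_pts=3):
    """Minimal density-based geo-clustering: returns a cluster id per point (-1 = noise)."""
    labels = [None] * len(points)
    cluster = -1
    for i in range(len(points)):
        if labels[i] is not None:
            continue
        nbrs = neighbors(points, i, eps)
        if len(nbrs) < min_pts:
            labels[i] = -1           # provisionally noise; a cluster may claim it later
            continue
        cluster += 1
        labels[i] = cluster
        seeds = list(nbrs)
        while seeds:
            j = seeds.pop()
            if labels[j] == -1:
                labels[j] = cluster  # border point: joins the cluster, not expanded
            if labels[j] is not None:
                continue
            labels[j] = cluster
            j_nbrs = neighbors(points, j, eps)
            if len(j_nbrs) >= min_pts:
                seeds.extend(j_nbrs)  # core point: expand the cluster further
    return labels

# Two dense groups of coordinates plus one isolated point
pts = [(0, 0), (0.5, 0), (0, 0.5), (0.4, 0.4),
       (10, 10), (10.5, 10), (10, 10.5), (10.4, 10.4),
       (50, 50)]
labels = dbscan(pts, eps=1.0, min_pts=3)
```

Here the neighborhood detection and the clustering rule are deliberately separated, mirroring the two stages of the pre-processing approach.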
Procedia PDF Downloads 375
13451 Growth and Characterization of Cuprous Oxide (Cu2O) Nanorods by Reactive Ion Beam Sputter Deposition (IBSD) Method
Authors: Assamen Ayalew Ejigu, Liang-Chiun Chao
Abstract:
In recent semiconductor and nanotechnology research, quality material synthesis, proper characterization, and production are big challenges. As cuprous oxide (Cu2O) is a promising semiconductor material for photovoltaic (PV) and other optoelectronic applications, this study aimed to grow and characterize high quality Cu2O nanorods for improving the efficiency of thin film solar cells and for other potential applications. Well-structured cuprous oxide (Cu2O) nanorods were successfully fabricated using the IBSD method, in which the Cu2O samples were grown on silicon substrates at a substrate temperature of 400°C in an IBSD chamber at a pressure of 4.5 x 10⁻⁵ torr, using copper as the target material. Argon and oxygen were used as the sputter and reactive gases, respectively. The Cu2O nanorods (NRs) were characterized in comparison with a Cu2O thin film (TF) deposited by the same method but with different Ar:O2 flow rates. With an Ar:O2 ratio of 9:1, single-phase, pure, polycrystalline Cu2O NRs with a diameter of ~500 nm and a length of ~4.5 µm were grown. On increasing the oxygen flow rate, a pure single-phase polycrystalline Cu2O thin film (TF) was obtained at an Ar:O2 ratio of 6:1. Field emission scanning electron microscopy (FE-SEM) measurements showed that both samples have smooth morphologies. X-ray diffraction and Raman scattering measurements reveal the presence of single-phase Cu2O in both samples. The differences in the Raman scattering and photoluminescence (PL) bands of the two samples were also investigated, and the results showed differences in intensities, in the number of bands, and in band positions. Raman characterization shows that the Cu2O NRs sample has pronounced Raman band intensities and a higher number of Raman bands than the Cu2O TF, which shows only one second-overtone Raman signal at 217 cm⁻¹.
The temperature-dependent photoluminescence (PL) spectra showed that the defect luminescence band centered at 720 nm (1.72 eV) is dominant for the Cu2O NRs, while the 640 nm (1.937 eV) band was the only PL band observed from the Cu2O TF. The difference in the optical and structural properties of the samples comes from the change in oxygen flow rate in the process window of sample deposition. This provides a roadmap for further investigation of the electrical and other optical properties for the tunable fabrication of Cu2O nano/micro structured samples for improving the efficiency of thin film solar cells, in addition to other potential applications. Finally, the novel morphologies and the excellent structural and optical properties exhibited indicate that the grown Cu2O NRs have sufficient quality to be used in further research on nano/micro structured semiconductor materials.
Keywords: defect levels, nanorods, photoluminescence, Raman modes
Procedia PDF Downloads 241
13450 Navigating the Case-Based Learning Multimodal Learning Environment: A Qualitative Study Across the First-Year Medical Students
Authors: Bhavani Veasuvalingam
Abstract:
Case-based learning (CBL) is a popular instructional method that aims to bridge theory to clinical practice. This study explores how a CBL mixed-modality curriculum influences students' learning styles and the strategies that support learning. An explanatory sequential mixed-methods design was employed. In the initial phase, the 44-item Felder Index of Learning Styles (ILS) questionnaire was administered to year one medical students (n=142), selected by convenience sampling, to describe their preferred learning styles. The qualitative phase used three focus group discussions (FGD) to explore in depth the multimodal learning styles exhibited by the students. The ILS analysis showed that most students preferred a combination of learning styles, namely reflective, sensing, visual, and sequential, i.e., the RSVISeq style (24.64%). The frequencies of learning preferences from processing to understanding were well balanced: sequential-global (66.2%), sensing-intuitive (59.86%), active-reflective (57%), and visual-verbal (51.41%). The qualitative data yielded three major themes. Theme 1: CBL mixed modalities navigate learners' learning styles. Theme 2: Multimodal learners' active learning strategies support learning. Theme 3: CBL modalities facilitate turning theory into clinical knowledge. Both the quantitative and qualitative findings strongly support the multimodal learning style of the year one medical students. Medical students use multimodal learning styles to attain clinical knowledge when learning with CBL mixed modalities. Educators' awareness of multimodal learning styles is crucial for delivering CBL mixed modalities effectively and for providing strategic pedagogical support that helps students engage with CBL and bridge theoretical knowledge into clinical practice.
Keywords: case-based learning, learning style, medical students, learning
Procedia PDF Downloads 95
13449 A Framework for Teaching the Intracranial Pressure Measurement through an Experimental Model
Authors: Christina Klippel, Lucia Pezzi, Silvio Neto, Rafael Bertani, Priscila Mendes, Flavio Machado, Aline Szeliga, Maria Cosendey, Adilson Mariz, Raquel Santos, Lys Bendett, Pedro Velasco, Thalita Rolleigh, Bruna Bellote, Daria Coelho, Bruna Martins, Julia Almeida, Juliana Cerqueira
Abstract:
This project presents a framework for teaching intracranial pressure (ICP) monitoring concepts using a low-cost experimental model in a neurointensive care education program. Data from ICP monitoring contribute to the patient's clinical assessment, may dictate the course of action of a health team (nursing, medical staff), and influence decisions about the appropriate intervention. This study aims to present a safe method for teaching ICP monitoring to medical students in a simulation center. Methodology: Medical school teachers, along with students from the 4th year, built an experimental model for teaching ICP measurement. The model consists of a mannequin's head containing a plastic bag that simulates the cerebral ventricle, with an inserted ventricular catheter connected to the ICP monitoring system. The bag simulating the ventricle can also be exchanged for others containing bloody or infected simulated cerebrospinal fluid. On the mannequin's ear there is a blue point indicating the right place to set the "zero point" for accurate pressure reading. The educational program includes four steps: 1st, students receive a script on ICP measurement to read before training; 2nd, students watch a video on the subject, created in the simulation center, demonstrating each step of ICP monitoring and the proper care, such as correct positioning of the patient, the anatomical structures used to establish the zero point for ICP measurement, and the safe range of ICP; 3rd, students practice the procedure on the model, with teachers helping them during training; 4th, student assessment based on a checklist form, with feedback and correction of wrong actions. Results: Students expressed interest in learning ICP monitoring. Tests concerning the hit rate are still being performed. The final ICP results and video will be shown at the event.
Conclusion: The study of intracranial pressure measurement based on an experimental model constitutes an effective and controlled method of learning and research, well suited for teaching neurointensive care practices. Assessment based on a checklist form helps teachers keep track of students' learning progress. This project offers medical students a safe method to develop intensive neurological monitoring skills for the clinical assessment of patients with neurological disorders.
Keywords: neurology, intracranial pressure, medical education, simulation
Procedia PDF Downloads 172
13448 Acetalization of Carbonyl Compounds by Using Al2(HPO4)3 under Green Conditions
Authors: Fariba Jafari, Samaneh Heydarian
Abstract:
Al2(HPO4)3 was easily prepared and used as a solid acid in the acetalization of carbonyl compounds at room temperature and under solvent-free conditions. The protection was accomplished in short reaction times and in good to high isolated yields. The cheapness and availability of this reagent, together with the easy procedure and work-up, make this method attractive for organic synthesis.
Keywords: acetalization, acid catalysis, carbonyl compounds, green conditions, protection
Procedia PDF Downloads 316
13447 Spirituality Enhanced with Cognitive-Behavioural Techniques: An Effective Method for Women with Extramarital Infidelity: A Literature Review
Authors: Setareh Yousife
Abstract:
Introduction: Studies suggest that extramarital infidelity (EMI) variants, such as sexual and emotional infidelity, are increasing in marriage relationships. To our knowledge, less is known about which therapies and mental-hygiene factors can more effectively prevent and address this behavior. Spiritual and cognitive-behavioural health have been shown to reduce marital conflict and to increase marital satisfaction and commitment. Objective: This study aims to discuss the effectiveness of spiritual counseling combined with cognitive-behavioural techniques in addressing extramarital infidelity. Method: Descriptive, analytical, and interventional articles indexed in the SID, Noormags, Scopus, Iranmedex, Web of Science, and PubMed databases, and in Google Scholar, were searched. We focused on studies of women with extramarital relationships, including studies restricted to heterosexual married couples, in which spirituality/religion and CBT were used as coping techniques for EMI therapy. Finally, the full text of all eligible articles was obtained and discussed in this review. Results: 25 publications were identified, and their textual analysis was facilitated through four thematic approaches: the nature of EMI in women; the meaning of spirituality in the context of mental health, human behavior, and psychotherapy; spirituality integrated into the cognitive-behavioral approach; and the role of spirituality as a deterrent to EMI. Conclusions: The integration of the findings discussed herein suggests that the application of cognitive and behavioral skills in addressing these kinds of destructive family-based relationships is inevitable. As treatments based on religion/spirituality or cognition/behavior alone do not seem adequately effective in dealing with EMI, the combination of these approaches may lead to higher efficacy in fewer sessions and a shorter time.
Keywords: spirituality, religion, cognitive behavioral therapy, extramarital relation, infidelity
Procedia PDF Downloads 254
13446 Quantitative Evaluation of Mitral Regurgitation by Using Color Doppler Ultrasound
Authors: Shang-Yu Chiang, Yu-Shan Tsai, Shih-Hsien Sung, Chung-Ming Lo
Abstract:
Mitral regurgitation (MR) is a heart disorder in which the mitral valve does not close properly when the heart pumps out blood. MR is the most common form of valvular heart disease in the adult population. The diagnostic echocardiographic finding of MR is straightforward due to well-known clinical evidence. In determining MR severity, quantification of the sonographic findings is useful for clinical decision making. Clinically, the vena contracta is a standard for MR evaluation. The vena contracta is the point in a blood stream where the diameter of the stream is least and the velocity is maximal. Its quantification, i.e. the vena contracta width (VCW) at the mitral valve, provides a numeric measurement for severity assessment. However, manually delineating the VCW may not be accurate enough; the result depends strongly on operator experience. Therefore, this study proposed an automatic method to quantify the VCW to evaluate MR severity. On color Doppler ultrasound, the VCW can be observed where blood flows toward the probe, appearing as a red or yellow area whose brightness represents the flow rate. In the experiment, colors were first transformed into HSV (hue, saturation, and value) to align closely with the way human vision perceives red and yellow. By fitting an ellipse to the high-flow-rate area in the left atrium, the angle between the mitral valve and the ultrasound probe was calculated to obtain the vertical shortest diameter as the VCW. Taking the manual measurement as the standard, the method achieved differences of only 0.02 cm (0.38 vs. 0.36) to 0.03 cm (0.42 vs. 0.45). The results showed that the proposed automatic VCW extraction can be efficient and accurate for clinical use. The process also has the potential to reduce intra- or inter-observer variability in measuring subtle distances.
Keywords: mitral regurgitation, vena contracta, color Doppler, image processing
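As a rough illustration of the color-Doppler idea (not the authors' algorithm, which additionally fits an ellipse and corrects for the probe angle), the sketch below converts RGB pixels to HSV, masks the red-to-yellow hues that indicate flow toward the probe, and takes the narrowest cross-stream extent of the masked region as the VCW. The toy frame, the hue/saturation thresholds, and the `mm_per_pixel` scale are all assumptions.

```python
import colorsys

def flow_mask(rgb_image, sat_min=0.5, val_min=0.5):
    """Mark pixels whose hue lies in the red-to-yellow range (flow toward the probe)."""
    mask = []
    for row in rgb_image:
        mask_row = []
        for (r, g, b) in row:
            h, s, v = colorsys.rgb_to_hsv(r / 255, g / 255, b / 255)
            # hue 0.0 to ~0.17 covers red through yellow on the HSV wheel
            mask_row.append(s >= sat_min and v >= val_min and h <= 0.17)
        mask.append(mask_row)
    return mask

def vena_contracta_width(mask, mm_per_pixel=0.5):
    """Narrowest nonzero cross-stream extent of the flow region, converted to mm."""
    widths = [sum(row) for row in mask if any(row)]
    return min(widths) * mm_per_pixel if widths else 0.0

# Toy 5x6 frame: a regurgitant jet that narrows to 2 pixels at the vena contracta
R, Y, K = (255, 0, 0), (255, 255, 0), (0, 0, 0)   # red, yellow, background
frame = [
    [K, R, R, R, R, K],
    [K, K, R, Y, K, K],   # narrowest cross-section: 2 pixels wide
    [K, R, Y, Y, R, K],
    [K, R, R, R, R, K],
    [K, R, R, Y, R, K],
]
vcw = vena_contracta_width(flow_mask(frame), mm_per_pixel=0.5)
```

With the toy frame above, the narrowest cross-section is 2 pixels, giving a VCW of 1.0 mm at the assumed scale.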
Procedia PDF Downloads 370
13445 Calculation of Electronic Structures of Nickel in Interaction with Hydrogen by Density Functional Theoretical (DFT) Method
Authors: Choukri Lekbir, Mira Mokhtari
Abstract:
Hydrogen-material interactions and mechanisms can be modeled at the nanoscale by quantum methods. In this work, the effect of hydrogen on the electronic properties of a cluster model of nickel has been studied using the density functional theory (DFT) method. Two types of clusters were optimized: nickel and the hydrogen-nickel system. For nickel clusters (n = 1-6) without the presence of hydrogen, three types of electronic structures (neutral, cationic, and anionic) were optimized using three basis-set calculations (B3LYP/LANL2DZ, PW91PW91/DGDZVP2, PBE/DGDZVP2). Comparison of the binding energies and bond lengths of the three structures of the nickel clusters (neutral, cationic, and anionic) obtained with these basis sets shows that the results for the neutral and anionic nickel clusters are in good agreement with experimental results. For the neutral and anionic nickel clusters, comparing the energies and bond lengths obtained with the three basis sets shows that the PBE/DGDZVP2 basis set best matches the experimental results. For the anionic nickel clusters (n = 1-6) in the presence of hydrogen, optimization of the hydrogen-nickel (anionic) structures using the PBE/DGDZVP2 basis set shows that the binding energies and bond lengths increase compared to those obtained for the anionic nickel clusters without hydrogen. This reveals the armor effect exerted by hydrogen on the electronic structure of nickel, which is due to the storing of hydrogen energy within the nickel cluster structures. The comparison between the bond lengths of the two kinds of clusters shows an expansion of the cluster geometry due to the presence of hydrogen.
Keywords: binding energies, bond lengths, density functional theory, geometry optimization, hydrogen energy, nickel cluster
Procedia PDF Downloads 422
13444 Treatment of Interferogram Images of Perturbation Processes in Metallic Samples by an Optical Method
Authors: Daira Radouane, Naim Boudmagh, Hamada Adel
Abstract:
The aim of this work is to use the shearing technique with an image-splitting mechanism: a Wollaston prism. We want to characterize this prism in order to be able to employ it later in a shearing analysis. A Wollaston prism is a prism made of a birefringent material, i.e., one having two refractive indices. The prism is cleaved so as to present the directions associated with these indices on its entrance face. It should be noted that these directions are perpendicular to each other.
Keywords: non-destructive testing, aluminium, interferometry, image processing
Procedia PDF Downloads 331
13443 Visualization Tool for EEG Signal Segmentation
Authors: Sweeti, Anoop Kant Godiyal, Neha Singh, Sneh Anand, B. K. Panigrahi, Jayasree Santhosh
Abstract:
This work develops a tool for the visualization and segmentation of electroencephalograph (EEG) signals based on frequency-domain features. Changes in frequency-domain characteristics are correlated with changes in the mental state of the subject under study. The proposed algorithm represents the changes in mental state using the powers of the different frequency bands, in the form of a segmented EEG signal. Many segmentation algorithms with applications in brain-computer interfaces, epilepsy, and cognition studies have been suggested in the literature and used for data classification, but the proposed method focuses mainly on better presentation of the signal, which makes it a good visualization tool for clinicians. The algorithm performs basic filtering using band-pass and notch filters in the range of 0.1-45 Hz. Advanced filtering is then performed by principal component analysis and a wavelet-transform-based de-noising method. Frequency-domain features are used for segmentation, based on the fact that the spectral power of the different frequency bands describes the mental state of the subject. Two sliding windows are further used for segmentation: one provides the time scale and the other applies the segmentation rule. The segmented data are displayed second by second, successively, with different color codes, and the segment length can be selected as the objective requires. The proposed algorithm has been tested on an EEG data set obtained from the University of California, San Diego's online data repository. The proposed tool gives a better visualization of the signal in the form of segmented epochs of the desired length, representing the power spectrum variation in the data. The algorithm is designed so that it takes the data points with respect to the sampling frequency for each time frame, so it can be extended to real-time visualization with the desired epoch length.
Keywords: de-noising, multi-channel data, PCA, power spectra, segmentation
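A minimal sketch of the band-power segmentation step described above (not the authors' tool: it omits the PCA and wavelet de-noising stages, and the window length, bands, and synthetic signal are invented for illustration). Each sliding window is labeled by the frequency band holding the most spectral power.

```python
import math

def band_power(window, fs, f_lo, f_hi):
    """Power in the [f_lo, f_hi] Hz band via a naive DFT (fine for short windows)."""
    n = len(window)
    power = 0.0
    for k in range(1, n // 2):
        freq = k * fs / n
        if f_lo <= freq <= f_hi:
            re = sum(window[t] * math.cos(-2 * math.pi * k * t / n) for t in range(n))
            im = sum(window[t] * math.sin(-2 * math.pi * k * t / n) for t in range(n))
            power += re * re + im * im
    return power

def segment(signal, fs, win, bands):
    """Label each non-overlapping sliding window by its dominant frequency band."""
    labels = []
    for start in range(0, len(signal) - win + 1, win):
        w = signal[start:start + win]
        labels.append(max(bands, key=lambda name: band_power(w, fs, *bands[name])))
    return labels

# Synthetic EEG-like trace: 1 s of 5 Hz activity followed by 1 s of 20 Hz, fs = 100 Hz
fs, win = 100, 50
signal = [math.sin(2 * math.pi * 5 * t / fs) for t in range(100)] + \
         [math.sin(2 * math.pi * 20 * t / fs) for t in range(100)]
bands = {"theta/alpha": (4.0, 12.0), "beta": (13.0, 30.0)}
labels = segment(signal, fs, win, bands)
```

The window length plays the role of the time-scale window, and the dominant-band rule stands in for the segmentation-rule window; in a real tool the labels would drive the per-second color coding.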
Procedia PDF Downloads 397
13442 Structural Inequality and Precarious Workforce: The Role of Labor Laws in Destabilizing the Labor Force in Iran
Authors: Iman Shabanzadeh
Abstract:
Over the last three decades, the main demands of the Iranian workforce have focused on three areas: the right to a decent wage, the right to organize, and the right to job security. To investigate and analyze this situation, the present study focuses on the component of job security. The purpose of the study is to figure out which mechanisms in Iran's labor law have led to the destabilization and undermining of workers' job security. The research method is descriptive-analytical. To collect information, library and documentary sources on laws related to labor rights in Iran, as well as semi-structured interviews with experts, were used. In the data analysis stage, the qualitative content analysis method was used. Trend analysis of statistics on the labor force situation in Iran over the last three decades shows that the employment structure has faced an increase in the active population; in the last decade, however, a large part of this population has been active mainly in the service sector and in enterprises without formal contracts, so a smaller share of this employment has insurance coverage and a larger share is underemployed. In this regard, the results of this study show that four contexts stand out as the main legal and executive mechanisms of labor instability in Iran: 1) temporaryization of the labor force through differing interpretations of the labor law; 2) labor adjustment in the public sector and the emergence of manpower contracting companies; 3) the cessation of labor law protection for workers in small workshops; and 4) the existence of numerous restrictions on the effective organization of workers.
The theoretical conclusion of this article is that the main root of the challenges of the labor community and the destabilized workforce in Iran is the existence of structural inequalities in the field of labor security, whose traces can be seen in the legal provisions and executive regulations of this field.
Keywords: inequality, precariat, temporaryization, labor force, labor law
Procedia PDF Downloads 61
13441 Human-Machine Cooperation in Facial Comparison Based on Likelihood Scores
Authors: Lanchi Xie, Zhihui Li, Zhigang Li, Guiqiang Wang, Lei Xu, Yuwen Yan
Abstract:
Image-based facial features can be classified into category recognition features and individual recognition features. Current automated face recognition systems extract a specific feature vector of some dimension from a facial image according to their pre-trained neural network. However, to improve the efficiency of parameter calculation, such algorithms generally reduce image detail by pooling. This operation overlooks details to which forensic experts pay close attention. In our experiment, we adopted a variety of face recognition algorithms based on deep learning and compared a large number of naturally collected face images with known frontal ID photos of the same persons. Downscaling and manual handling were performed on the test images. The results showed that facial recognition algorithms based on deep learning detect structural and morphological information and rarely focus on specific markers such as stains and moles. Overall performance, the distributions of genuine and impostor scores, and likelihood ratios were tested to evaluate the accuracy of the biometric systems and the forensic experts. The experiments showed that the biometric systems were skilled at distinguishing category features, while the forensic experts were better at discovering individual features of human faces. In the proposed approach, fusion was performed at the score level. At the specified false accept rate, the framework achieved a lower false reject rate. This paper contributes to improving the interpretability of objective methods of facial comparison and provides a novel method for human-machine collaboration in this field.
Keywords: likelihood ratio, automated facial recognition, facial comparison, biometrics
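A toy sketch of score-level fusion with likelihood ratios, under the (assumed) simplifications that genuine and impostor scores follow Gaussian distributions and that the algorithm's and the examiner's scores are independent; all means and standard deviations below are invented, not taken from the paper.

```python
import math

def gaussian_pdf(x, mean, std):
    """Density of a normal distribution at x."""
    return math.exp(-0.5 * ((x - mean) / std) ** 2) / (std * math.sqrt(2 * math.pi))

def likelihood_ratio(score, genuine, impostor):
    """LR = P(score | same person) / P(score | different persons)."""
    return gaussian_pdf(score, *genuine) / gaussian_pdf(score, *impostor)

def fused_lr(algo_score, examiner_score, algo_model, examiner_model):
    """Score-level fusion: multiply the two LRs, assuming the sources are independent."""
    return (likelihood_ratio(algo_score, *algo_model) *
            likelihood_ratio(examiner_score, *examiner_model))

# Invented (mean, std) score models, as if fitted beforehand to genuine/impostor pairs
algo_model = ((0.80, 0.10), (0.30, 0.15))      # (genuine, impostor) for the algorithm
examiner_model = ((0.70, 0.15), (0.40, 0.15))  # (genuine, impostor) for the examiner

lr = fused_lr(0.75, 0.65, algo_model, examiner_model)
```

An LR above 1 supports the same-source hypothesis; when both sources agree, the fused LR is stronger than either individual one, which is the intuition behind the lower false reject rate at a fixed false accept rate.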
Procedia PDF Downloads 130
13440 Antihyperlipidemia Combination of Simvastatin and Herbal Drink (Conventional Drug Interaction Potential Study and Herbal as Prevention of Adverse Effects in Combination Therapy for Hyperlipidemia)
Authors: Gesti Prastiti, Maylina Adani, Yuyun darma A. N., M. Khilmi F., Yunita Wahyu Pratiwi
Abstract:
Combination therapy may allow interactions between two or more drugs that can produce adverse effects in patients. Simvastatin is an antihyperlipidemic drug; it can interact with drugs that act on cytochrome P450 CYP3A4, because these can interfere with the performance of simvastatin. Flavonoids found in plants can inhibit cytochrome P450 CYP3A4; if taken with simvastatin, they can increase simvastatin levels in the body and increase the potential for simvastatin side effects such as myopathy and rhabdomyolysis. Green tea and mint leaves are herbal medicines with antihyperlipidemic effects. This study aims to determine the potential interaction of simvastatin with herbal drinks (green tea and mint leaves). The research used an experimental post-test-only control design. The test subjects were divided into 5 groups: a normal group, a negative control group, a simvastatin group, a simvastatin plus green tea combination group, and a simvastatin plus mint leaves combination group. The study was conducted over 32 days, and total cholesterol levels were analyzed by the enzymatic colorimetric test method. The average total cholesterol values obtained were: normal group, 65.92 mg/dL; negative control group, 69.86 mg/dL; simvastatin group, 58.96 mg/dL; green tea combination group, 58.96 mg/dL; and mint leaves combination group, 63.68 mg/dL. The conclusion is that combination therapy of simvastatin with herbal drinks has the potential for pharmacodynamic interactions with synergistic, antagonistic, and powerful additive effects, so the combination therapies are no more effective than single-administration simvastatin therapy.
Keywords: hyperlipidemia, simvastatin, herbal drinks, green tea leaves, mint leaves, drug interactions
Procedia PDF Downloads 39513439 Highly Efficient Ca-Doped CuS Counter Electrodes for Quantum Dot Sensitized Solar Cells
Authors: Mohammed Panthakkal Abdul Muthalif, Shanmugasundaram Kanagaraj, Jumi Park, Hangyu Park, Youngson Choe
Abstract:
The present study reports the incorporation of calcium ions into CuS counter electrodes (CEs) in order to modify the photovoltaic performance of quantum dot-sensitized solar cells (QDSSCs). The metal ion-doped CuS thin film was prepared by the chemical bath deposition (CBD) method on an FTO substrate and used directly as a counter electrode for QDSSCs based on TiO₂/CdS/CdSe/ZnS photoanodes. For the Ca-doped CuS thin films, copper nitrate and thioacetamide were used as the cationic and anionic precursors, respectively. Calcium nitrate tetrahydrate was used as the doping material. The surface morphology of the Ca-doped CuS CEs indicates that the fragments are uniformly distributed and the structure is densely packed with high crystallinity. The changes observed in the diffraction patterns suggest that the Ca dopant can introduce increased disorder into the CuS material structure. EDX analysis was employed for elemental identification, and the results confirmed the presence of Cu, S, and Ca on the FTO glass substrate. The photovoltaic current density–voltage characteristics of the Ca-doped CuS CEs show specific improvements in open-circuit voltage (Voc) and short-circuit current density (Jsc). Electrochemical impedance spectroscopy results show that Ca-doped CuS CEs have greater electrocatalytic activity and charge transport capacity than bare CuS. All the experimental results indicate that the 20% Ca-doped CuS CE based QDSSC exhibits a high power conversion efficiency (η) of 4.92%, a short-circuit current density of 15.47 mA cm⁻², an open-circuit photovoltage of 0.611 V, and a fill factor (FF) of 0.521 under illumination of one sun.Keywords: Ca-doped CuS counter electrodes, surface morphology, chemical bath deposition method, electrocatalytic activity
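The reported figures can be cross-checked with the standard efficiency relation η = Jsc · Voc · FF / Pin. The sketch below assumes the usual one-sun input power density of 100 mW cm⁻² (an AM1.5G convention, not stated explicitly in the abstract):

```python
# Cross-check of reported QDSSC parameters using the standard relation
# eta = (Jsc * Voc * FF) / P_in. The one-sun input power density of
# 100 mW/cm^2 is the customary AM1.5G assumption, not a value from the paper.

def efficiency(jsc_ma_cm2: float, voc_v: float, ff: float,
               p_in_mw_cm2: float = 100.0) -> float:
    """Power conversion efficiency in percent."""
    p_out = jsc_ma_cm2 * voc_v * ff  # output power density, mW/cm^2
    return 100.0 * p_out / p_in_mw_cm2

eta = efficiency(15.47, 0.611, 0.521)
print(f"{eta:.2f} %")  # close to the reported 4.92 %
```

The three reported parameters do reproduce the stated 4.92% efficiency under this assumption.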
Procedia PDF Downloads 16413438 Some Issues with Extension of an HPC Cluster
Authors: Pil Seong Park
Abstract:
Homemade HPC clusters are widely used in many small labs because they are easy to build and cost-effective. Even though incremental growth is an advantage of clusters, it inevitably results in heterogeneous systems. Instead of adding new nodes to the cluster, we can extend clusters to include other Internet servers working independently on the same LAN, so that we can make use of their idle times, especially during the night. However, extension across a firewall raises security problems with NFS. In this paper, we propose a method to solve such problems using SSH tunneling, and suggest a modified structure of the cluster that implements it.Keywords: extension of HPC clusters, security, NFS, SSH tunneling
Procedia PDF Downloads 42713437 Performance Comparison of Droop Control Methods for Parallel Inverters in Microgrid
Authors: Ahmed Ismail, Mustafa Baysal
Abstract:
Although the energy supply in the world is still mainly based on fossil fuels, there is a need for alternative energy generation systems that are more economical and environmentally friendly, due to the continuously increasing demand for electric energy and limited power resources and networks. Distributed Energy Resources (DERs) such as fuel cells, wind, and solar power have recently become widespread as alternative generation. In order to solve several problems that might be encountered when integrating DERs into the power system, the microgrid concept has been proposed. A microgrid can operate in both grid-connected and island modes, to the benefit of both the utility and customers. For most distributed energy resources connected in parallel in the LV grid, such as micro-turbines, wind plants, fuel cells, and PV cells, electrical power is generated as direct current (DC) and converted to alternating current (AC) by inverters. Inverters are therefore primary components in a microgrid. There are many control techniques for parallel inverters to manage active and reactive sharing of the loads, some of which are based on the droop method. In the literature, studies usually focus on improving the transient performance of inverters. In this study, the performance of two different controllers based on the droop control method is compared for inverters operated in parallel without any communication feedback. To this end, a microgrid is designed in which inverters are controlled by a conventional droop controller and by a modified droop controller. The modified controller is obtained by adding a PID controller to the conventional droop control. The active and reactive power sharing performance and the voltage and frequency responses of these control methods are measured in several operational cases. The study cases have been simulated in MATLAB/Simulink.Keywords: active and reactive power sharing, distributed generation, droop control, microgrid
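The conventional droop laws underlying both controllers can be sketched numerically. This is a minimal illustration of the P-f / Q-V droop relations and the resulting steady-state load sharing; the nominal values and droop gains are illustrative assumptions, not parameters from the study:

```python
# Minimal sketch of the conventional P-f / Q-V droop laws used for
# communication-free load sharing between parallel inverters.
# Nominal 50 Hz / 230 V and the droop gains are illustrative assumptions.

F_NOM, V_NOM = 50.0, 230.0  # nominal frequency (Hz) and voltage (V)

def droop_setpoints(p_kw, q_kvar, m=0.01, n=0.05):
    """Return (frequency, voltage) commanded by the droop controller.
    m: frequency droop gain (Hz/kW); n: voltage droop gain (V/kvar)."""
    return F_NOM - m * p_kw, V_NOM - n * q_kvar

def shared_powers(p_total_kw, m1, m2):
    """In steady state both inverters settle at a common frequency, so
    active powers split inversely proportional to the droop gains."""
    p1 = p_total_kw * m2 / (m1 + m2)
    return p1, p_total_kw - p1

p1, p2 = shared_powers(10.0, m1=0.01, m2=0.02)
print(p1, p2)  # the inverter with the smaller droop gain takes more load
```

The PID-modified controller of the study would act on the same droop errors to improve the transient response; the static sharing rule above is common to both.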
Procedia PDF Downloads 59213436 Clustering-Based Threshold Model for Condition Rating of Concrete Bridge Decks
Authors: M. Alsharqawi, T. Zayed, S. Abu Dabous
Abstract:
To ensure the safety and serviceability of bridge infrastructure, accurate condition assessment and rating methods are needed to provide a basis for bridge Maintenance, Repair and Replacement (MRR) decisions. In North America, the common practice to assess the condition of bridges is visual inspection. This practice is limited to detecting surface defects and external flaws. Further, the thresholds that define the severity of bridge deterioration are selected arbitrarily. The current research discusses the main deteriorations and defects identified during visual inspection and Non-Destructive Evaluation (NDE). NDE techniques are becoming popular in augmenting the visual examination during inspection to detect subsurface defects. Quality inspection data and accurate condition assessment and rating are the basis for determining appropriate MRR decisions. Thus, in this paper, a novel method for bridge condition assessment using the Quality Function Deployment (QFD) theory is utilized. The QFD model is designed to provide an integrated condition by evaluating both the surface and subsurface defects of concrete bridges. Moreover, an integrated condition rating index with four thresholds is developed based on the QFD condition assessment model, using the K-means clustering technique. Twenty case studies are analyzed by applying the QFD model and implementing the developed rating index. The results from the analyzed case studies show that the proposed threshold model produces robust MRR recommendations consistent with the decisions and recommendations made by bridge managers on these projects. The proposed method is expected to advance the state of the art of bridge condition assessment and rating.Keywords: concrete bridge decks, condition assessment and rating, quality function deployment, k-means clustering technique
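The threshold-derivation step described above can be sketched with one-dimensional k-means: cluster the integrated QFD condition scores into four groups and take the midpoints between adjacent cluster centers as rating thresholds. The scores below are made-up stand-ins for the paper's twenty case studies:

```python
# Illustrative sketch: deriving four-level condition-rating thresholds
# from integrated condition scores via 1-D k-means (Lloyd's algorithm).
# The score values are invented for illustration only.

def kmeans_1d(xs, k, iters=100):
    xs = sorted(xs)
    # initialize centers spread across the data range
    centers = [xs[int(i * (len(xs) - 1) / (k - 1))] for i in range(k)]
    for _ in range(iters):
        groups = [[] for _ in range(k)]
        for x in xs:
            j = min(range(k), key=lambda c: abs(x - centers[c]))
            groups[j].append(x)
        new = [sum(g) / len(g) if g else centers[j]
               for j, g in enumerate(groups)]
        if new == centers:  # converged
            break
        centers = new
    return sorted(centers)

scores = [12, 15, 18, 22, 35, 38, 41, 44, 55, 58,
          61, 63, 66, 70, 82, 85, 88, 90, 93, 96]
centers = kmeans_1d(scores, k=4)
# rating thresholds = midpoints between adjacent cluster centers
thresholds = [(a + b) / 2 for a, b in zip(centers, centers[1:])]
print(thresholds)
```

Scores falling between two thresholds would then map to one of the four rating levels.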
Procedia PDF Downloads 22413435 A Study on the Impact of Covid-19 on Primary Healthcare Workers in Ekiti State, South-West Nigeria
Authors: Adeyinka Adeniran, Omowunmi Bakare, Esther Oluwole, Florence Chieme, Temitope Durojaiye, Modupe Akinyinka, Omobola Ojo, Babatunde Olujobi, Marcus Ilesanmi, Akintunde Ogunsakin
Abstract:
Introduction: Globally, COVID-19 has greatly impacted the human race physically, socially, mentally, and economically. However, healthcare workers seemed to bear the greatest impact. The study, therefore, sought to assess the impact of COVID-19 on primary healthcare workers in Ekiti, South-west Nigeria. Methods: The study was a cross-sectional descriptive study using a quantitative data collection method involving 716 primary healthcare workers in Ekiti state. Respondents were selected using an online convenience sampling method via their social media platforms. Data was collected, collated, and analyzed using SPSS version 25 software and presented as frequency tables, means, and standard deviations. Bivariate and multivariate analyses were conducted using a t-test, and the level of statistical significance was set at p<0.05. Results: Less than half (47.1%) of respondents were in the 41-50 age group, with a mean age of 44.4 ± 6.4 SD. A majority (89.4%) were female, and almost all (96.2%) were married. More than 90% had heard of coronavirus, and 85.8% had to spend more money on activities of daily living such as transportation (90.1%), groceries (80.6%), assisting relations (95.8%), and sanitary measures (disinfection) at home (95.0%). COVID-19 had a substantial negative impact on about 89.7% of healthcare workers, with a mean score of 22 ± 4.8. Conclusion: COVID-19 negatively impacted the daily living and professional duties of primary healthcare workers, which was reflected in their psychological, physical, social, and economic well-being. Disease outbreaks are unlikely to disappear in the near future. Hence, global proactive interventions and homegrown measures should be adopted to protect healthcare workers and save lives.Keywords: Covid-19, health workforce, primary health care, health systems, depression
Procedia PDF Downloads 8413434 Using the ISO 9705 Room Corner Test for Smoke Toxicity Quantification of Polyurethane
Authors: Gabrielle Peck, Ryan Hayes
Abstract:
Polyurethane (PU) foam is typically sold as acoustic foam that is often used as sound insulation in settings such as night clubs and bars. As a construction product, PU is tested by being glued to the walls and ceiling of the ISO 9705 room corner test room. However, when heat is applied to PU foam, it melts and burns as a pool fire because it is a thermoplastic. The current test layout is unable to accurately measure mass loss and does not allow the material to burn as a pool fire without seeping through the test room floor. The lack of mass loss measurement means gas yields pertaining to smoke toxicity analysis cannot be calculated, which makes data comparisons with any other material or test method difficult. Additionally, the heat release measurements are not representative of the actual heat released, as a lot of the material seeps through the floor (when a tray to catch the melted material is not used). This research aimed to modify the ISO 9705 test to provide the ability to measure mass loss, allowing better calculation of gas yields and understanding of decomposition. It also aimed to accurately measure smoke toxicity in both the doorway and the duct and enable dilution factors to be calculated. Finally, the study aimed to examine whether doubling the fuel loading would force under-ventilated flaming. The test layout was modified to be a combination of the SBI (single burning item) test set up inside the ISO 9705 test room. Polyurethane was tested in two different ways with the aim of altering the ventilation condition of the tests. Test one was conducted using one SBI test rig, aiming for well-ventilated flaming. Test two was conducted using two SBI rigs facing each other inside the test room (doubling the fuel loading), aiming for under-ventilated flaming.
The two different configurations used were successful in achieving both well-ventilated flaming and under-ventilated flaming, shown by the measured equivalence ratios (measured using a phi meter designed and created for these experiments). The findings show that doubling the fuel loading will successfully force under-ventilated flaming conditions to be achieved. This method can therefore be used when trying to replicate post-flashover conditions in future ISO 9705 room corner tests. The radiative heat generated by the two SBI rigs facing each other facilitated a much higher overall heat release resulting in a more severe fire. The method successfully allowed for accurate measurement of smoke toxicity produced from the PU foam in terms of simple gases such as oxygen depletion, CO and CO2. Overall, the proposed test modifications improve the ability to measure the smoke toxicity of materials in different fire conditions on a large-scale.Keywords: flammability, ISO9705, large-scale testing, polyurethane, smoke toxicity
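The equivalence-ratio criterion that distinguishes the two ventilation regimes can be sketched as follows. The flow rates and stoichiometric ratio below are illustrative assumptions, not values measured by the phi meter in the study:

```python
# Hedged sketch of the global equivalence ratio estimated by a phi meter:
# phi = (fuel/oxygen supplied) normalized by the stoichiometric
# fuel/oxygen ratio. phi < 1 indicates well-ventilated flaming,
# phi > 1 under-ventilated flaming. All numbers are illustrative.

def equivalence_ratio(fuel_rate, o2_rate, stoich_fuel_per_o2):
    """phi = (m_fuel / m_O2) / (m_fuel / m_O2)_stoichiometric."""
    return (fuel_rate / o2_rate) / stoich_fuel_per_o2

# doubling the fuel load at a fixed air supply doubles phi:
phi_one_rig = equivalence_ratio(1.0, 10.0, 0.25)   # one SBI rig
phi_two_rigs = equivalence_ratio(2.0, 10.0, 0.25)  # two rigs, doubled fuel
print(phi_one_rig, phi_two_rigs)
```

This makes explicit why doubling the fuel loading at a fixed ventilation opening pushes the fire toward under-ventilated conditions.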
Procedia PDF Downloads 7613433 Machine Learning Techniques in Seismic Risk Assessment of Structures
Authors: Farid Khosravikia, Patricia Clayton
Abstract:
The main objective of this work is to evaluate the advantages and disadvantages of various machine learning techniques in two key steps of seismic hazard and risk assessment of different types of structures. The first step is the development of ground-motion models, which are used for forecasting ground-motion intensity measures (IMs) given source characteristics, source-to-site distance, and local site conditions for future events. IMs such as peak ground acceleration and velocity (PGA and PGV, respectively), as well as 5% damped elastic pseudospectral accelerations at different periods (PSA), are indicators of the strength of shaking at the ground surface. Typically, linear regression-based models, with pre-defined equations and coefficients, are used in ground motion prediction. However, due to the restrictions of linear regression methods, such models may not capture more complex nonlinear behaviors that exist in the data. Thus, this study comparatively investigates the potential benefits of employing other machine learning techniques as statistical methods in ground motion prediction, such as Artificial Neural Networks, Random Forests, and Support Vector Machines. The results indicate the algorithms satisfy some physically sound characteristics, such as magnitude scaling and distance dependency, without requiring pre-defined equations or coefficients. Moreover, it is shown that, when sufficient data is available, all the alternative algorithms tend to provide more accurate estimates compared to the conventional linear regression-based method, and in particular, Random Forest outperforms the other algorithms. However, the conventional method is a better tool when limited data is available. Second, it is investigated how machine learning techniques could be beneficial for developing probabilistic seismic demand models (PSDMs), which provide the relationship between the structural demand responses (e.g., component deformations, accelerations, internal forces, etc.)
and the ground motion IMs. In the risk framework, such models are used to develop fragility curves estimating the probability of exceeding pre-defined damage limit states, and therefore control the reliability of the predictions in the risk assessment. In this study, machine learning algorithms such as artificial neural networks, random forests, and support vector machines are adopted and trained on the demand parameters to derive PSDMs. It is observed that such models can provide more accurate predictions in a relatively shorter amount of time compared to conventional methods. Moreover, they can be used for sensitivity analysis of fragility curves with respect to many modeling parameters without necessarily requiring more intensive numerical response-history analysis.Keywords: artificial neural network, machine learning, random forest, seismic risk analysis, seismic hazard analysis, support vector machine
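The "conventional" baseline in the comparison above can be sketched concretely: a fixed-form ground-motion model such as ln(PGA) = c0 + c1·M + c2·ln(R), fitted by ordinary least squares. The functional form and the synthetic records below are generic illustrations, not the study's data or model; the ML alternatives (ANN, random forest, SVM) would be trained on the same (M, R) → ln(PGA) pairs without the pre-defined equation:

```python
# Sketch of a fixed-form ground-motion model ln(PGA) = c0 + c1*M + c2*ln(R)
# fitted by ordinary least squares (normal equations). Synthetic,
# noise-free records generated from known coefficients for illustration.
import math

def solve3(a, b):
    """Solve a 3x3 linear system by Gaussian elimination with pivoting."""
    n = 3
    m = [row[:] + [b[i]] for i, row in enumerate(a)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(m[r][col]))
        m[col], m[piv] = m[piv], m[col]
        for r in range(col + 1, n):
            f = m[r][col] / m[col][col]
            m[r] = [x - f * y for x, y in zip(m[r], m[col])]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (m[r][n] - sum(m[r][c] * x[c] for c in range(r + 1, n))) / m[r][r]
    return x

def fit_gmpe(records):
    """records: list of (magnitude M, distance R in km, PGA in g)."""
    rows = [(1.0, M, math.log(R)) for M, R, _ in records]
    y = [math.log(pga) for _, _, pga in records]
    ata = [[sum(r[i] * r[j] for r in rows) for j in range(3)] for i in range(3)]
    atb = [sum(r[i] * yi for r, yi in zip(rows, y)) for i in range(3)]
    return solve3(ata, atb)

# synthetic data generated from known coefficients (-2.0, 1.0, -1.2):
data = [(M, R, math.exp(-2.0 + 1.0 * M - 1.2 * math.log(R)))
        for M in (4.0, 5.0, 6.0, 7.0) for R in (10.0, 30.0, 100.0)]
c0, c1, c2 = fit_gmpe(data)
print(round(c0, 3), round(c1, 3), round(c2, 3))  # recovers -2.0, 1.0, -1.2
```

The restriction the abstract points to is visible here: whatever nonlinearity the data contains beyond this pre-defined form cannot be captured, which is where tree ensembles and neural networks gain their advantage given enough data.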
Procedia PDF Downloads 10613432 A Study on Factors Affecting (Building Information Modelling) BIM Implementation in European Renovation Projects
Authors: Fatemeh Daneshvartarigh
Abstract:
New technologies and applications have radically altered construction techniques in recent years. In order to anticipate how a building will act, perform, and appear, these technologies encompass a wide range of visualization, simulation, and analytic tools. These new technologies and applications have a considerable impact on completing construction projects in today's architecture, engineering, and construction (AEC) industries. The rate of change in BIM-related topics differs worldwide and depends on many factors, e.g., the national policies of each country. Therefore, there is a need for comprehensive research focused on a specific area with common characteristics. One of the necessary measures to increase the use of this new approach is to examine the challenges and obstacles facing it. In this research, based on the Delphi method, the background and related literature are first reviewed. Then, using the knowledge obtained from the literature, a primary questionnaire is generated and filled in by experts selected using snowball sampling. It covers the experts' attitudes towards implementing BIM in renovation projects and their view of the benefits and obstacles in this regard. By analyzing the primary questionnaire, a second group of experts is selected among the participants to be interviewed. The results are analyzed using thematic analysis. Six themes are obtained: management support, staff resistance, client willingness, cost of software and implementation, difficulty of implementation, and other reasons. Then a final questionnaire is generated from the themes and filled in by the same group of experts. The result is analyzed by the Fuzzy Delphi method, showing the exact ranking of the obtained themes.
The final results show that management support, staff resistance, and client willingness are the most critical barriers to BIM usage in renovation projects.Keywords: building information modeling, BIM, BIM implementation, BIM barriers, BIM in renovation
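The Fuzzy Delphi ranking step can be sketched as follows: each theme's expert ratings are aggregated into a triangular fuzzy number (min, geometric mean, max) and defuzzified by the simple average. The ratings below are invented for illustration (chosen so the top three mirror the abstract's result); the actual aggregation and defuzzification variant used in the study may differ:

```python
# Illustrative Fuzzy Delphi sketch: aggregate each theme's expert ratings
# into a triangular fuzzy number (min, geometric mean, max), defuzzify
# with (l + m + u) / 3, and rank. All ratings are hypothetical.
import math

def defuzzify(ratings):
    l, u = min(ratings), max(ratings)
    g = math.prod(ratings) ** (1.0 / len(ratings))  # geometric mean
    return (l + g + u) / 3.0

expert_ratings = {  # 1 = unimportant ... 5 = critical (hypothetical)
    "management support": [5, 5, 4, 5, 4],
    "staff resistance":   [4, 5, 4, 4, 5],
    "client willingness": [4, 4, 5, 4, 4],
    "software cost":      [3, 4, 3, 3, 4],
    "implementation difficulty": [3, 3, 4, 3, 3],
}
ranked = sorted(expert_ratings,
                key=lambda k: defuzzify(expert_ratings[k]), reverse=True)
print(ranked[:3])  # the three highest-ranked barriers
```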
Procedia PDF Downloads 16713431 Feature Evaluation Based on Random Subspace and Multiple-K Ensemble
Authors: Jaehong Yu, Seoung Bum Kim
Abstract:
Clustering analysis can facilitate the extraction of intrinsic patterns in a dataset and reveal its natural groupings without requiring class information. For effective clustering analysis in high-dimensional datasets, unsupervised dimensionality reduction is an important task. Unsupervised dimensionality reduction can generally be achieved by feature extraction or feature selection. In many situations, feature selection methods are more appropriate than feature extraction methods because of their clear interpretation with respect to the original features. Unsupervised feature selection can be categorized into feature subset selection and feature ranking methods; we focus on unsupervised feature ranking methods, which evaluate features based on their importance scores. Recently, several unsupervised feature ranking methods were developed based on ensemble approaches to achieve higher accuracy and stability. However, most of the ensemble-based feature ranking methods require the true number of clusters. Furthermore, these algorithms evaluate feature importance depending on the ensemble clustering solution, and they produce undesirable evaluation results if the clustering solutions are inaccurate. To address these limitations, we propose an ensemble-based feature ranking method with random subspace and multiple-k ensemble (FRRM). The proposed FRRM algorithm evaluates the importance of each feature with the random subspace ensemble, and all evaluation results are combined into ensemble importance scores. Moreover, FRRM does not require the determination of the true number of clusters in advance, through the use of the multiple-k ensemble idea. Experiments on various benchmark datasets were conducted to examine the properties of the proposed FRRM algorithm and to compare its performance with that of existing feature ranking methods.
The experimental results demonstrated that the proposed FRRM outperformed the competitors.Keywords: clustering analysis, multiple-k ensemble, random subspace-based feature evaluation, unsupervised feature ranking
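The FRRM idea can be illustrated with a toy sketch: each ensemble member clusters the data in a random feature subspace with a randomly chosen k, scores every feature in that subspace by its between-cluster variance ratio, and the per-feature scores are averaged. The scoring rule, data, and parameters here are illustrative assumptions, not the paper's exact formulation:

```python
# Toy sketch of the FRRM idea: random subspaces + multiple-k clustering
# ensemble, with features scored by between-cluster variance ratio.
# The scoring rule and all data are illustrative, not the paper's.
import random

def kmeans(points, k, iters=50):
    centers = random.sample(points, k)
    for _ in range(iters):
        labels = [min(range(k), key=lambda c: sum(
            (p[d] - centers[c][d]) ** 2 for d in range(len(p))))
            for p in points]
        for c in range(k):
            members = [p for p, l in zip(points, labels) if l == c]
            if members:
                centers[c] = tuple(sum(v) / len(members)
                                   for v in zip(*members))
    return labels

def variance_ratio(values, labels):
    """Between-cluster variance / total variance of one feature."""
    mean = sum(values) / len(values)
    total = sum((v - mean) ** 2 for v in values)
    between = 0.0
    for c in set(labels):
        grp = [v for v, l in zip(values, labels) if l == c]
        gm = sum(grp) / len(grp)
        between += len(grp) * (gm - mean) ** 2
    return between / total if total else 0.0

def frrm_scores(data, n_features, runs=30):
    random.seed(0)
    scores, counts = [0.0] * n_features, [0] * n_features
    for _ in range(runs):
        subspace = random.sample(range(n_features), 2)  # random subspace
        k = random.choice([2, 3, 4])                    # multiple-k ensemble
        proj = [tuple(p[d] for d in subspace) for p in data]
        labels = kmeans(proj, k)
        for i, d in enumerate(subspace):
            scores[d] += variance_ratio([p[i] for p in proj], labels)
            counts[d] += 1
    return [s / c if c else 0.0 for s, c in zip(scores, counts)]

random.seed(0)
# feature 0 carries two clear clusters; features 1 and 2 are pure noise
data = [(random.gauss(0 if i < 15 else 10, 0.5),
         random.random(), random.random()) for i in range(30)]
scores = frrm_scores(data, 3)
print(scores)  # feature 0 should receive the highest importance score
```

Note how no single true k is ever fixed: each member draws its own k, and averaging over members is what removes the dependence on knowing the number of clusters.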
Procedia PDF Downloads 33913430 Fluorescence-Based Biosensor for Dopamine Detection Using Quantum Dots
Authors: Sylwia Krawiec, Joanna Cabaj, Karol Malecha
Abstract:
Nowadays, progress in the field of analytical methods is of great interest for reliable biological research and medical diagnostics. Classical techniques of chemical analysis, despite many advantages, do not permit immediate results or automation of measurements. Chemical sensors have displaced the conventional analytical methods - sensors combine precision, sensitivity, fast response, and the possibility of continuous monitoring. A biosensor is a chemical sensor that, in addition to a transducer, also possesses a biologically active material, which is the basis for the detection of specific chemicals in the sample. Each biosensor device mainly consists of two elements: a sensitive element, where receptor-analyte recognition takes place, and a transducer element, which receives the signal and converts it into a measurable signal. Through these two elements, biosensors can be divided into two categories: by the recognition element (e.g., immunosensors) and by the transducer (e.g., optical sensors). The operation of an optical sensor is based on measuring quantitative changes in the parameters characterizing light radiation. The most often analyzed parameters include amplitude (intensity), frequency, and polarization. In a direct method, changes in the optical properties of a compound that reacts with the biological material coated on the sensor are analyzed; in an indirect method, indicators are used, which change their optical properties due to the transformation of the tested species. The most commonly used dyes in this method are small molecules with an aromatic ring, such as rhodamine; fluorescent proteins, for example green fluorescent protein (GFP); or nanoparticles such as quantum dots (QDs). Quantum dots have, in comparison with organic dyes, much better photoluminescent properties, better bioavailability, and chemical inertness. They are semiconductor nanocrystals 2-10 nm in size.
This very limited number of atoms and the ‘nano’ size give QDs their highly fluorescent properties. Rapid and sensitive detection of dopamine is extremely important in modern medicine. Dopamine is a very important neurotransmitter that occurs mainly in the brain and central nervous system of mammals. Dopamine is responsible for transmitting information related to movement through the nervous system and plays an important role in learning and memory processes. Detection of dopamine is significant for diseases associated with the central nervous system, such as Parkinson's disease or schizophrenia. The developed optical biosensor for dopamine detection uses graphene quantum dots (GQDs). In such a sensor, dopamine molecules coat the GQD surface; as a result, fluorescence quenching occurs due to Fluorescence Resonance Energy Transfer (FRET). The changes in fluorescence correspond to specific concentrations of the neurotransmitter in the tested sample, so it is possible to accurately determine the concentration of dopamine in the sample.Keywords: biosensor, dopamine, fluorescence, quantum dots
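Turning the quenching readout into a concentration is commonly done with a Stern-Volmer-type calibration, F0/F = 1 + Ksv·[Q]; a minimal sketch of that conversion follows. The quenching constant and intensities below are made-up illustrative values, and the actual sensor's calibration curve may deviate from this linear form:

```python
# Minimal sketch of converting GQD fluorescence quenching into a dopamine
# concentration via the Stern-Volmer relation F0/F = 1 + Ksv*[Q].
# Ksv and the intensity values are hypothetical, not from the paper.

def dopamine_concentration(f0, f, ksv):
    """Quencher concentration [Q] from unquenched (f0) and quenched (f)
    fluorescence intensity; ksv in 1/(concentration unit)."""
    return (f0 / f - 1.0) / ksv

KSV = 2.0e4  # L/mol, hypothetical quenching constant
c = dopamine_concentration(f0=1000.0, f=800.0, ksv=KSV)
print(f"{c:.2e} mol/L")  # 20 % quenching -> 1.25e-05 mol/L here
```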
Procedia PDF Downloads 36413429 An Unsupervised Domain-Knowledge Discovery Framework for Fake News Detection
Authors: Yulan Wu
Abstract:
With the rapid development of social media, the issue of fake news has gained considerable prominence, drawing the attention of both the public and governments. The widespread dissemination of false information poses a tangible threat across multiple domains of society, including politics, economy, and health. However, much research has concentrated on supervised models trained within specific domains, and their effectiveness diminishes when applied to identify fake news across multiple domains. To solve this problem, some approaches based on domain labels have been proposed. By assigning news items to their specific domain in advance, a classifier specialized to the corresponding field may identify fake news more accurately. However, these approaches disregard the fact that news records can pertain to multiple domains, resulting in a significant loss of valuable information. In addition, the datasets used for training must all be domain-labeled, which creates unnecessary complexity. To solve these problems, an unsupervised domain knowledge discovery framework for fake news detection is proposed. First, to effectively retain the multidomain knowledge of the text, a low-dimensional vector capturing the domain embedding of each news text is generated. Subsequently, a feature extraction module utilizing the domain embeddings discovered without supervision is used to extract the comprehensive features of the news. Finally, a classifier is employed to determine the authenticity of the news. To verify the proposed framework, a test is conducted on existing widely used datasets, and the experimental results demonstrate that this method is able to improve detection performance for fake news across multiple domains. Moreover, even on datasets that lack domain labels, this method can still effectively transfer domain knowledge, which can reduce the time consumed by tagging without sacrificing detection accuracy.Keywords: fake news, deep learning, natural language processing, multiple domains
Procedia PDF Downloads 9713428 Grey Relational Analysis Coupled with Taguchi Method for Process Parameter Optimization of Friction Stir Welding on 6061 AA
Authors: Eyob Messele Sefene, Atinkut Atinafu Yilma
Abstract:
The highest strength-to-weight ratio criterion has attracted increasing interest in virtually all areas where weight reduction is indispensable. One of the recent advances in manufacturing that serves this aim is friction stir welding (FSW). The process is widely used for joining similar and dissimilar non-ferrous materials. In FSW, the mechanical properties of the weld joints are governed by properly selected process parameters. This paper presents findings on the optimum process parameters sought to attain enhanced mechanical properties of the weld joint. The experiment was conducted on a 5 mm thick 6061 aluminum alloy sheet with a butt joint configuration. The process parameters considered were rotational speed, traverse speed (feed rate), axial force, dwell time, tool material, and tool profile. The process parameters were optimized using a mixed L18 orthogonal array and the grey relational analysis method with larger-is-better quality characteristics. The mechanical properties of the weld joint were examined through tensile, hardness, and liquid penetrant tests at ambient temperature. ANOVA was conducted in order to identify the significant process parameters. This research shows that dwell time, rotational speed, tool shape, and traverse speed are significant, with a joint efficiency of about 82.58%. Nine confirmatory tests were conducted, and the results indicate that the average values of the grey relational grade fall within the 99% confidence interval. Hence, the experiment is shown to be reliable.Keywords: friction stir welding, optimization, 6061 AA, Taguchi
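The grey relational analysis step can be sketched numerically: responses are normalized larger-is-better, grey relational coefficients are computed against the ideal sequence, and each run's grade is the average coefficient. The response values below are illustrative, not the study's measurements, and the distinguishing coefficient ζ = 0.5 is the conventional default:

```python
# Sketch of grey relational analysis with larger-is-better normalization.
# Responses for four hypothetical FSW runs; not the paper's measurements.

def gra_grades(runs, zeta=0.5):
    """runs: list of response vectors (all larger-is-better)."""
    cols = list(zip(*runs))
    norm = [[(x - min(c)) / (max(c) - min(c)) for x, c in zip(run, cols)]
            for run in runs]
    dev = [[1.0 - x for x in run] for run in norm]  # deviation from ideal
    dmin = min(min(r) for r in dev)
    dmax = max(max(r) for r in dev)
    coeff = [[(dmin + zeta * dmax) / (d + zeta * dmax) for d in r]
             for r in dev]
    return [sum(r) / len(r) for r in coeff]  # grade = mean coefficient

# (tensile strength MPa, hardness HV) for four hypothetical runs
responses = [(180.0, 60.0), (210.0, 68.0), (195.0, 72.0), (165.0, 55.0)]
grades = gra_grades(responses)
best = max(range(len(grades)), key=grades.__getitem__)
print(best, [round(g, 3) for g in grades])
```

Ranking the L18 runs by grade, and averaging grades per parameter level, is what yields the optimum parameter combination in the Taguchi-GRA procedure.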
Procedia PDF Downloads 10113427 Stochastic Nuisance Flood Risk for Coastal Areas
Authors: Eva L. Suarez, Daniel E. Meeroff, Yan Yong
Abstract:
The U.S. Federal Emergency Management Agency (FEMA) developed flood maps based on experts' experience and estimates of the probability of flooding. Current flood-risk models evaluate flood risk with regional and subjective measures, without accounting for the impact of torrential rain and nuisance flooding at the neighborhood level. Nuisance flooding occurs in small areas of the community, where a few streets or blocks are routinely impacted. This type of flooding event occurs when a torrential rainstorm combined with high tide and sea level rise temporarily exceeds a given threshold. In South Florida, this threshold is 1.7 ft above Mean Higher High Water (MHHW). The National Weather Service defines torrential rain as rain falling at a rate greater than 0.3 inches per hour or three inches in a single day. Data from the Florida Climate Center, 1970 to 2020, show 371 events with more than 3 inches of rain in a day over 612 months. The purpose of this research is to develop a data-driven method to determine comprehensive analytical damage-avoidance criteria that account for nuisance flood events at the single-family home level. The method developed uses the Failure Mode and Effect Analysis (FMEA) method from the American Society for Quality (ASQ) to estimate the Damage Avoidance (DA) preparation for a 1-day 100-year storm. The Consequence of Nuisance Flooding (CoNF) is estimated from community mitigation efforts to prevent nuisance flooding damage. The Probability of Nuisance Flooding (PoNF) is derived from the frequency and duration of torrential rainfall causing delays and community disruptions to daily transportation, human illnesses, and property damage. Urbanization and population changes are related to the U.S. Census Bureau's annual population estimates.
Data collected by the United States Department of Agriculture (USDA) Natural Resources Conservation Service's National Resources Inventory (NRI) and locally by the South Florida Water Management District (SFWMD) track development and land use/land cover changes over time. The intent is to include temporal trends in population density growth and their impact on land development. Results from this investigation provide the risk of nuisance flooding as a function of CoNF and PoNF for coastal areas of South Florida. The data-based criterion provides awareness to local municipalities on their flood-risk assessment and gives insight into flood management actions and watershed development.Keywords: flood risk, nuisance flooding, urban flooding, FMEA
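The probability-times-consequence combination described above can be sketched as follows. The Poisson assumption for event arrivals and the consequence score are illustrative additions made here, not choices stated in the abstract; only the 371 events over 612 months figure is quoted from it:

```python
# Hedged sketch of a nuisance-flood risk index: a monthly probability of
# nuisance flooding (PoNF) from the torrential-rain event rate, combined
# with a consequence score (CoNF) as risk = PoNF * CoNF. The Poisson
# model and the CoNF value are assumptions for illustration.
import math

def ponf_per_month(n_events, n_months):
    """P(at least one torrential-rain event in a month), assuming the
    events follow a Poisson process (an assumption made here)."""
    rate = n_events / n_months
    return 1.0 - math.exp(-rate)

def nuisance_flood_risk(ponf, conf):
    """Simple risk index: probability times consequence score (CoNF,
    here on a hypothetical 0-1 mitigation-adjusted scale)."""
    return ponf * conf

p = ponf_per_month(371, 612)             # Florida Climate Center, 1970-2020
risk = nuisance_flood_risk(p, conf=0.6)  # hypothetical consequence score
print(round(p, 3), round(risk, 3))
```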
Procedia PDF Downloads 10013426 Characterization of Thin Woven Composites Used in Printed Circuit Boards by Combining Numerical and Experimental Approaches
Authors: Gautier Girard, Marion Martiny, Sebastien Mercier, Mohamad Jrad, Mohamed-Slim Bahi, Laurent Bodin, Francois Lechleiter, David Nevo, Sophie Dareys
Abstract:
Reliability of electronic devices has always been of the highest interest for Aero-MIL and space applications. In any electronic device, the Printed Circuit Board (PCB), providing interconnection between components, is key to reliability. During the last decades, PCB technologies evolved to sustain and/or fulfill increased original equipment manufacturer requirements and specifications: higher densities and better performances, faster time to market and longer lifetime, newer materials and mixed buildups. From the very beginning of the PCB industry until recently, qualification, experiments, and trial and error were the most popular methods to assess system (PCB) reliability. Nowadays, OEMs, PCB manufacturers, and scientists are working together in a close relationship in order to develop predictive models for PCB reliability and lifetime. To achieve that goal, it is fundamental to characterize the base materials (laminates, electrolytic copper, …) precisely, in order to understand failure mechanisms and simulate PCB aging under environmental constraints, for example by means of the finite element method. The laminates are woven composites and thus have an orthotropic behaviour. The in-plane properties can be measured by combining classical uniaxial testing and digital image correlation. Nevertheless, the out-of-plane properties cannot be evaluated in this way due to the thickness of the laminate (a few hundred microns). It has to be noted that knowledge of the out-of-plane properties is fundamental to investigate the lifetime of high density printed circuit boards. A homogenization method combining analytical and numerical approaches has been developed in order to obtain the complete elastic orthotropic behaviour of a woven composite from its precise 3D internal structure and its experimentally measured in-plane elastic properties. Since the mechanical properties of the resin surrounding the fibres are unknown, an inverse method is proposed to estimate them.
The methodology has been applied to one laminate used in hyperfrequency spatial applications in order to obtain its elastic orthotropic behaviour at different temperatures in the range [-55°C; +125°C]. Next, numerical simulations of a plated through hole in a double-sided PCB are performed. Results show the major importance of the out-of-plane properties, and of their temperature dependency, for the lifetime of a printed circuit board. Acknowledgements: The support of the French ANR agency through the Labcom program ANR-14-LAB7-0003-01, and the support of CNES, Thales Alenia Space, and Cimulec, are acknowledged.Keywords: homogenization, orthotropic behaviour, printed circuit board, woven composites
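As a simple point of orientation for the homogenization discussed above (not the paper's combined analytical-numerical scheme), the elementary Voigt and Reuss mixture rules bracket the homogenized modulus of a fibre/resin composite. The glass-fibre and resin moduli and the volume fraction below are typical textbook numbers, not the inversely identified properties of the study:

```python
# Elementary bounds on the homogenized modulus of a fibre/resin composite:
# Voigt (uniform strain, upper bound) and Reuss (uniform stress, lower
# bound). Moduli and volume fraction are typical textbook values.

def voigt(e_f, e_m, vf):
    """Upper bound: rule of mixtures, E = vf*Ef + (1-vf)*Em."""
    return vf * e_f + (1.0 - vf) * e_m

def reuss(e_f, e_m, vf):
    """Lower bound: inverse rule of mixtures (phases in series)."""
    return 1.0 / (vf / e_f + (1.0 - vf) / e_m)

E_FIBRE, E_RESIN, VF = 72.0, 3.5, 0.5  # GPa, GPa, fibre volume fraction
print(f"Voigt {voigt(E_FIBRE, E_RESIN, VF):.1f} GPa,"
      f" Reuss {reuss(E_FIBRE, E_RESIN, VF):.1f} GPa")
# the out-of-plane modulus of a laminate loads the phases roughly in
# series, so it sits much closer to the Reuss estimate
```

The wide gap between the two bounds is precisely why a full homogenization over the 3D woven geometry, as developed in the paper, is needed rather than a simple mixture rule.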
Procedia PDF Downloads 204