Search results for: lumped parameter method
13105 Seismic Assessment of Flat Slab and Conventional Slab System for Irregular Building Equipped with Shear Wall
Authors: Muhammad Aji Fajari, Ririt Aprilin Sumarsono
Abstract:
Particular instability of a building structure under lateral load (e.g., earthquake) arises due to irregularity in the vertical and horizontal directions, as stated in SNI 03-1726-2012. The conventional slab system contributes little to the stability of the structure, unless a special slab system such as a flat slab is taken into account. In this paper, the seismic performance of the flat slab system at Sequis Tower, located in South Jakarta, is assessed. The building consists of 6 basement floors where the flat slab system is applied. The flat slab system is the main focus of this paper and is compared with a conventional slab system under earthquake loading. Regarding the floor plan of the Sequis Tower basement, the re-entrant corner of this building is 43.21%, which exceeds the allowable re-entrant corner of 15% stated in ASCE 7-05. Based on that, horizontal irregularity is a further concern for the analysis, whereas vertical irregularity does not exist for this building. A flat slab system is a system in which the slabs are supported by drop panels with shear heads instead of beams. The major advantages of flat slab application are a reduced dead load of the structure, the removal of beams so that the clear height can be maximized, and added resistance to lateral load. Meanwhile, deflection at the middle strip and punching shear are problems to be considered in detail. Torsion usually appears when a structural member under flexure, such as a beam or column, has improper dimensional ratios; considering a flat slab as an alternative slab system will reduce collapse due to torsion. The common seismic load-resisting system applied in buildings is the shear wall. Installation of shear walls makes the structural system stronger and stiffer, resulting in reduced displacement under earthquake loading.
The eccentric location of the shear walls in this building resolves the instability due to horizontal irregularity so that the earthquake load can be absorbed. Linear dynamic analyses such as response spectrum and time history analysis are suitable because of the irregularity, so that the performance of the structure can be observed in detail. The response spectrum data for South Jakarta, with a PGA of 0.389g, is the basis for the earthquake load idealization included in several load combinations stated in SNI 03-1726-2012. The analysis yields basic seismic parameters such as the period, displacement, and base shear of the system; in addition, the internal forces of the critical members are presented. The predicted period of the structure under earthquake load is 0.45 s, but the period will differ as different slab systems are applied in the analysis. The flat slab system will probably perform better in terms of displacement than the conventional slab system due to its higher contribution of stiffness to the whole building system. In line with the displacement, the deflection of the slab will be smaller for the flat slab than for a conventional slab. Hence, the shear wall is expected to be more effective in strengthening the conventional slab system than the flat slab system.
Keywords: conventional slab, flat slab, horizontal irregularity, response spectrum, shear wall
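ASCE 7-05 classifies a plan as having re-entrant corner irregularity when a projection exceeds 15% of the plan dimension in that direction. A minimal sketch of this check (the function names and the normalized plan dimension below are ours, used only for illustration):

```python
def reentrant_corner_ratio(projection, plan_dimension):
    """Re-entrant corner projection as a percentage of the plan dimension."""
    return projection / plan_dimension * 100.0

def has_horizontal_irregularity(projection, plan_dimension, limit_pct=15.0):
    """ASCE 7-05 re-entrant corner (horizontal irregularity) check, 15% limit."""
    return reentrant_corner_ratio(projection, plan_dimension) > limit_pct

# The basement plan of the case study exceeds the limit (43.21% > 15%):
irregular = has_horizontal_irregularity(43.21, 100.0)  # True
```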
Procedia PDF Downloads 191
13104 Calculation of Electronic Structures of Nickel in Interaction with Hydrogen by Density Functional Theoretical (DFT) Method
Authors: Choukri Lekbir, Mira Mokhtari
Abstract:
Hydrogen-material interactions and mechanisms can be modeled at the nanoscale by quantum methods. In this work, the effect of hydrogen on the electronic properties of a cluster model of nickel has been studied using the density functional theory (DFT) method. Two types of clusters were optimized: nickel and the hydrogen-nickel system. In the case of nickel clusters (n = 1-6) without hydrogen, three types of electronic structures (neutral, cationic, and anionic) were optimized using three basis set calculations (B3LYP/LANL2DZ, PW91PW91/DGDZVP2, PBE/DGDZVP2). Comparison of the binding energies and bond lengths of the three nickel cluster structures (neutral, cationic, and anionic) obtained with these basis sets shows that the results for the neutral and anionic nickel clusters are in good agreement with experiment. For the neutral and anionic nickel clusters, comparing the energies and bond lengths obtained with the three basis sets shows that PBE/DGDZVP2 agrees best with the experimental results. For the anionic nickel clusters (n = 1-6) with hydrogen present, optimization of the hydrogen-nickel (anionic) structures using the PBE/DGDZVP2 basis set shows that the binding energies and bond lengths increase compared to those obtained for the anionic nickel clusters without hydrogen. This reveals the armor effect exerted by hydrogen on the electronic structure of nickel, which is due to the storage of hydrogen energy within the nickel cluster structures. The comparison between the bond lengths of the two cluster types shows an expansion of the cluster geometry due to the presence of hydrogen.
Keywords: binding energies, bond lengths, density functional theory, geometry optimization, hydrogen energy, nickel cluster
Procedia PDF Downloads 422
13103 Treatment of Interferograms Image of Perturbation Processes in Metallic Samples by Optical Method
Authors: Daira Radouane, Naim Boudmagh, Hamada Adel
Abstract:
The goal of this work is to use the shearing technique with an image-splitting device: a Wollaston prism. We want to characterize this prism in order to be able to use it later in a shearing analysis. A Wollaston prism is a prism made of a birefringent material, i.e., a material having two refractive indices. The prism is cleaved so that the directions associated with these indices lie in its entrance face; note that these directions are mutually perpendicular.
Keywords: non-destructive testing, aluminium, interferometry, image processing
Procedia PDF Downloads 331
13102 Visualization Tool for EEG Signal Segmentation
Authors: Sweeti, Anoop Kant Godiyal, Neha Singh, Sneh Anand, B. K. Panigrahi, Jayasree Santhosh
Abstract:
This work describes a tool for visualization and segmentation of electroencephalograph (EEG) signals based on frequency domain features. Changes in the frequency domain characteristics are correlated with changes in the mental state of the subject under study. The proposed algorithm represents changes in mental state using the powers of the different frequency bands, in the form of a segmented EEG signal. Many segmentation algorithms with applications in brain-computer interfaces, epilepsy, and cognition studies have been suggested in the literature and used for data classification, but the proposed method focuses mainly on better presentation of the signal, which makes it a useful visualization tool for clinicians. The algorithm performs basic filtering using band-pass and notch filters in the range of 0.1-45 Hz. Advanced filtering is then performed by principal component analysis and a wavelet-transform-based de-noising method. Frequency domain features are used for segmentation, based on the fact that the spectral power of the different frequency bands describes the mental state of the subject. Two sliding windows are then used for segmentation: one provides the time scale and the other assigns the segmentation rule. The segmented data are displayed second by second with different color codes, and the segment length can be selected as needed. The proposed algorithm has been tested on an EEG data set obtained from the University of California San Diego online data repository. The proposed tool gives a better visualization of the signal in the form of segmented epochs of the desired length, representing the power spectrum variation in the data. The algorithm takes the data points with respect to the sampling frequency for each time frame, so it can be extended to real-time visualization with the desired epoch length.
Keywords: de-noising, multi-channel data, PCA, power spectra, segmentation
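The filter-then-label pipeline described above can be sketched in a few lines. The 0.1-45 Hz pass band and one-second windows follow the abstract; the Butterworth filter order, Welch settings, and band definitions are our assumptions, not the authors' implementation:

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt, welch

# Conventional EEG bands (Hz); an assumed, commonly used partition.
BANDS = {"delta": (0.5, 4), "theta": (4, 8), "alpha": (8, 13), "beta": (13, 30)}

def bandpass(x, fs, lo=0.1, hi=45.0, order=4):
    """Basic 0.1-45 Hz band-pass filtering step from the abstract."""
    sos = butter(order, [lo / (fs / 2), hi / (fs / 2)], btype="band", output="sos")
    return sosfiltfilt(sos, x)

def segment_band_powers(x, fs, win_sec=1.0):
    """Label each window with its dominant band (a simple segmentation rule)."""
    n = int(win_sec * fs)
    labels = []
    for start in range(0, len(x) - n + 1, n):
        f, pxx = welch(x[start:start + n], fs=fs, nperseg=min(n, 256))
        powers = {band: pxx[(f >= lo) & (f < hi)].sum()
                  for band, (lo, hi) in BANDS.items()}
        labels.append(max(powers, key=powers.get))
    return labels
```

Each returned label corresponds to one epoch and could drive the per-second color coding the tool uses.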
Procedia PDF Downloads 397
13101 Structural Inequality and Precarious Workforce: The Role of Labor Laws in Destabilizing the Labor Force in Iran
Authors: Iman Shabanzadeh
Abstract:
Over the last three decades, the main demands of the Iranian workforce have focused on three areas: the right to a decent wage, the right to organize, and the right to job security. To investigate and analyze this situation, the present study focuses on the component of job security. The purpose of the study is to determine which mechanisms in Iran's labor law have led to the destabilization and undermining of workers' job security. The research method is descriptive-analytical. To collect information, library and documentary sources on laws related to labor rights in Iran, as well as semi-structured interviews with experts, were used. In the data analysis stage, the qualitative content analysis method was also used. Trend analysis of statistics on the labor force situation in Iran over the last three decades shows that the employment structure has faced an increase in the active population, but in the last decade a large part of this population has been active mainly in the service sector and in enterprises without formal contracts, so a smaller share of this employment has insurance coverage and a larger share is underemployed. In this regard, the results of this study show that four contexts constitute the main legal and executive mechanisms of labor instability in Iran: 1) the temporaryization of the labor force through different interpretations of the labor law, 2) labor adjustment in the public sector and the emergence of manpower contracting companies, 3) the cessation of labor law protection for workers in small workshops, and 4) numerous restrictions on the effective organization of workers.
The theoretical conclusion of this article is that the main root of the challenges facing the labor community and the destabilized workforce in Iran is the existence of structural inequalities in the field of labor security, traces of which can be seen in the legal provisions and executive regulations of this field.
Keywords: inequality, precariat, temporaryization, labor force, labor law
Procedia PDF Downloads 61
13100 Antihyperlipidemia Combination of Simvastatin and Herbal Drink (Conventional Drug Interaction Potential Study and Herbal As Prevention Adverse Effect on Combination Therapy Hyperlipidemia)
Authors: Gesti Prastiti, Maylina Adani, Yuyun darma A. N., M. Khilmi F., Yunita Wahyu Pratiwi
Abstract:
Combination therapy may allow interactions between two or more drugs that can cause adverse effects in patients. Simvastatin is an antihyperlipidemic drug; it can interact with drugs that act on cytochrome P450 CYP3A4, because such drugs can interfere with the action of simvastatin. Flavonoids found in plants can inhibit cytochrome P450 CYP3A4; if taken with simvastatin, they can increase simvastatin levels in the body and increase the potential for side effects of simvastatin such as myopathy and rhabdomyolysis. Green tea and mint leaves are herbal medicines that have an antihyperlipidemic effect. This study aims to determine the potential interaction of simvastatin with herbal drinks (green tea and mint leaves). The research method was an experimental post-test-only control design. The test subjects were divided into 5 groups: a normal group, a negative control group, a simvastatin group, a simvastatin plus green tea combination group, and a simvastatin plus mint leaves combination group. The study was conducted over 32 days, and total cholesterol levels were analyzed by the enzymatic colorimetric test method. The resulting mean total cholesterol values were: normal group, 65.92 mg/dL; negative control group, 69.86 mg/dL; simvastatin group, 58.96 mg/dL; green tea combination group, 58.96 mg/dL; and mint leaves combination group, 63.68 mg/dL. The conclusion is that combination therapy of simvastatin with herbal drinks has the potential for pharmacodynamic interactions with synergistic, antagonistic, and additive effects, so the combination therapies are no more effective than single-administration simvastatin therapy.
Keywords: hyperlipidemia, simvastatin, herbal drinks, green tea leaves, mint leaves, drug interactions
Procedia PDF Downloads 395
13099 Highly Efficient Ca-Doped CuS Counter Electrodes for Quantum Dot Sensitized Solar Cells
Authors: Mohammed Panthakkal Abdul Muthalif, Shanmugasundaram Kanagaraj, Jumi Park, Hangyu Park, Youngson Choe
Abstract:
The present study reports the incorporation of calcium ions into CuS counter electrodes (CEs) in order to improve the photovoltaic performance of quantum dot-sensitized solar cells (QDSSCs). The metal-ion-doped CuS thin film was prepared by the chemical bath deposition (CBD) method on an FTO substrate and used directly as a counter electrode for QDSSCs based on TiO₂/CdS/CdSe/ZnS photoanodes. For the Ca-doped CuS thin films, copper nitrate and thioacetamide were used as the cationic and anionic precursors, respectively. Calcium nitrate tetrahydrate was used as the doping material. The surface morphology of the Ca-doped CuS CEs indicates that the fragments are uniformly distributed and the structure is densely packed with high crystallinity. The changes observed in the diffraction patterns suggest that the Ca dopant introduces increased disorder into the CuS material structure. EDX analysis was employed for elemental identification, and the results confirmed the presence of Cu, S, and Ca on the FTO glass substrate. The photovoltaic current density-voltage characteristics of the Ca-doped CuS CEs show specific improvements in open-circuit voltage (Voc) and short-circuit current density (Jsc). Electrochemical impedance spectroscopy results show that Ca-doped CuS CEs have greater electrocatalytic activity and charge transport capacity than bare CuS. All the experimental results indicate that QDSSCs based on a 20% Ca-doped CuS CE exhibit a high power conversion efficiency (η) of 4.92%, a short-circuit current density of 15.47 mA cm⁻², an open-circuit photovoltage of 0.611 V, and a fill factor (FF) of 0.521 under one-sun illumination.
Keywords: Ca-doped CuS counter electrodes, surface morphology, chemical bath deposition method, electrocatalytic activity
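The reported photovoltaic figures are mutually consistent: with one-sun illumination taken as 100 mW cm⁻², the standard efficiency relation η = Jsc · Voc · FF / Pin reproduces the stated 4.92%. A quick check (the helper function is ours, not from the paper):

```python
def power_conversion_efficiency(jsc_ma_cm2, voc_v, ff, p_in_mw_cm2=100.0):
    """eta (%) = Jsc * Voc * FF / P_in; Jsc in mA/cm^2 gives power in mW/cm^2."""
    return jsc_ma_cm2 * voc_v * ff / p_in_mw_cm2 * 100.0

# Values reported for the 20% Ca-doped CuS counter electrode:
eta = power_conversion_efficiency(15.47, 0.611, 0.521)  # ~4.92 %
```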
Procedia PDF Downloads 164
13098 Some Issues with Extension of an HPC Cluster
Authors: Pil Seong Park
Abstract:
Homemade HPC clusters are widely used in many small labs because they are easy to build and cost-effective. Even though incremental growth is an advantage of clusters, it inevitably results in heterogeneous systems. Instead of adding new nodes to the cluster, we can extend clusters to include other Internet servers working independently on the same LAN, so that we can make use of their idle time, especially during the night. However, extension across a firewall raises some security problems with NFS. In this paper, we propose a method to solve such problems using SSH tunneling, and suggest a modified structure of the cluster that implements it.
Keywords: extension of HPC clusters, security, NFS, SSH tunneling
Procedia PDF Downloads 426
13097 Performance Comparison of Droop Control Methods for Parallel Inverters in Microgrid
Authors: Ahmed Ismail, Mustafa Baysal
Abstract:
Although the world's energy supply is mainly based on fossil fuels today, there is a need for alternative energy generation systems that are more economic and environmentally friendly, due to the continuously increasing demand for electric energy and limited power resources and networks. Distributed Energy Resources (DERs) such as fuel cells, wind, and solar power have recently become widespread as alternative generation. In order to solve several problems that might be encountered when integrating DERs into the power system, the microgrid concept has been proposed. A microgrid can operate in both grid-connected and island modes to benefit both the utility and customers. Most distributed energy resources connected in parallel in the LV grid, such as micro-turbines, wind plants, fuel cells, and PV cells, generate electrical power as direct current (DC), which is converted to alternating current (AC) by inverters; the inverters are therefore primary components in a microgrid. There are many control techniques for parallel inverters to manage active and reactive load sharing, some of which are based on the droop method. In the literature, studies usually focus on improving the transient performance of inverters. In this study, the performance of two different controllers based on the droop control method is compared for inverters operated in parallel without any communication feedback. To this end, a microgrid is designed in which the inverters are controlled by a conventional droop controller and by a modified droop controller, the latter obtained by adding a PID controller to the conventional droop control. The active and reactive power sharing performance and the voltage and frequency responses of these control methods are measured in several operational cases. The study cases were simulated in MATLAB/Simulink.
Keywords: active and reactive power sharing, distributed generation, droop control, microgrid
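The conventional P-f droop law underlying both controllers can be sketched in steady state; the droop gains, load, and nominal frequency below are illustrative values, not taken from the study:

```python
def droop_sharing(P_load, m1, m2, f0=50.0):
    """Steady-state active power sharing of two droop-controlled inverters.

    Each unit follows the P-f droop law f = f0 - m_i * P_i. In steady
    state both see the same frequency, so
        f0 - m1 * P1 = f0 - m2 * P2   and   P1 + P2 = P_load,
    which gives sharing inversely proportional to the droop gains.
    """
    P1 = P_load * m2 / (m1 + m2)
    P2 = P_load * m1 / (m1 + m2)
    f = f0 - m1 * P1
    return P1, P2, f
```

With equal gains the two inverters share the load equally; halving one gain doubles that unit's share, which is the basic tuning knob of communication-free droop control.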
Procedia PDF Downloads 592
13096 Organic Rankine Cycles (ORC) for Mobile Applications: Economic Feasibility in Different Transportation Sectors
Authors: Roberto Pili, Alessandro Romagnoli, Hartmut Spliethoff, Christoph Wieland
Abstract:
Internal combustion engines (ICEs) are today the most common energy system used to drive vehicles and transportation systems. Numerous studies state that 50-60% of the fuel energy content is lost to the ambient as sensible heat. The ORC offers a valuable way to recover such waste heat from the ICE, leading to fuel savings and reduced emissions. On the other hand, the additional weight of the ORC affects the net energy balance of the overall system, and the ORC occupies additional volume that competes with the vehicle's transportation capacity; consequently, a lower income from delivered freight or passenger tickets can result. The economic feasibility of integrating an ORC into an ICE and the resulting economic impact of weight and volume have not yet been analyzed in the open literature. This work intends to define such a benchmark for ORC applications in the transportation sector and investigates the current market situation. The applied methodology refers to the freight market, but it can be extended to passenger transportation as well. The economic parameter X is defined as the ratio between the variation of the freight revenues and the variation of the fuel costs when an ORC is installed as a bottoming cycle for an ICE, with respect to a reference case without an ORC. A good economic situation is obtained when the reduction in fuel costs is higher than the reduction in revenues for the delivered freight, i.e., X < 1. Through this constraint, a maximum allowable change of transport capacity for a given relative reduction in fuel consumption is determined. The specific fuel consumption is influenced by the ORC in two ways: firstly, because the transportable freight is reduced, and secondly, because the total weight of the vehicle is increased. Note that the electricity generated by the ORC influences the size of the ICE and the fuel consumption as well.
Taking the above dependencies into account, the limiting condition X = 1 results in a second-order equation for the relative change in transported cargo. The described procedure is carried out for a typical city bus, a truck of 24-40 t payload capacity, a middle-size freight train (1000 t), an inland water vessel (Va RoRo, 2500 t), and a handysize-like vessel (25000 t). The maximum allowable mass and volume of the ORC are calculated as a function of its efficiency in order to satisfy X < 1. Subsequently, these values are compared with the weight and volume of commercial ORC products. For ships of any size, the situation already appears highly favorable. A different result is obtained for road and rail vehicles: for trains, the mass and volume of common ORC products would have to be reduced by at least 50%, and for trucks and buses the situation looks even worse. The findings of the present study provide a theoretical and practical approach for the economic application of ORCs in the transportation sector. Future work will address the potential for volume and mass reduction of the ORC, together with an economic assessment of the ORC itself.
Keywords: ORC, transportation, volume, weight
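The feasibility criterion X < 1 can be illustrated with a back-of-the-envelope calculation. The freight rate, displaced payload, fuel saving, and fuel price below are invented round numbers for a hypothetical truck, not values from the study:

```python
def economic_parameter_X(delta_revenue, delta_fuel_cost):
    """X = (revenue lost to reduced cargo) / (fuel cost saved by the ORC).

    Both arguments are positive magnitudes per trip, in the same currency.
    X < 1 means the fuel saving outweighs the lost freight income.
    """
    return delta_revenue / delta_fuel_cost

# Illustrative trip for a hypothetical truck (all figures assumed):
freight_rate = 0.10       # currency units per kg of cargo per trip
cargo_displaced_kg = 300  # payload displaced by the ORC module
fuel_saved_l = 45         # fuel saved per trip thanks to the ORC
fuel_price = 1.5          # currency units per litre

X = economic_parameter_X(freight_rate * cargo_displaced_kg,
                         fuel_saved_l * fuel_price)
# X = 30 / 67.5, below 1: the ORC would pay off for this hypothetical truck.
```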
Procedia PDF Downloads 227
13095 Clustering-Based Threshold Model for Condition Rating of Concrete Bridge Decks
Authors: M. Alsharqawi, T. Zayed, S. Abu Dabous
Abstract:
To ensure the safety and serviceability of bridge infrastructure, accurate condition assessment and rating methods are needed to provide a basis for bridge Maintenance, Repair and Replacement (MRR) decisions. In North America, the common practice for assessing the condition of bridges is visual inspection, which is limited to detecting surface defects and external flaws; furthermore, the thresholds that define the severity of bridge deterioration are selected arbitrarily. The current research discusses the main deterioration mechanisms and defects identified during visual inspection and Non-Destructive Evaluation (NDE). NDE techniques are becoming popular for augmenting visual examination during inspection to detect subsurface defects. Quality inspection data and accurate condition assessment and rating are the basis for determining appropriate MRR decisions. Thus, in this paper, a novel method for bridge condition assessment based on Quality Function Deployment (QFD) theory is presented. The QFD model is designed to provide an integrated condition by evaluating both the surface and subsurface defects of concrete bridges. Moreover, an integrated condition rating index with four thresholds is developed based on the QFD condition assessment model, using the k-means clustering technique. Twenty case studies are analyzed by applying the QFD model and implementing the developed rating index. The results from the analyzed case studies show that the proposed threshold model produces robust MRR recommendations consistent with the decisions and recommendations made by bridge managers on these projects. The proposed method is expected to advance the state of the art of bridge condition assessment and rating.
Keywords: concrete bridge decks, condition assessment and rating, quality function deployment, k-means clustering technique
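The clustering-based threshold idea can be sketched on one-dimensional condition scores: run k-means with k = 4 and place the rating thresholds midway between adjacent cluster centers. The quantile initialization and the synthetic scores in the test are our assumptions, not the paper's data:

```python
import numpy as np

def kmeans_1d(scores, k=4, iters=100):
    """Plain k-means on 1-D scores; quantile init keeps the sketch deterministic."""
    scores = np.asarray(scores, dtype=float)
    centers = np.quantile(scores, np.linspace(0, 1, k))
    for _ in range(iters):
        labels = np.argmin(np.abs(scores[:, None] - centers[None, :]), axis=1)
        new = np.array([scores[labels == j].mean() if np.any(labels == j)
                        else centers[j] for j in range(k)])
        if np.allclose(new, centers):
            break
        centers = new
    return np.sort(centers)

def rating_thresholds(scores, k=4):
    """Condition-rating thresholds: midpoints between adjacent cluster centers."""
    c = kmeans_1d(scores, k)
    return (c[:-1] + c[1:]) / 2
```

Replacing arbitrary hand-picked thresholds with data-driven cluster boundaries is the essence of the proposed rating index.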
Procedia PDF Downloads 224
13094 A Study on the Impact of Covid-19 on Primary Healthcare Workers in Ekiti State, South-West Nigeria
Authors: Adeyinka Adeniran, Omowunmi Bakare, Esther Oluwole, Florence Chieme, Temitope Durojaiye, Modupe Akinyinka, Omobola Ojo, Babatunde Olujobi, Marcus Ilesanmi, Akintunde Ogunsakin
Abstract:
Introduction: Globally, COVID-19 has greatly impacted the human race physically, socially, mentally, and economically; however, healthcare workers seem to have borne the greatest impact. This study therefore sought to assess the impact of COVID-19 on primary healthcare workers in Ekiti State, South-west Nigeria. Methods: The study was a cross-sectional descriptive study using a quantitative data collection method among 716 primary healthcare workers in Ekiti State. Respondents were selected using an online convenience sampling method via their social media platforms. Data were collected, collated, and analyzed using SPSS version 25 software and presented as frequency tables, means, and standard deviations. Bivariate and multivariate analyses were conducted using a t-test, with the level of statistical significance set at p<0.05. Results: Less than half (47.1%) of respondents were in the 41-50 age group, with a mean age of 44.4±6.4 SD. A majority (89.4%) were female, and almost all (96.2%) were married. More than 90% had heard of coronavirus, and 85.8% had to spend more money on activities of daily living such as transportation (90.1%), groceries (80.6%), assisting relatives (95.8%), and sanitary measures (disinfection) at home (95.0%). COVID-19 had a large negative impact on about 89.7% of healthcare workers, with a mean score of 22±4.8. Conclusion: COVID-19 negatively impacted the daily living and professional duties of primary healthcare workers, affecting their psychological, physical, social, and economic well-being. Disease outbreaks are unlikely to disappear in the near future; hence, proactive global interventions and homegrown measures should be adopted to protect healthcare workers and save lives.
Keywords: Covid-19, health workforce, primary health care, health systems, depression
Procedia PDF Downloads 84
13093 Using the ISO 9705 Room Corner Test for Smoke Toxicity Quantification of Polyurethane
Authors: Gabrielle Peck, Ryan Hayes
Abstract:
Polyurethane (PU) foam is typically sold as acoustic foam and is often used as sound insulation in settings such as night clubs and bars. As a construction product, PU is tested by gluing it to the walls and ceiling of the ISO 9705 room corner test room. However, when heat is applied to PU foam, it melts and, being a thermoplastic, burns as a pool fire. The current test layout is unable to accurately measure mass loss and does not allow the material to burn as a pool fire without seeping out of the test room floor. The lack of mass loss measurement means that gas yields pertaining to smoke toxicity analysis cannot be calculated, which makes comparison with data from other materials or test methods difficult. Additionally, the heat release measurements are not representative of the actual values, because much of the material seeps through the floor (when a tray to catch the melted material is not used). This research aimed to modify the ISO 9705 test to provide the ability to measure mass loss, allowing better calculation of gas yields and a better understanding of decomposition. It also aimed to accurately measure smoke toxicity in both the doorway and the duct, and to enable dilution factors to be calculated. Finally, the study aimed to examine whether doubling the fuel loading would force under-ventilated flaming. The test layout was modified to be a combination of the SBI (single burning item) test set up inside the ISO 9705 test room. Polyurethane was tested in two different configurations with the aim of altering the ventilation condition: test one used one SBI test rig, aiming for well-ventilated flaming; test two used two SBI rigs facing each other inside the test room (doubling the fuel loading), aiming for under-ventilated flaming.
The two configurations successfully achieved both well-ventilated and under-ventilated flaming, as shown by the measured equivalence ratios (obtained using a phi meter designed and built for these experiments). The findings show that doubling the fuel loading successfully forces under-ventilated flaming; this method can therefore be used when trying to replicate post-flashover conditions in future ISO 9705 room corner tests. The radiative heat generated by the two SBI rigs facing each other produced a much higher overall heat release, resulting in a more severe fire. The method successfully allowed accurate measurement of the smoke toxicity produced by the PU foam in terms of simple gases, such as oxygen depletion, CO, and CO2. Overall, the proposed test modifications improve the ability to measure the smoke toxicity of materials under different fire conditions at large scale.
Keywords: flammability, ISO 9705, large-scale testing, polyurethane, smoke toxicity
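The quantity the phi meter reports can be written down directly. The stoichiometric air-to-fuel ratio used in the example below is an illustrative round number, not a measured value for PU foam:

```python
def equivalence_ratio(m_fuel, m_air, afr_stoich):
    """phi = (fuel/air)_actual / (fuel/air)_stoichiometric.

    phi < 1 indicates well-ventilated (fuel-lean) flaming;
    phi > 1 indicates under-ventilated (fuel-rich) flaming.
    afr_stoich is the stoichiometric air-to-fuel mass ratio of the fuel.
    """
    return (m_fuel / m_air) * afr_stoich
```

For example, with an assumed stoichiometric air-to-fuel ratio of 15, burning fuel at 2 kg per 15 kg of air gives phi = 2, i.e., clearly under-ventilated, which is the regime the doubled fuel loading was designed to reach.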
Procedia PDF Downloads 76
13092 Machine Learning Techniques in Seismic Risk Assessment of Structures
Authors: Farid Khosravikia, Patricia Clayton
Abstract:
The main objective of this work is to evaluate the advantages and disadvantages of various machine learning techniques in two key steps of seismic hazard and risk assessment of different types of structures. The first step is the development of ground-motion models, which are used for forecasting ground-motion intensity measures (IMs), given source characteristics, source-to-site distance, and local site conditions, for future events. IMs such as peak ground acceleration and velocity (PGA and PGV, respectively), as well as 5% damped elastic pseudo-spectral accelerations at different periods (PSA), are indicators of the strength of shaking at the ground surface. Typically, linear regression-based models, with pre-defined equations and coefficients, are used in ground motion prediction. However, due to the restrictions of linear regression methods, such models may not capture the more complex nonlinear behaviors that exist in the data. Thus, this study comparatively investigates the potential benefits of employing other machine learning techniques as the statistical method in ground motion prediction, namely Artificial Neural Networks, Random Forests, and Support Vector Machines. The results indicate that these algorithms satisfy some physically sound characteristics, such as magnitude scaling and distance dependency, without requiring pre-defined equations or coefficients. Moreover, it is shown that, when sufficient data are available, all the alternative algorithms tend to provide more accurate estimates than the conventional linear regression-based method, with Random Forest outperforming the other algorithms; however, the conventional method remains the better tool when only limited data are available. Second, it is investigated how machine learning techniques could be beneficial for developing probabilistic seismic demand models (PSDMs), which provide the relationship between the structural demand responses (e.g., component deformations, accelerations, internal forces, etc.) and the ground motion IMs. In the risk framework, such models are used to develop fragility curves estimating the probability of exceeding pre-defined damage limit states, and they therefore control the reliability of the predictions in the risk assessment. In this study, machine learning algorithms such as artificial neural networks, random forests, and support vector machines are adopted and trained on the demand parameters to derive PSDMs. It is observed that such models can provide more accurate predictions in a relatively shorter amount of time compared to conventional methods. Moreover, they can be used for sensitivity analysis of fragility curves with respect to many modeling parameters without necessarily requiring more intensive numerical response-history analysis.
Keywords: artificial neural network, machine learning, random forest, seismic risk analysis, seismic hazard analysis, support vector machine
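The linear-regression-versus-ensemble comparison can be sketched on synthetic data. The attenuation-style formula, coefficients, and noise level below are invented for illustration and are not the study's ground-motion model:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
n = 2000
mag = rng.uniform(4.0, 8.0, n)      # moment magnitude (assumed range)
dist = rng.uniform(1.0, 200.0, n)   # source-to-site distance, km (assumed)

# Invented attenuation-style relation with a nonlinear magnitude-distance term:
ln_pga = (1.2 * mag - 1.5 * np.log(dist)
          - 0.02 * (8.5 - mag) * np.log(dist)
          + rng.normal(0.0, 0.3, n))

X = np.column_stack([mag, dist])
tr, te = slice(0, 1500), slice(1500, n)

lin = LinearRegression().fit(X[tr], ln_pga[tr])
rf = RandomForestRegressor(n_estimators=200, random_state=0).fit(X[tr], ln_pga[tr])

mse_lin = float(np.mean((lin.predict(X[te]) - ln_pga[te]) ** 2))
mse_rf = float(np.mean((rf.predict(X[te]) - ln_pga[te]) ** 2))
# The forest can fit the log-distance curvature that a model linear
# in (mag, dist) cannot, mirroring the study's qualitative finding.
```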
Procedia PDF Downloads 106
13091 A Study on Factors Affecting (Building Information Modelling) BIM Implementation in European Renovation Projects
Authors: Fatemeh Daneshvartarigh
Abstract:
New technologies and applications have radically altered construction techniques in recent years. In order to anticipate how the building will act, perform, and appear, these technologies encompass a wide range of visualization, simulation, and analytic tools. These new technologies and applications have a considerable impact on completing construction projects in today's (architecture, engineering and construction)AEC industries. The rate of changes in BIM-related topics is different worldwide, and it depends on many factors, e.g., the national policies of each country. Therefore, there is a need for comprehensive research focused on a specific area with common characteristics. Therefore, one of the necessary measures to increase the use of this new approach is to examine the challenges and obstacles facing it. In this research, based on the Delphi method, at first, the background and related literature are reviewed. Then, using the knowledge obtained from the literature, a primary questionnaire is generated and filled by experts who are selected using snowball sampling. It covered the experts' attitudes towards implementing BIM in renovation projects and their view of the benefits and obstacles in this regard. By analyzing the primary questionnaire, the second group of experts is selected among the participants to be interviewed. The results are analyzed using Theme analysis. Six themes, including Management support, staff resistance, client willingness, Cost of software and implementation, the difficulty of implementation, and other reasons, are obtained. Then a final questionnaire is generated from the themes and filled by the same group of experts. The result is analyzed by the Fuzzy Delphi method, showing the exact ranking of the obtained themes. 
The final results show that management support, staff resistance, and client willingness are the most critical barriers to BIM usage in renovation projects.Keywords: building information modeling, BIM, BIM implementation, BIM barriers, BIM in renovation
Procedia PDF Downloads 167
13090 Feature Evaluation Based on Random Subspace and Multiple-K Ensemble
Authors: Jaehong Yu, Seoung Bum Kim
Abstract:
Clustering analysis can facilitate the extraction of intrinsic patterns in a dataset and reveal its natural groupings without requiring class information. For effective clustering analysis in high-dimensional datasets, unsupervised dimensionality reduction is an important task. Unsupervised dimensionality reduction can generally be achieved by feature extraction or feature selection. In many situations, feature selection methods are more appropriate than feature extraction methods because of their clear interpretation with respect to the original features. Unsupervised feature selection can be categorized into feature subset selection and feature ranking methods; we focused on unsupervised feature ranking methods, which evaluate the features based on their importance scores. Recently, several unsupervised feature ranking methods were developed based on ensemble approaches to achieve higher accuracy and stability. However, most of the ensemble-based feature ranking methods require the true number of clusters. Furthermore, these algorithms evaluate the feature importance depending on the ensemble clustering solution, and they produce undesirable evaluation results if the clustering solutions are inaccurate. To address these limitations, we proposed an ensemble-based feature ranking method with random subspace and multiple-k ensemble (FRRM). The proposed FRRM algorithm evaluates the importance of each feature with the random subspace ensemble, and all evaluation results are combined into ensemble importance scores. Moreover, FRRM does not require the determination of the true number of clusters in advance through the use of the multiple-k ensemble idea. Experiments on various benchmark datasets were conducted to examine the properties of the proposed FRRM algorithm and to compare its performance with that of existing feature ranking methods. 
The experimental results demonstrated that the proposed FRRM outperformed the competitors.Keywords: clustering analysis, multiple-k ensemble, random subspace-based feature evaluation, unsupervised feature ranking
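A minimal, illustrative sketch of the random-subspace / multiple-k idea (not the authors' FRRM implementation; the subspace size, the k choices, and the between-to-total variance score used here are simplifying assumptions) clusters random feature subsets under randomly chosen k and averages a per-feature score:

```python
import random
import statistics

def kmeans(points, k, iters=20, seed=0):
    """Plain Lloyd's algorithm on a list of equal-length tuples."""
    rng = random.Random(seed)
    centers = rng.sample(points, k)
    labels = [0] * len(points)
    for _ in range(iters):
        for i, p in enumerate(points):  # assign each point to its nearest center
            labels[i] = min(range(k),
                            key=lambda c: sum((a - b) ** 2 for a, b in zip(p, centers[c])))
        for c in range(k):              # move centers to member means
            members = [p for p, l in zip(points, labels) if l == c]
            if members:
                centers[c] = tuple(statistics.fmean(col) for col in zip(*members))
    return labels

def feature_scores(data, n_features, runs=20, k_choices=(2, 3), subspace=2, seed=0):
    """Average, over random subspaces and random k, of each feature's
    between-cluster to total variance ratio (an illustrative importance score)."""
    rng = random.Random(seed)
    scores = [0.0] * n_features
    counts = [0] * n_features
    for r in range(runs):
        feats = rng.sample(range(n_features), subspace)   # random subspace
        k = rng.choice(k_choices)                         # multiple-k ensemble
        sub = [tuple(row[f] for f in feats) for row in data]
        labels = kmeans(sub, k, seed=r)
        for f in feats:
            col = [row[f] for row in data]
            total = statistics.pvariance(col)
            if total == 0:
                continue
            overall = statistics.fmean(col)
            between = sum(
                len(g) / len(col) * (statistics.fmean(g) - overall) ** 2
                for g in ([v for v, l in zip(col, labels) if l == c]
                          for c in set(labels)))
            scores[f] += between / total
            counts[f] += 1
    return [s / c if c else 0.0 for s, c in zip(scores, counts)]
```

A feature that drives the clustering structure accumulates a high ratio across ensemble members; a noise feature does not, and no single true k is ever required.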
Procedia PDF Downloads 339
13089 Fluorescence-Based Biosensor for Dopamine Detection Using Quantum Dots
Authors: Sylwia Krawiec, Joanna Cabaj, Karol Malecha
Abstract:
Nowadays, progress in the field of analytical methods is of great interest for reliable biological research and medical diagnostics. Classical techniques of chemical analysis, despite many advantages, do not permit immediate results or automation of measurements. Chemical sensors have displaced the conventional analytical methods: sensors combine precision, sensitivity, fast response, and the possibility of continuous monitoring. A biosensor is a chemical sensor that, in addition to a transducer, also possesses a biologically active material, which is the basis for the detection of specific chemicals in the sample. Each biosensor device mainly consists of two elements: a sensitive element, where receptor-analyte recognition occurs, and a transducer element, which receives the signal and converts it into a measurable signal. According to these two elements, biosensors can be divided into two categories: by the recognition element (e.g., immunosensors) and by the transducer (e.g., optical sensors). Optical sensors operate by measuring quantitative changes in parameters characterizing light radiation. The most often analyzed parameters include amplitude (intensity), frequency, and polarization. In a direct method, changes in the optical properties of a compound that reacts with the biological material coated on the sensor are analyzed; in an indirect method, indicators are used whose optical properties change due to the transformation of the tested species. The labels most commonly used in this method are small molecules with an aromatic ring, like rhodamine; fluorescent proteins, for example, green fluorescent protein (GFP); or nanoparticles such as quantum dots (QDs). Quantum dots have, in comparison with organic dyes, much better photoluminescent properties, better bioavailability, and chemical inertness. They are semiconductor nanocrystals 2-10 nm in size. 
This very limited number of atoms and the ‘nano’ size give QDs their highly fluorescent properties. Rapid and sensitive detection of dopamine is extremely important in modern medicine. Dopamine is a very important neurotransmitter, which mainly occurs in the brain and central nervous system of mammals. Dopamine is responsible for transmitting information related to movement through the nervous system and plays an important role in processes of learning and memory. Detection of dopamine is significant for diseases associated with the central nervous system, such as Parkinson's disease or schizophrenia. The developed optical biosensor for dopamine detection uses graphene quantum dots (GQDs). In such a sensor, dopamine molecules coat the GQD surface; as a result, fluorescence quenching occurs due to fluorescence resonance energy transfer (FRET). Changes in fluorescence correspond to specific concentrations of the neurotransmitter in the tested sample, so it is possible to accurately determine the concentration of dopamine in the sample.Keywords: biosensor, dopamine, fluorescence, quantum dots
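For illustration only: if the quenching followed the classical Stern-Volmer model F0/F = 1 + Ksv·[Q] (an assumption, since the abstract does not state the calibration model, and Ksv would have to be fitted to the GQD-dopamine system), the dopamine concentration could be recovered from the measured fluorescence as:

```python
def dopamine_concentration(f0, f, ksv):
    """Invert the Stern-Volmer quenching relation F0/F = 1 + Ksv*[Q].

    f0  -- fluorescence intensity without quencher
    f   -- fluorescence intensity with dopamine present
    ksv -- Stern-Volmer constant (calibration-dependent)
    Returns the quencher (dopamine) concentration [Q] in the units of 1/ksv.
    """
    return (f0 / f - 1.0) / ksv
```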
Procedia PDF Downloads 364
13088 An Unsupervised Domain-Knowledge Discovery Framework for Fake News Detection
Authors: Yulan Wu
Abstract:
With the rapid development of social media, the issue of fake news has gained considerable prominence, drawing the attention of both the public and governments. The widespread dissemination of false information poses a tangible threat across multiple domains of society, including politics, economy, and health. However, much research has concentrated on supervised models trained within specific domains, and their effectiveness diminishes when applied to identify fake news across multiple domains. To solve this problem, some approaches based on domain labels have been proposed. By assigning news to its specific domain in advance, fake news judgments in the corresponding field may become more accurate. However, these approaches disregard the fact that news records can pertain to multiple domains, resulting in a significant loss of valuable information. In addition, the datasets used for training must all be domain-labeled, which creates unnecessary complexity. To solve these problems, an unsupervised domain-knowledge discovery framework for fake news detection is proposed. Firstly, to effectively retain the multi-domain knowledge of the text, a low-dimensional domain embedding vector is generated for each news text. Subsequently, a feature extraction module utilizing the unsupervisedly discovered domain embeddings is used to extract the comprehensive features of the news. Finally, a classifier is employed to determine the authenticity of the news. To verify the proposed framework, a test is conducted on existing widely used datasets, and the experimental results demonstrate that this method is able to improve detection performance for fake news across multiple domains. Moreover, even in datasets that lack domain labels, this method can still effectively transfer domain knowledge, which can reduce the time consumed by labeling without sacrificing detection accuracy.Keywords: fake news, deep learning, natural language processing, multiple domains
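The soft multi-domain embedding idea can be illustrated with a toy sketch (the bag-of-words representation, cosine similarity, and hand-picked domain centroids below are illustrative assumptions; the paper's embeddings are discovered unsupervisedly): each text receives a membership weight for every domain rather than a single hard label.

```python
import math
from collections import Counter

def bow(text):
    """Bag-of-words term counts for a text."""
    return Counter(text.lower().split())

def cosine(a, b):
    """Cosine similarity between two sparse count vectors."""
    dot = sum(a[w] * b.get(w, 0) for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def domain_embedding(text, domain_centroids):
    """Soft multi-domain membership: cosine similarity of the text's
    bag-of-words to each (unsupervisedly obtained) domain centroid,
    normalised to sum to 1."""
    sims = [cosine(bow(text), c) for c in domain_centroids]
    total = sum(sims)
    return [s / total if total else 1.0 / len(sims) for s in sims]
```

A downstream classifier can then consume this vector alongside the text features, so a story touching both politics and health keeps weight in both domains.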
Procedia PDF Downloads 97
13087 Stochastic Nuisance Flood Risk for Coastal Areas
Authors: Eva L. Suarez, Daniel E. Meeroff, Yan Yong
Abstract:
The U.S. Federal Emergency Management Agency (FEMA) developed flood maps based on experts’ experience and estimates of the probability of flooding. Current flood-risk models evaluate flood risk with regional and subjective measures, without accounting for torrential rain and nuisance flooding at the neighborhood level. Nuisance flooding occurs in small areas in the community, where a few streets or blocks are routinely impacted. This type of flooding event occurs when a torrential rainstorm, combined with high tide and sea level rise, temporarily exceeds a given threshold. In South Florida, this threshold is 1.7 ft above Mean Higher High Water (MHHW). The National Weather Service defines torrential rain as rain falling at a rate greater than 0.3 inches per hour or three inches in a single day. Data from the Florida Climate Center, 1970 to 2020, show 371 events with more than 3 inches of rain in a day over 612 months. The purpose of this research is to develop a data-driven method to determine comprehensive analytical damage-avoidance criteria that account for nuisance flood events at the single-family home level. The method developed uses the Failure Mode and Effect Analysis (FMEA) method from the American Society for Quality (ASQ) to estimate the Damage Avoidance (DA) preparation for a 1-day 100-year storm. The Consequence of Nuisance Flooding (CoNF) is estimated from community mitigation efforts to prevent nuisance flooding damage. The Probability of Nuisance Flooding (PoNF) is derived from the frequency and duration of torrential rainfall causing delays and community disruptions to daily transportation, human illnesses, and property damage. Urbanization and population changes are related to the U.S. Census Bureau's annual population estimates. 
Data collected by the United States Department of Agriculture (USDA) Natural Resources Conservation Service’s National Resources Inventory (NRI) and locally by the South Florida Water Management District (SFWMD) track the development and land use/land cover changes with time. The intent is to include temporal trends in population density growth and the impact on land development. Results from this investigation provide the risk of nuisance flooding as a function of CoNF and PoNF for coastal areas of South Florida. The data-based criterion provides awareness to local municipalities on their flood-risk assessment and gives insight into flood management actions and watershed development.Keywords: flood risk, nuisance flooding, urban flooding, FMEA
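The risk structure described above, a consequence rating combined with an occurrence rating derived from event frequency, can be sketched as follows (the 1-10 rating scale and the rounding rule are generic FMEA conventions used for illustration, not the study's calibrated values):

```python
def ponf_from_history(n_events, n_months):
    """Empirical monthly probability of a torrential-rain event,
    e.g. 371 events over 612 months from the Florida Climate Center data."""
    return n_events / n_months

def risk_score(conf_rating, ponf, scale=10):
    """FMEA-style score: consequence rating (1-10) times an occurrence
    rating obtained by mapping the empirical probability onto a 1-10 scale."""
    occurrence = max(1, round(ponf * scale))  # illustrative mapping, not ASQ-prescribed
    return conf_rating * occurrence
```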
Procedia PDF Downloads 100
13086 Characterization of Thin Woven Composites Used in Printed Circuit Boards by Combining Numerical and Experimental Approaches
Authors: Gautier Girard, Marion Martiny, Sebastien Mercier, Mohamad Jrad, Mohamed-Slim Bahi, Laurent Bodin, Francois Lechleiter, David Nevo, Sophie Dareys
Abstract:
Reliability of electronic devices has always been of the highest interest for aero-MIL and space applications. In any electronic device, the Printed Circuit Board (PCB), providing interconnection between components, is key to reliability. During the last decades, PCB technologies evolved to sustain and/or fulfill increased original equipment manufacturer requirements and specifications: higher densities and better performance, faster time to market and longer lifetime, newer materials and mixed buildups. From the very beginning of the PCB industry up to recently, qualification, experimentation, and trial and error were the most popular methods to assess system (PCB) reliability. Nowadays, OEMs, PCB manufacturers, and scientists are working together in a close relationship in order to develop predictive models for PCB reliability and lifetime. To achieve that goal, it is fundamental to characterize precisely the base materials (laminates, electrolytic copper, …), in order to understand failure mechanisms and simulate PCB aging under environmental constraints, for example by means of the finite element method. The laminates are woven composites and thus have an orthotropic behaviour. The in-plane properties can be measured by combining classical uniaxial testing and digital image correlation. Nevertheless, the out-of-plane properties cannot be evaluated directly due to the thickness of the laminate (a few hundred microns). It has to be noted that knowledge of the out-of-plane properties is fundamental to investigate the lifetime of high-density printed circuit boards. A homogenization method combining analytical and numerical approaches has been developed in order to obtain the complete elastic orthotropic behaviour of a woven composite from its precise 3D internal structure and its experimentally measured in-plane elastic properties. Since the mechanical properties of the resin surrounding the fibres are unknown, an inverse method is proposed to estimate them. 
The methodology has been applied to one laminate used in hyperfrequency space applications in order to obtain its elastic orthotropic behaviour at different temperatures in the range [-55°C; +125°C]. Next, numerical simulations of a plated through hole in a double-sided PCB are performed. Results show the major influence of the out-of-plane properties, and of their temperature dependency, on the lifetime of a printed circuit board. Acknowledgements: The support of the French ANR agency through the Labcom program ANR-14-LAB7-0003-01 and the support of CNES, Thales Alenia Space, and Cimulec are acknowledged.Keywords: homogenization, orthotropic behaviour, printed circuit board, woven composites
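As a deliberately simplified illustration of the inverse step (the study uses full finite element homogenization of the 3D weave; the one-dimensional Voigt rule of mixtures below is only a back-of-the-envelope stand-in), an unknown resin modulus can be recovered from a measured composite modulus:

```python
def resin_modulus_from_laminate(e_laminate, e_fiber, v_fiber):
    """Invert the Voigt rule of mixtures E_c = Vf*Ef + (1 - Vf)*Em
    to estimate the unknown resin (matrix) modulus Em.

    e_laminate -- measured composite modulus (e.g. GPa)
    e_fiber    -- fiber modulus, same units
    v_fiber    -- fiber volume fraction, 0 < v_fiber < 1
    """
    return (e_laminate - v_fiber * e_fiber) / (1.0 - v_fiber)
```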
Procedia PDF Downloads 204
13085 Understanding the Utilization of Luffa Cylindrica in the Adsorption of Heavy Metals to Clean Up Wastewater
Authors: Akanimo Emene, Robert Edyvean
Abstract:
In developing countries, a low-cost method of wastewater treatment is highly recommended. Adsorption is an efficient and economically viable treatment process for wastewater. The utilisation of this process is based on understanding the relationship between the growth environment and the metal capacity of the biomaterial. Luffa cylindrica (LC), a plant material, was used as an adsorbent in an adsorption system for heavy metals. The chemically modified LC was used to adsorb heavy metal ions, lead and cadmium, from aqueous solution under varying experimental conditions. The experimental factors of adsorption time, initial metal ion concentration, ionic strength, and solution pH were studied. The chemical nature and surface area of the tissues adsorbing heavy metals in LC biosorption systems were characterised using electron microscopy and infrared spectroscopy, which showed an increase in surface area and improved adhesion capacity after chemical treatment. Metal speciation showed the binary interaction between the ions and the LC surface as the pH increases. Maximum adsorption occurred between pH 5 and pH 6. The ionic strength of the metal ion solution affects the adsorption capacity through the surface charge and the availability of adsorption sites on the LC. The nature of the metal-surface complexes formed was analysed by fitting the experimental data with kinetic and isotherm models; the pseudo-second-order kinetic model and the two-site Langmuir isotherm model showed the best fit. Through understanding of this process, there will be an opportunity to provide an alternative method for water purification, offering an option when expensive water treatment technologies are not viable in developing countries.Keywords: adsorption, luffa cylindrica, metal-surface complexes, pH
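The two best-fit models named above have simple closed forms, sketched here (a single-site Langmuir form is shown for simplicity rather than the two-site variant, and the parameter values in the test are illustrative, not this study's fitted constants):

```python
def pseudo_second_order_qt(t, qe, k2):
    """Pseudo-second-order kinetics: adsorbed amount at time t,
    qt = k2*qe^2*t / (1 + k2*qe*t), approaching the equilibrium uptake qe."""
    return k2 * qe ** 2 * t / (1.0 + k2 * qe * t)

def langmuir_qe(ce, qmax, kl):
    """Langmuir isotherm: equilibrium uptake qe = qmax*KL*Ce / (1 + KL*Ce)
    for equilibrium concentration Ce, capacity qmax, and affinity KL."""
    return qmax * kl * ce / (1.0 + kl * ce)
```

In practice qe, k2, qmax, and KL are obtained by fitting these expressions (often in linearised form) to the measured uptake data.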
Procedia PDF Downloads 89
13084 Creating Database and Building 3D Geological Models: A Case Study on Bac Ai Pumped Storage Hydropower Project
Authors: Nguyen Chi Quang, Nguyen Duong Tri Nguyen
Abstract:
This article is a first step toward researching and outlining the structure of the geotechnical database for the geological survey of a power project; in this report, the database has been created for the Bac Ai pumped storage hydropower project. For the purpose of providing a method of organizing and storing geological and topographic survey data and experimental results in a spatial database, the RockWorks software is used to bring optimal efficiency to the process of exploiting, using, and analyzing data in service of the design work in power engineering consulting. Three-dimensional (3D) geotechnical models are created from the survey data, covering stratigraphy, lithology, porosity, etc. The results of the 3D geotechnical model for the Bac Ai pumped storage hydropower project include six closely stacked stratigraphic formations built with the Horizons method, whereas modeling of engineering geological parameters is performed by geostatistical methods. The accuracy and reliability of the models are assessed through error statistics, empirical evaluation, and expert methods. The three-dimensional model analysis allows better visualization of volumetric calculations, excavation and backfilling of the lake area, tunneling of power pipelines, and calculation of on-site construction material reserves. In general, the application of engineering geological modeling makes the design work more intuitive and comprehensive, helping construction designers better identify and offer the most optimal design solutions for the project. The database ensures continuous updating and synchronization and enables 3D modeling of geological and topographic data that integrates with the design data according to building information modeling. This is also the base platform for BIM & GIS integration.Keywords: database, engineering geology, 3D Model, RockWorks, Bac Ai pumped storage hydropower project
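One of the volumetric calculations mentioned above can be illustrated with a minimal grid-based sketch (software such as RockWorks performs this internally between modeled surfaces; the flattened cell list and the clamping of negative thickness here are simplifying assumptions):

```python
def excavation_volume(top_surface, bottom_surface, cell_area):
    """Grid-based cut volume between two gridded surfaces:
    sum of positive (top - bottom) thickness times the cell footprint area.

    top_surface, bottom_surface -- elevations per grid cell (same order, same units)
    cell_area                   -- horizontal area of one grid cell
    """
    return sum(max(t - b, 0.0) * cell_area
               for t, b in zip(top_surface, bottom_surface))
```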
Procedia PDF Downloads 168
13083 Drone On-Time Obstacle Avoidance for Static and Dynamic Obstacles
Authors: Herath M. P. C. Jayaweera, Samer Hanoun
Abstract:
Path planning for on-time obstacle avoidance is an essential and challenging task that enables drones to achieve safe operation in any application domain. The level of challenge for the obstacle avoidance technique increases significantly when the drone is following a ground mobile entity (GME). This is mainly due to the change in direction and magnitude of the GME's velocity in dynamic and unstructured environments. Force field techniques are the most widely used obstacle avoidance methods due to their simplicity, ease of use, and potential to be adopted for three-dimensional dynamic environments. However, the existing force field obstacle avoidance techniques suffer many drawbacks, including their tendency to generate longer routes when the obstacles are sideways of the drone's route, poor ability to find the shortest flyable path, propensity to fall into local minima, producing a non-smooth path, and high failure rate in the presence of symmetrical obstacles. To overcome these shortcomings, this paper proposes an on-time three-dimensional obstacle avoidance method for drones to effectively and efficiently avoid dynamic and static obstacles in unknown environments while pursuing a GME. This on-time obstacle avoidance technique generates velocity waypoints for its obstacle-free and efficient path based on the shape of the encountered obstacles. This method can be utilized on most types of drones that have basic distance measurement sensors and autopilot-supported flight controllers. The proposed obstacle avoidance technique is validated and evaluated against existing force field methods for different simulation scenarios in Gazebo and ROS-supported PX4-SITL. The simulation results show that the proposed obstacle avoidance technique outperforms the existing force field techniques and is better suited for real-world applications.Keywords: drones, force field methods, obstacle avoidance, path planning
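The classical artificial potential field scheme that such methods are evaluated against can be sketched as follows (shown in 2D for brevity; the gains k_att, k_rep and the influence radius d0 are illustrative, and this is the baseline force field method, not the authors' improved technique):

```python
import math

def force_field_velocity(pos, goal, obstacles, k_att=1.0, k_rep=100.0, d0=5.0):
    """Classic artificial potential field: attractive pull toward the goal
    plus a repulsive push from each obstacle closer than the influence
    radius d0. Returns a commanded (vx, vy) velocity."""
    vx = k_att * (goal[0] - pos[0])
    vy = k_att * (goal[1] - pos[1])
    for ox, oy in obstacles:
        dx, dy = pos[0] - ox, pos[1] - oy
        d = math.hypot(dx, dy)
        if 0 < d < d0:
            # Khatib-style repulsion: grows as the obstacle gets closer
            mag = k_rep * (1.0 / d - 1.0 / d0) / d ** 2
            vx += mag * dx / d
            vy += mag * dy / d
    return vx, vy
```

The local-minima and symmetric-obstacle failure modes criticised above follow directly from this formulation: attraction and repulsion can cancel exactly, leaving a zero commanded velocity.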
Procedia PDF Downloads 93
13082 Implementation Research on the Singapore Physical Activity and Nutrition Program: A Mixed-Method Evaluation
Authors: Elaine Wong
Abstract:
Introduction: The Singapore Physical Activity and Nutrition Study (SPANS) aimed to assess the effects of a community-based intervention on physical activity (PA) and nutrition behaviours, as well as chronic disease risk factors, for Singaporean women aged above 50 years. This article examines the participation, dose, fidelity, reach, satisfaction, and reasons for completion and non-completion of the SPANS. Methods: The SPANS program integrated constructs of Social Cognitive Theory (SCT) and is composed of PA activities; nutrition workshops; dietary counselling coupled with motivational interviewing (MI) through phone calls; and text messages promoting healthy behaviours. Printed educational resources and health incentives were provided to participants. Data were collected via a mixed-method design from a sample of 295 intervention participants. Quantitative data were collected using a self-completed survey (n = 209); qualitative data were collected via research assistants’ notes, post-feedback sessions, and exit interviews with program completers (n = 13) and non-completers (n = 12). Results: The majority of participants reported high ‘satisfactory to excellent’ ratings for the program pace, suitability of interest, and overall program (96.2-99.5%). Likewise, similar ratings were achieved for clarity of presentation; presentation skills, approachability, and knowledge; and overall rating of trainers and program ambassadors (98.6-100%). Phone dietary counselling had the highest level of participation (72%) at an attendance rate of 75% or less, followed by nutrition workshops (65%) and PA classes (60%). The attrition rate of the program was 19%; major reasons for withdrawal were personal commitments, relocation, and health issues. All participants found the program resources to be colourful, informative, and practical for their own reference. 
Reasons for program completion and maintenance were desired health benefits, social bonding opportunities, and learning more about PA and nutrition. Conclusions: Process evaluation serves as an appropriate tool to identify recruitment challenges and effective intervention strategies and to ensure program fidelity. Program participants were satisfied with the educational resources, program components, and delivery strategies implemented by the trainers and program ambassadors. The combination of printed materials and intervention components, when guided by the SCT and MI, was supportive in encouraging and reinforcing lifestyle behavioural changes. Mixed-method evaluation approaches are integral for pinpointing barriers, motivators, improvements, and effective program components in optimising the health status of Singaporean women.Keywords: process evaluation, Singapore, older adults, lifestyle changes, program challenges
Procedia PDF Downloads 122
13081 Measuring Biobased Content of Building Materials Using Carbon-14 Testing
Authors: Haley Gershon
Abstract:
The transition from using fossil fuel-based building materials to formulating eco-friendly and biobased building materials plays a key role in sustainable building. The growing demand on a global level for biobased materials in the building and construction industries heightens the importance of carbon-14 testing, an analytical method used to determine the percentage of biobased content in a material’s ingredients. This presentation will focus on the use of carbon-14 analysis within the building materials sector. Carbon-14, also known as radiocarbon, is a weakly radioactive isotope present in all living organisms. Any fossil material older than 50,000 years will not contain any carbon-14 content. The radiocarbon method is thus used to determine the amount of carbon-14 present in a given sample. Carbon-14 testing is performed according to ASTM D6866, a standard test method developed specifically for biobased content determination of material in solid, liquid, or gaseous form, which uses radiocarbon dating. Samples are combusted and converted into a solid graphite form and then pressed onto a metal disc and mounted onto a wheel of an accelerator mass spectrometer (AMS) machine for the analysis. The AMS instrument is used to count the amount of carbon-14 present. By submitting samples for carbon-14 analysis, manufacturers of building materials can confirm the biobased content of the ingredients used. Biobased testing through carbon-14 analysis reports results as percent biobased content, indicating the percentage of ingredients coming from biomass-sourced carbon versus fossil carbon. The analysis is performed according to standardized methods such as ASTM D6866, ISO 16620, and EN 16640. Products 100% sourced from plants, animals, or microbiological material are therefore 100% biobased, while products sourced only from fossil fuel material are 0% biobased. 
Any result in between 0% and 100% biobased indicates that there is a mixture of both biomass-derived and fossil fuel-derived sources. Furthermore, biobased testing for building materials allows manufacturers to submit eligible material for certification and eco-label programs such as the United States Department of Agriculture (USDA) BioPreferred Program. This program includes a voluntary labeling initiative for biobased products, in which companies may apply to receive and display the USDA Certified Biobased Product label, stating third-party verification and displaying a product’s percentage of biobased content. The USDA program includes a specific category for Building Materials. In order to qualify for the biobased certification under this product category, examples of product criteria that must be met include minimum 62% biobased content for wall coverings, minimum 25% biobased content for lumber, and a minimum 91% biobased content for floor coverings (non-carpet). As a result, consumers can easily identify plant-based products in the marketplace.Keywords: carbon-14 testing, biobased, biobased content, radiocarbon dating, accelerator mass spectrometry, AMS, materials
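The core percent-biobased calculation has a simple form: the sample's percent modern carbon (pMC) measured by AMS is referenced to an atmospheric standard (the reference value and the clamping below are illustrative simplifications of the ASTM D6866 atmospheric correction, which uses a standard-specified reference factor):

```python
def percent_biobased(sample_pmc, atmospheric_pmc=100.0):
    """Biobased content (%) as the sample's percent modern carbon (pMC)
    referenced to an assumed atmospheric pMC, clamped to 100%.

    A fully fossil-derived product measures ~0 pMC (no carbon-14 left);
    a fully biomass-derived product measures ~atmospheric pMC.
    """
    return min(100.0 * sample_pmc / atmospheric_pmc, 100.0)
```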
Procedia PDF Downloads 158
13080 Exploring Women's Needs Referring to Health Care Centers for Doing Pap Smear Test
Authors: Arezoo Fallahi, Fateme Aslibigi, Parvaneh Taymoori, Babak Nematshahrbabaki
Abstract:
Background and Aims: Cancer of the cervix, one of the leading causes of cancer-related death, is the second most common cancer in women worldwide. It develops over time, but it is one of the most preventable types of cancer, and a proper screening program is available for its prevention. Although the Pap smear test is vital for preventing and controlling the disease, women do not undergo it regularly. Therefore, this study aimed to explore the needs of women referring to health care centers for the Pap smear test. Material and methods: In this study, an inductive qualitative method with a content analysis approach was used. The survey was conducted in Varamin city (located in Tehran Province, Iran) in 2014. Through purposive sampling, the viewpoints of 15 women referring to health care centers for the Pap smear test were surveyed. Inclusion criteria were: married women aged 20-50 years, with experience of the Pap smear test and willingness to participate in the study. Recorded semi-structured interviews were transcribed and analyzed through the content analysis method. To ensure trustworthiness and rigor of the data, the criteria of credibility, dependability, confirmability, and transferability were used. Results: During the data analysis, four main categories were developed: “role of health care team”, “role of organizations”, “social support”, and “policies and administration system”. The participants emphasized establishing motivational rules and coordination among organizations to support behaviors related to women's health. Conclusion: The findings of the study showed that undergoing the Pap smear test is associated with appropriate and supportive interactions with health professionals, family support, encouraging legislation and policies, and coordination and awareness among organizations. Therefore, policy designers and health system stakeholders should pay more attention to engaging other organizations in women's health.Keywords: qualitative approach, pap smear test, women, health care centers
Procedia PDF Downloads 496
13079 Quantitative Wide-Field Swept-Source Optical Coherence Tomography Angiography and Visual Outcomes in Retinal Artery Occlusion
Authors: Yifan Lu, Ying Cui, Ying Zhu, Edward S. Lu, Rebecca Zeng, Rohan Bajaj, Raviv Katz, Rongrong Le, Jay C. Wang, John B. Miller
Abstract:
Purpose: Retinal artery occlusion (RAO) is an ophthalmic emergency that can lead to poor visual outcome and is associated with an increased risk of cerebral stroke and cardiovascular events. Fluorescein angiography (FA) is the traditional diagnostic tool for RAO; however, wide-field swept-source optical coherence tomography angiography (WF SS-OCTA), as a nascent imaging technology, is able to provide quick and non-invasive angiographic information with a wide field of view. In this study, we looked for associations between OCT-A vascular metrics and visual acuity in patients with prior diagnosis of RAO. Methods: Patients with diagnoses of central retinal artery occlusion (CRAO) or branch retinal artery occlusion (BRAO) were included. A 6mm x 6mm Angio and a 15mm x 15mm AngioPlex Montage OCT-A image were obtained for both eyes in each patient using the Zeiss Plex Elite 9000 WF SS-OCTA device. Each 6mm x 6mm image was divided into nine Early Treatment Diabetic Retinopathy Study (ETDRS) subfields. The average measurement of the central foveal subfield, inner ring, and outer ring was calculated for each parameter. Non-perfusion area (NPA) was manually measured using 15mm x 15mm Montage images. A linear regression model was utilized to identify correlations between the imaging metrics and visual acuity. A P-value less than 0.05 was considered statistically significant. Results: Twenty-five subjects were included in the study. For RAO eyes, there was a statistically significant negative correlation between vision and retinal thickness as well as superficial capillary plexus vessel density (SCP VD). A negative correlation was found between vision and deep capillary plexus vessel density (DCP VD) without statistical significance. There was a positive correlation between vision and choroidal thickness as well as choroidal volume without statistical significance. 
No statistically significant correlation was found between vision and the above metrics in contralateral eyes. For NPA measurements, no significant correlation was found between vision and NPA. Conclusions: To the best of our knowledge, this is the first study to investigate the utility of WF SS-OCTA in RAO and to demonstrate correlations between various retinal vascular imaging metrics and visual outcomes. Further investigations should explore the associations between these imaging findings and cardiovascular risk, as RAO patients are at elevated risk for symptomatic stroke. The results of this study provide a basis for understanding the structural changes involved in visual outcomes in RAO. Furthermore, they may help guide the management of RAO and the prevention of cerebral stroke and cardiovascular events in patients with RAO.Keywords: OCTA, swept-source OCT, retinal artery occlusion, Zeiss Plex Elite
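The reported vision-versus-imaging-metric associations rest on simple ordinary least squares fits, which can be sketched in a few lines (the data in the test are illustrative toy values, not the study's measurements):

```python
def linear_fit(x, y):
    """Ordinary least squares for y = a + b*x.
    Returns (intercept a, slope b); e.g. x = vessel density, y = visual acuity."""
    n = len(x)
    mx = sum(x) / n
    my = sum(y) / n
    b = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
         / sum((xi - mx) ** 2 for xi in x))
    return my - b * mx, b
```

The sign of the fitted slope b is what the abstract summarises as a positive or negative correlation between vision and each metric.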
Procedia PDF Downloads 139
13078 Mapping Iron Content in the Brain with Magnetic Resonance Imaging and Machine Learning
Authors: Gabrielle Robertson, Matthew Downs, Joseph Dagher
Abstract:
Iron deposition in the brain has been linked with a host of neurological disorders such as Alzheimer’s, Parkinson’s, and multiple sclerosis. While some treatment options exist, there are no objective measurement tools that allow for the monitoring of iron levels in the brain in vivo. An emerging Magnetic Resonance Imaging (MRI) method has been recently proposed to deduce iron concentration through quantitative measurement of magnetic susceptibility. This is a multi-step process that involves repeated modeling of physical processes via approximate numerical solutions. For example, the last two steps of this Quantitative Susceptibility Mapping (QSM) method involve I) mapping magnetic field into magnetic susceptibility and II) mapping magnetic susceptibility into iron concentration. Process I involves solving an ill-posed inverse problem by using regularization via injection of prior belief. The end result from Process II highly depends on the model used to describe the molecular content of each voxel (type of iron, water fraction, etc.). Due to these factors, the accuracy and repeatability of QSM have been an active area of research in the MRI and medical imaging community. This work aims to estimate iron concentration in the brain via a single step. A synthetic numerical model of the human head was created by automatically and manually segmenting the human head on a high-resolution grid (640x640x640 voxels, 0.4 mm voxel size), yielding detailed structures such as microvasculature and subcortical regions as well as bone, soft tissue, cerebrospinal fluid, sinuses, arteries, and eyes. Each segmented region was then assigned tissue properties such as relaxation rates, proton density, electromagnetic tissue properties, and iron concentration. These tissue property values were randomly selected from probability distribution functions derived from a thorough literature review. 
In addition to having unique tissue property values, different synthetic head realizations also possess unique structural geometry, created by morphing the boundary regions of different areas within normal physical constraints. This model of the human brain is then used to create synthetic MRI measurements. This is repeated thousands of times for different head shapes, volumes, tissue properties, and noise realizations. Collectively, this constitutes a training set that is similar to in vivo data but larger than the datasets available from clinical measurements. A 3D convolutional U-Net neural network architecture was used to train data-driven deep learning models to solve for iron concentration from raw MRI measurements. Performance was then tested on both synthetic data not used in training and real in vivo data. Results showed that the model trained on synthetic MRI measurements learns iron concentrations in areas of interest directly and more effectively than other existing QSM reconstruction methods. For comparison, models trained on random geometric shapes (as proposed in the DeepQSM method) are less effective than models trained on realistic synthetic head models. Such an accurate method for the quantitative measurement of iron deposits in the brain would be of great value in clinical studies aiming to understand the role of iron in neurological disease.
Keywords: magnetic resonance imaging, MRI, iron deposition, machine learning, quantitative susceptibility mapping
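The ill-posedness of Process I stems from the well-known k-space dipole kernel, which vanishes on a conical surface, so the field-to-susceptibility inversion loses information there. As a minimal illustration (not the authors' implementation, and with an assumed 0.4 mm isotropic voxel size), the standard forward model relating a susceptibility map to the measured relative field shift can be sketched with NumPy:

```python
import numpy as np

def dipole_kernel(shape, voxel_size=(0.4, 0.4, 0.4)):
    """k-space dipole kernel D(k) = 1/3 - kz^2/|k|^2, with B0 along z."""
    kx = np.fft.fftfreq(shape[0], voxel_size[0])
    ky = np.fft.fftfreq(shape[1], voxel_size[1])
    kz = np.fft.fftfreq(shape[2], voxel_size[2])
    KX, KY, KZ = np.meshgrid(kx, ky, kz, indexing="ij")
    k2 = KX**2 + KY**2 + KZ**2
    with np.errstate(divide="ignore", invalid="ignore"):
        D = 1.0 / 3.0 - KZ**2 / k2
    D[k2 == 0] = 0.0  # common convention: zero the undefined k=0 term
    return D

def forward_field(chi):
    """Simulate the relative field shift produced by susceptibility map chi."""
    D = dipole_kernel(chi.shape)
    return np.real(np.fft.ifftn(D * np.fft.fftn(chi)))
```

Because D(k) = 0 wherever kz² = |k|²/3 (the "magic angle" cone), dividing the measured field by D to recover chi is unstable, which is exactly why regularization or, as here, a learned inverse is needed.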
Procedia PDF Downloads 137
13077 A 0-1 Goal Programming Approach to Optimize the Layout of Hospital Units: A Case Study in an Emergency Department in Seoul
Authors: Farhood Rismanchian, Seong Hyeon Park, Young Hoon Lee
Abstract:
This paper proposes a method to optimize the layout of an emergency department (ED) based on real executions of care processes while considering several planning objectives simultaneously. Recently, demand for healthcare services has increased dramatically. As demand for healthcare services increases, so does the need for new healthcare buildings, as well as for redesigning and renovating existing ones. The value of applying a standard set of engineering facility planning and design techniques has already been proven in both manufacturing and service industries, with many significant gains in functional efficiency. However, the high complexity of care processes remains a major challenge to applying these methods in healthcare environments. Process mining techniques were applied in this study to tackle this complexity and to enhance care process analysis. Process-related information, such as clinical pathways, was extracted from the information system of an ED. A 0-1 goal programming approach is then proposed to find a single layout that simultaneously satisfies several goals. The proposed model was solved with the optimization software CPLEX 12. The solution reached using the proposed method yields a 42.2% improvement in the walking distance of normal patients and a 47.6% improvement in the walking distance of critical patients, at minimum relocation cost. It was observed that many patients must unnecessarily walk long distances during their visit to the emergency department because of an inefficient design. A carefully designed layout can significantly decrease patient walking distance and related complications.
Keywords: healthcare operation management, goal programming, facility layout problem, process mining, clinical processes
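In a 0-1 goal programming layout model, binary variables assign each unit to exactly one location, and the objective minimizes deviations from aspiration levels (goals) rather than the raw cost. As a toy sketch only (hypothetical units, flows, distances, and goal value; brute-force enumeration standing in for the CPLEX solver used in the paper):

```python
from itertools import permutations

# Hypothetical tiny instance: assign 3 ED units to 3 candidate locations.
# flow[(u, v)]: patient trips between units; dist[a][b]: walking distance.
flow = {("triage", "xray"): 10, ("triage", "lab"): 4, ("xray", "lab"): 2}
dist = [[0, 20, 35],
        [20, 0, 15],
        [35, 15, 0]]
GOAL = 500  # aspiration level for total flow-weighted walking distance

def total_distance(assign):
    """Flow-weighted walking distance for assignment {unit: location index}."""
    return sum(w * dist[assign[u]][assign[v]] for (u, v), w in flow.items())

def best_layout(units=("triage", "xray", "lab")):
    """Enumerate 0-1 assignments; minimize positive deviation above the goal."""
    best = None
    for perm in permutations(range(len(units))):
        assign = dict(zip(units, perm))
        dev = max(0, total_distance(assign) - GOAL)  # under-goal deviation = 0
        if best is None or dev < best[0]:
            best = (dev, assign)
    return best
```

In the actual 0-1 formulation, each candidate assignment becomes a binary variable and each goal contributes slack variables for over- and under-achievement; the weighted sum of those slacks is what the solver minimizes.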
Procedia PDF Downloads 295
13076 Estimation of Microbial-N Supply to Small Intestine in Angora Goats Fed by Different Roughage Sources
Authors: Nurcan Cetinkaya
Abstract:
The aim of the study was to estimate the microbial-N flow to the small intestine based on daily urinary excretion of purine derivatives (PD), mainly xanthine, hypoxanthine, uric acid, and allantoin, in Angora goats fed grass hay and concentrate (Period I) or barley straw and concentrate (Period II). Daily urine samples were collected during the last 3 days of each period from 10 individually penned Angora bucks (LW 30-35 kg, 2-3 years old) receiving ad libitum grass hay or barley straw and 300 g/d of concentrate. Fresh water was always available. 4N H2SO4 was added to the collected daily urine samples to keep the pH below 3 and avoid uric acid precipitation. Diluted urine samples were stored at -20°C until analysis. Urine samples were analyzed for xanthine, hypoxanthine, uric acid, allantoin, and creatinine by high-performance liquid chromatography (HPLC). Urine was diluted 1:15 with water, and duplicate samples were prepared for HPLC analysis. Calculated mean levels (n=60) of urinary xanthine, hypoxanthine, uric acid, allantoin, total PD, and creatinine excretion were 0.39±0.02, 0.26±0.03, 0.59±0.06, 5.91±0.50, 7.15±0.57, and 3.75±0.40 mmol/L, respectively, for Period I, and 0.35±0.03, 0.21±0.02, 0.55±0.05, 5.60±0.47, 6.71±0.46, and 3.73±0.41 mmol/L, respectively, for Period II. Mean values of Periods I and II were significantly different (P<0.05), except for creatinine excretion. The estimated mean microbial-N supply to the small intestine for Periods I and II in Angora goats was 5.72±0.46 and 5.41±0.61 g N/d, respectively. The effects of grass hay and barley straw feeding on microbial-N supply to the small intestine were significantly different (P<0.05). In conclusion, grass hay had a better effect on ruminal microbial protein synthesis than barley straw; therefore, grass hay is suggested as the roughage source in Angora goat feeding.
Keywords: angora goat, HPLC method, microbial-N supply to small intestine, urinary purine derivatives
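The abstract does not state which PD-to-microbial-N conversion was used. A commonly applied approach (not necessarily the authors') is the Chen and Gomes (1992) sheep model, often borrowed for goats: daily PD excretion Y (mmol/d; converting the reported mmol/L requires the unreported daily urine volume) is related to purines absorbed X, and X to microbial N. A hedged sketch under those assumptions:

```python
import math

def purines_absorbed(pd_mmol_per_day, bw_kg):
    """Solve Y = 0.84*X + 0.150*W^0.75*exp(-0.25*X) for X (mmol purines
    absorbed/d) given urinary PD excretion Y, by bisection (f is monotone)."""
    w075 = bw_kg ** 0.75
    f = lambda x: 0.84 * x + 0.150 * w075 * math.exp(-0.25 * x) - pd_mmol_per_day
    lo, hi = 0.0, 100.0
    for _ in range(60):
        mid = (lo + hi) / 2
        if f(mid) > 0:
            hi = mid
        else:
            lo = mid
    return (lo + hi) / 2

def microbial_n(pd_mmol_per_day, bw_kg):
    """Microbial N (g/d) = 70*X / (0.116 * 0.83 * 1000): 70 mg N/mmol purine,
    purine-N:total microbial N = 0.116, purine digestibility = 0.83."""
    x = purines_absorbed(pd_mmol_per_day, bw_kg)
    return 70.0 * x / (0.116 * 0.83 * 1000.0)
```

For example, a 32 kg buck excreting a hypothetical 8 mmol PD/d would be estimated to receive roughly 6-7 g microbial N/d, the same order as the 5.4-5.7 g N/d reported in the abstract.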
Procedia PDF Downloads 223