Search results for: data comparison
27207 Data Management System for Environmental Remediation
Authors: Elizaveta Petelina, Anton Sizo
Abstract:
Environmental remediation projects deal with a wide spectrum of data, including data collected during site assessment, execution of remediation activities, and environmental monitoring. Therefore, appropriate data management is required as a key factor for well-grounded decision making. The Environmental Data Management System (EDMS) was developed to address all necessary data management aspects, including efficient data handling and data interoperability, access to historical and current data, spatial and temporal analysis, 2D and 3D data visualization, mapping, and data sharing. The system focuses on supporting well-grounded decision making in relation to required mitigation measures and assessment of remediation success. The EDMS is a combination of enterprise and desktop level data management and Geographic Information System (GIS) tools assembled to assist environmental remediation, project planning and evaluation, and environmental monitoring of mine sites. The EDMS consists of seven main components: a Geodatabase that contains a spatial database to store and query spatially distributed data; a GIS and Web GIS component that combines desktop and server-based GIS solutions; a Field Data Collection component that contains tools for field work; a Quality Assurance (QA)/Quality Control (QC) component that combines operational procedures for QA and measures for QC; a Data Import and Export component that includes tools and templates to support project data flow; a Lab Data component that provides a connection between the EDMS and laboratory information management systems; and a Reporting component that includes server-based services for real-time report generation. The EDMS has been successfully implemented for Project CLEANS (Clean-up of Abandoned Northern Mines). Project CLEANS is a multi-year, multimillion-dollar project aimed at assessing and reclaiming 37 uranium mine sites in northern Saskatchewan, Canada. The EDMS has effectively facilitated integrated decision-making for CLEANS project managers and transparency amongst stakeholders.
Keywords: data management, environmental remediation, geographic information system, GIS, decision making
Procedia PDF Downloads 161
27206 Essential Oils of Polygonum L. Plants Growing in Kazakhstan and Their Antibacterial and Antifungal Activity
Authors: Dmitry Yu. Korulkin, Raissa A. Muzychkina
Abstract:
Bioactive substances of plant origin can be one of the promising means for the combined therapy of inflammation. The main advantages of medicinal plants are the mildness and breadth of their therapeutic effect on an organism, the absence of side effects and complications even if used continuously, and high tolerability by patients. Moreover, medicinal plants are often the only and/or most cost-effective sources of natural biologically active substances and medicines. Along with other biologically active groups of chemical compounds, essential oils with a wide range of pharmacological effects have become well established in medical practice. Essential oil was obtained by hydrodistillation of the air-dry aerial parts of Polygonum L. plants using a Clevenger apparatus. The qualitative composition of the essential oils was analyzed by chromatography-mass spectrometry using an Agilent 6890N apparatus. The qualitative analysis is based on the comparison of retention times and full mass spectra with respective data on components of reference oils and pure compounds, where available, and with the data of the mass-spectra libraries Wiley 7th edition and NIST 02. The main components of the essential oils are, for: Polygonum amphibium L. - γ-terpinene, borneol, piperitol, 1,8-cineole, α-pinene, linalool, terpinolene and sabinene; Polygonum minus Huds. Fl. Angl. - linalool, terpinolene, camphene, borneol, 1,8-cineole, α-pinene, 4-terpineol and 1-octen-3-ol; Polygonum alpinum All. - camphene, sabinene, 1-octen-3-ol, 4-carene, p- and o-cymol, γ-terpinene, borneol, -terpineol; Polygonum persicaria L. - α-pinene, sabinene, -terpinene, 4-carene, 1,8-cineole, borneol, 4-terpineol. Antibacterial activity was investigated against strains of the gram-positive bacteria Staphylococcus aureus, Bacillus subtilis and Streptococcus agalactiae, against the gram-negative strain Escherichia coli, and against the yeast fungus Candida albicans, using the agar diffusion method. The medicines of comparison were gentamicin for bacteria and nystatin for the yeast fungus Candida albicans. It has been shown that Polygonum L. essential oils have a moderate antibacterial effect on gram-positive microorganisms and weak antifungal activity against the Candida albicans yeast fungus. At the second stage of our research, the wound-healing properties of a 3% essential oil ointment were studied on a model of flat dermal wounds. The speed of wound healing in rats of the different groups was judged by assessing the wound area over time. During the study of wound-healing properties, no disturbances were observed in either group in the general condition and behavior of the animals, food intake, or excretion. The wound-healing action of the 3% ointment based on Polygonum L. essential oil and polyethylene glycol is comparable with the action of the reference substances. As more favorable healing dynamics were observed in the experimental group than in the control group, the tested ointment can be deemed promising for further detailed study as a wound-healing agent.
Keywords: antibacterial, antifungal, bioactive substances, essential oils, isolation, Polygonum L.
Procedia PDF Downloads 533
27205 Leadership and Corporate Social Responsibility: The Role of Spiritual Intelligence
Authors: Meghan E. Murray, Carri R. Tolmie
Abstract:
This study aims to identify potential factors and widely applicable best practices that can contribute to improving corporate social responsibility (CSR) and corporate performance for firms by exploring the relationship between transformational leadership, spiritual intelligence, and emotional intelligence. Corporate social responsibility is when companies are cognizant of the impact of their actions on the economy, their communities, the environment, and the world as a whole while executing business practices accordingly. The prevalence of CSR has continuously strengthened over the past few years and is now a common practice in the business world, with such efforts coinciding with what stakeholders and the public now expect from corporations. Because of this, it is extremely important to be able to pinpoint factors and best practices that can improve CSR within corporations. One potential factor that may lead to improved CSR is spiritual intelligence (SQ), or the ability to recognize and live with a purpose larger than oneself. Spiritual intelligence is a measurable skill, just like emotional intelligence (EQ), and can be improved through purposeful and targeted coaching. This research project consists of two studies. Study 1 is a case study comparison of a benefit corporation and a non-benefit corporation. This study will examine the role of SQ and EQ as moderators in the relationship between the transformational leadership of employees within each company and the perception of each firm’s CSR and corporate performance. The project methodology includes creating and administering a survey comprised of multiple pre-established scales on transformational leadership, spiritual intelligence, emotional intelligence, CSR, and corporate performance. Multiple regression analysis will be used to extract significant findings from the collected data. Study 2 will dive deeper into spiritual intelligence itself by analyzing pre-existing data and identifying key relationships that may provide value to companies and their stakeholders. This will be done by performing multiple regression analysis on anonymized data provided by Deep Change, a company that has created an advanced, proprietary system to measure spiritual intelligence. Based on the results of both studies, this research aims to uncover best practices, including the unique contribution of spiritual intelligence, that can be utilized by organizations to help enhance their corporate social responsibility. If it is found that high spiritual and emotional intelligence can positively impact CSR efforts, then corporations will have a tangible way to enhance their CSR: providing targeted employees with training and coaching to increase their SQ and EQ.
Keywords: corporate social responsibility, CSR, corporate performance, emotional intelligence, EQ, spiritual intelligence, SQ, transformational leadership
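The moderation analysis planned in Study 1 can be expressed as a regression with an interaction term. Below is a minimal sketch, assuming hypothetical column names and synthetic scale scores; it is an illustration of the technique, not the authors' analysis.

```python
# Hypothetical moderated-regression sketch: does spiritual intelligence (sq)
# moderate the effect of transformational leadership on perceived CSR?
import pandas as pd
import statsmodels.formula.api as smf

# Synthetic survey data; one row per respondent, scale scores already averaged.
survey = pd.DataFrame({
    "leadership": [3.2, 4.1, 2.8, 4.5, 3.9, 3.0, 4.7, 2.5, 3.6, 4.3],
    "sq":         [3.5, 4.0, 2.9, 4.8, 3.6, 3.1, 4.4, 2.7, 3.8, 4.1],
    "csr":        [3.0, 4.2, 2.6, 4.9, 3.8, 2.9, 4.5, 2.4, 3.7, 4.4],
})

# "leadership * sq" expands to both main effects plus the interaction term;
# a significant interaction coefficient indicates moderation.
model = smf.ols("csr ~ leadership * sq", data=survey).fit()
print(model.summary())
```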
Procedia PDF Downloads 127
27204 Spatio-Temporal Dynamics of Snow Cover and Melt/Freeze Conditions in Indian Himalayas
Authors: Rajashree Bothale, Venkateswara Rao
Abstract:
The Indian Himalayas, also known as the Third Pole, with an area of 0.9 million sq km, contain the largest reserve of ice and snow outside the poles and affect global climate and water availability in the perennial rivers. The variations in the extent of snow are indicative of climate change. Snow melt is sensitive to climate change (warming) and is also an influencing factor on the climate. A study of the spatio-temporal dynamics of snow cover and melt/freeze conditions is carried out using space-based observations in the visible and microwave bands. An analysis period of 2003 to 2015 is selected to identify and map the changes and trend in snow cover using Indian Remote Sensing (IRS) Advanced Wide Field Sensor (AWiFS) and Moderate Resolution Imaging Spectroradiometer (MODIS) data. For mapping of wet snow, microwave data is used, which is sensitive to the presence of liquid water in the snow. The present study uses Ku-band scatterometer data from the QuikSCAT and Oceansat satellites. The enhanced-resolution images at 2.25 km from the 13.6 GHz sensor are used to analyze the backscatter response to dry and wet snow for the period 2000-2013 using a threshold method. The study area is divided into three major river basins, namely Brahmaputra, Ganges and Indus, which also represent the diversification of the Himalayas as the Eastern Himalayas, Central Himalayas and Western Himalayas. Topographic variations across different zones show that a majority of the study area lies in the 4000-5500 m elevation range and that the maximum percentage of high-elevation areas (>5500 m) lies in the Western Himalayas. The effect of climate change could be seen in the extent of snow cover and also in the melt/freeze status in different parts of the Himalayas. The melt onset day becomes later from east (March 11 ± 11) to west (May 12 ± 15), with large variation in the number of melt days. The Western Himalayas have a shorter melt duration (120 ± 15 days) in comparison to the Eastern Himalayas (150 ± 16 days), providing less time for melt. Eastern Himalaya glaciers are prone to enhanced melt due to the large melt duration. The extent of snow cover, coupled with the melt/freeze status indicating solar radiation, can be used as a precursor for monsoon prediction.
Keywords: Indian Himalaya, Scatterometer, Snow Melt/Freeze, AWiFS, Cryosphere
Procedia PDF Downloads 260
27203 Comparative Analysis of Islamic Bank in Indonesia and Malaysia with Risk Profile, Good Corporate Governance, Earnings, and Capital Method: Performance of Business Function and Social Function Perspective
Authors: Achsania Hendratmi, Nisful Laila, Fatin Fadhilah Hasib, Puji Sucia Sukmaningrum
Abstract:
This study aims to compare and examine the differences between Islamic banks in Indonesia and Islamic banks in Malaysia using the RGEC method (Risk Profile, Good Corporate Governance, Earnings, and Capital). It examines the comparison in business and social performance of eleven Islamic banks in Indonesia and fifteen Islamic banks in Malaysia. This research used a quantitative approach, and the data were collected from the annual reports of the banks selected as the sample over the period 2011-2015. The results of the Independent Samples t-test and Mann-Whitney test showed that there were differences in the business performance of Islamic banks in Indonesia and Malaysia as seen from the aspects of risk profile (FDR), GCG, and earnings (ROA). Also, there were differences in business and social performance as seen from the earnings (ROE), capital (CAR), and sharia conformity indicator (PSR and ZR) aspects.
Keywords: business performance, Islamic banks, RGEC, social performance
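As a concrete illustration of the two tests named above, the sketch below runs an Independent Samples t-test and a Mann-Whitney test on synthetic ROA values; the figures are invented and not taken from the study.

```python
# Two-sample comparison sketch with synthetic ROA (%) values for two groups.
from scipy import stats

roa_indonesia = [0.8, 1.2, 0.5, 1.6, 0.9, 1.1, 0.7, 1.3, 1.0, 0.6, 1.4]
roa_malaysia = [1.5, 1.9, 1.1, 2.0, 1.7, 1.4, 1.8, 1.6, 1.2, 2.1, 1.3, 1.9, 1.5, 1.7, 1.6]

# Parametric comparison (Welch's t-test, no equal-variance assumption)
t_stat, t_p = stats.ttest_ind(roa_indonesia, roa_malaysia, equal_var=False)
# Non-parametric alternative when normality is doubtful
u_stat, u_p = stats.mannwhitneyu(roa_indonesia, roa_malaysia, alternative="two-sided")

print(f"t-test: t={t_stat:.3f}, p={t_p:.4f}")
print(f"Mann-Whitney: U={u_stat:.1f}, p={u_p:.4f}")
```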
Procedia PDF Downloads 294
27202 Synthesis and Magnetic Properties of Six-Lines Ferrihydrite Nanoparticles
Authors: Chandni Rani, S. D. Tiwari
Abstract:
Ferrihydrite is one of the distinct minerals in the family of oxides, hydroxides and oxyhydroxides of iron. It is a nanocrystalline material. It occurs naturally in different sediments and soil systems and is also found in the core of ferritin, an iron-storage protein. This material can also be synthesized by suitable chemical methods in laboratories. It is known as a poorly crystalline iron(III) oxyhydroxide. Due to its poor crystallinity, there are very broad peaks in x-ray diffraction. Depending on the number of peaks in the x-ray diffraction pattern, it is classified as two-line or six-line ferrihydrite. The average crystallite size for these two forms is found to be about 2 nm to 5 nm. The exact crystal structure of this system is still under debate. Of these two forms, six-line ferrihydrite is more ordered in comparison to two-line ferrihydrite. The magnetic behavior of two-line ferrihydrite nanoparticles is reasonably well studied, but the magnetic behavior of six-line ferrihydrite nanoparticles has attracted much less attention from researchers. This motivated us to work on the magnetic properties of six-line ferrihydrite nanoparticles. In this work, we present the synthesis, structural characterization and magnetic behavior of 5 nm six-line ferrihydrite nanoparticles. X-ray diffraction and transmission electron microscopy are used for structural characterization of this system. Magnetization measurements are performed and the data fitted at different temperatures. The effect of the magnetic moment distribution is then also determined. All these observations are discussed in detail.
Keywords: nanoparticles, magnetism, superparamagnetism, magnetic anisotropy
Procedia PDF Downloads 339
27201 Fighting for What’s Fair: Illegitimacy Appraisals as Drivers of Different Collective Action Responses to Economic Inequality
Authors: Finn Lannon, Jenny Roth, Roland Deutsch, Eric Igou
Abstract:
The world continues to be rife with economic inequality, which has an impact on how people think and behave in response to large and often growing gaps in wealth. Large gaps in earnings between groups within a particular organization, area or society can create tension between groups. Collective action tendencies (to protest, sign a petition, vote on behalf of an ingroup, etc.) are also a growing phenomenon globally. Research shows that economic inequality promotes social processes such as appraisals of illegitimacy, which are recognized antecedents of collective action. This paper examines different types of collective action intentions among middle-status group members in response to economic inequality in two studies. Study 1 (N = 72) demonstrates a causal link between high economic inequality and the collective action intentions of middle-status group members both to reduce inequality and to improve group status. A second pre-registered study (N = 432) examines key drivers of these relationships, including illegitimacy appraisals and the direction of intergroup comparison. Adding to the current understanding of the topic, distinctions between the illegitimacy of one’s group status and the illegitimacy of societal inequality are found to mediate key relationships between economic inequality and the relevant collective action types. The direction of intergroup comparison (upwards vs. downwards) is also shown to have a significant impact on collective action intentions to improve group status. The findings add to the understanding of the consequences of economic inequality and the drivers of collective action intentions.
Keywords: economic inequality, collective action, legitimacy, social psychology
Procedia PDF Downloads 91
27200 Quantifying Automation in the Architectural Design Process via a Framework Based on Task Breakdown Systems and Recursive Analysis: An Exploratory Study
Authors: D. M. Samartsev, A. G. Copping
Abstract:
As with all industries, architects are using increasing amounts of automation within practice, with approaches such as generative design and the use of AI becoming more commonplace. However, the discourse on the rate at which the architectural design process is being automated is often subjective and lacking in objective figures and measurements. This results in confusion and creates barriers to effective discourse on the subject, in turn limiting the ability of architects, policy makers, and members of the public to make informed decisions in the area of design automation. This paper proposes the use of a framework to quantify the progress of automation within the design process. The use of a reductionist analysis of the design process allows it to be quantified in a manner that enables direct comparison across different times, as well as locations and projects. The methodology is informed by the design of this framework: it takes on the aspects of a systematic review but is compressed in time to allow an initial set of data to verify the validity of the framework. The use of such a framework of quantification enables various practical uses, such as predicting the future of the architectural industry with regard to which tasks will be automated, as well as making more informed decisions on the subject of automation on multiple levels, ranging from individual decisions to policy making by governing bodies such as the RIBA. This is achieved by analyzing the design process as a generic task that needs to be performed, then using the principles of work breakdown systems to split the task of designing an entire building into smaller tasks, which can then be recursively split further as required. Each task is then assigned a series of milestones that allow for the objective analysis of its automation progress. By combining these two approaches it is possible to create a data structure that describes how much of the architectural design process is automated. The data gathered in the paper serve the dual purposes of providing the framework with validation, as well as giving insights into the current situation of automation within the architectural design process. The framework can be interrogated in many ways, and preliminary analysis shows that almost 40% of the architectural design process has been automated in some practical fashion at the time of writing, with the rate at which progress is made slowly increasing over the years, and with the majority of tasks in the design process reaching a new milestone in automation in less than 6 years. Additionally, a further 15% of the design process is currently being automated in some way, with various products in development but not yet released to the industry. Lastly, various limitations of the framework are examined in this paper, as well as further areas of study.
Keywords: analysis, architecture, automation, design process, technology
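The abstract does not publish the framework's data structure, but the recursive work-breakdown idea it describes can be sketched as a task tree whose leaves carry automation scores. The sketch below is assumption-laden: task names, the equal-weighting rule, and all scores are invented for illustration.

```python
# Minimal recursive work-breakdown sketch: each leaf task carries an
# automation score in [0, 1]; parents aggregate their subtasks.
from dataclasses import dataclass, field

@dataclass
class Task:
    name: str
    automation: float = 0.0          # fraction of this leaf task automated
    subtasks: list = field(default_factory=list)

    def automated_fraction(self) -> float:
        """Average automation over subtasks (equal weights assumed)."""
        if not self.subtasks:
            return self.automation
        return sum(t.automated_fraction() for t in self.subtasks) / len(self.subtasks)

design = Task("Design building", subtasks=[
    Task("Concept design", subtasks=[
        Task("Massing studies", automation=0.6),   # e.g. generative tools
        Task("Client briefing", automation=0.0),   # human-only
    ]),
    Task("Technical design", subtasks=[
        Task("Structural sizing", automation=0.7),
        Task("Drawing production", automation=0.5),
    ]),
])
print(f"Automated fraction: {design.automated_fraction():.0%}")
```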
Procedia PDF Downloads 104
27199 Computational Approaches to Study Lineage Plasticity in Human Pancreatic Ductal Adenocarcinoma
Authors: Almudena Espin Perez, Tyler Risom, Carl Pelz, Isabel English, Robert M. Angelo, Rosalie Sears, Andrew J. Gentles
Abstract:
Pancreatic ductal adenocarcinoma (PDAC) is one of the deadliest malignancies. The role of the tumor microenvironment (TME) is gaining significant attention in cancer research. Despite ongoing efforts, the nature of the interactions between tumors, immune cells, and stromal cells remains poorly understood. The cell-intrinsic properties that govern cell lineage plasticity in PDAC and the extrinsic influences of immune populations require technically challenging approaches due to the inherently heterogeneous nature of PDAC. Understanding the cell lineage plasticity of PDAC will improve the development of novel strategies that could be translated to the clinic. Members of the team have demonstrated that the acquisition of ductal-to-neuroendocrine lineage plasticity in PDAC confers therapeutic resistance and is a biomarker of poor outcomes in patients. Our approach combines computational methods for deconvolving bulk transcriptomic cancer data using CIBERSORTx and high-throughput single-cell imaging using Multiplexed Ion Beam Imaging (MIBI) to study lineage plasticity in PDAC and its relationship to the infiltrating immune system. The CIBERSORTx algorithm uses signature matrices from immune cells and stroma, derived from sorted and single-cell data, in order to 1) infer the fractions of different immune cell types and stromal cells in bulk gene expression data and 2) impute a representative transcriptome profile for each cell type. We studied a unique set of 300 genomically well-characterized primary PDAC samples with rich clinical annotation. We deconvolved the PDAC transcriptome profiles using CIBERSORTx, leveraging publicly available single-cell RNA-seq data from normal pancreatic tissue and PDAC to estimate cell type proportions in PDAC, and digitally reconstructed cell-specific transcriptional profiles from our study dataset. We built signature matrices and optimized them by simulations and comparison to ground-truth data. We identified cell-type-specific transcriptional programs that contribute to cancer cell lineage plasticity, especially in the ductal compartment. We also studied cell differentiation hierarchies using CytoTRACE and predicted cell lineage trajectories for acinar and ductal cells that we believe pinpoint relevant information on PDAC progression. Collaborators (Angelo lab, Stanford University) have led the development of the Multiplexed Ion Beam Imaging (MIBI) platform for spatial proteomics. In the near future, we will use MIBI data from a tissue microarray of 40 PDAC samples to understand the spatial relationship between cancer cell lineage plasticity and stromal cells, focused on infiltrating immune cells, using the relevant markers of PDAC plasticity identified from the RNA-seq analysis.
Keywords: deconvolution, imaging, microenvironment, PDAC
Procedia PDF Downloads 128
27198 Analysis of Cooperative Learning Behavior Based on the Data of Students' Movement
Authors: Wang Lin, Li Zhiqiang
Abstract:
The purpose of this paper is to analyze cooperative learning behavior patterns based on data of students' movement. The study first reviewed cooperative learning theory and its research status, and briefly introduced the k-means clustering algorithm. Then, it used the clustering algorithm and mathematical statistics to analyze the activity rhythm of individual students and groups in different functional areas, based on the movement data provided by 10 first-year graduate students. It also focused on the analysis of students' behavior in the learning area and explored the patterns of cooperative learning behavior. The results showed that the cooperative learning behavior analysis method based on movement data proposed in this paper is feasible. From the results of the data analysis, the behavioral characteristics of students and their cooperative learning behavior patterns could be identified.
Keywords: behavior pattern, cooperative learning, data analysis, k-means clustering algorithm
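A minimal sketch of the clustering step follows, assuming the movement records have been reduced to per-student time shares across functional areas; the feature construction and data are invented, not the authors' pipeline.

```python
# k-means sketch on hypothetical per-student movement features:
# fraction of time spent in learning, rest, and discussion areas.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
# 10 students x 3 features; rows sum to 1 (time shares per functional area)
time_shares = rng.dirichlet(alpha=[2, 1, 1], size=10)

kmeans = KMeans(n_clusters=3, n_init=10, random_state=0).fit(time_shares)
for student, label in enumerate(kmeans.labels_, start=1):
    print(f"student {student}: behavior cluster {label}")
```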
Procedia PDF Downloads 187
27197 Host Responses in Peri-Implant Tissue in Comparison to Periodontal Tissue
Authors: Raviporn Madarasmi, Anjalee Vacharaksa, Pravej Serichetaphongse
Abstract:
The host response in peri-implant tissue may differ from that in periodontal tissue in a healthy individual. The purpose of this study is to investigate the expression of inflammatory cytokines in peri-implant crevicular fluid (PICF) from single implants with different abutment types in comparison to healthy periodontal tissue. Nineteen participants with healthy implants and teeth were recruited according to inclusion and exclusion criteria. PICF and gingival crevicular fluid (GCF) were collected using sterile paper points. The expression levels of inflammatory cytokines, including IL-1α, IL-1β, TNF-α, IFN-γ, IL-6, and IL-8, were assessed using enzyme-linked immunosorbent assay (ELISA). A paired t-test was used to compare the expression levels of inflammatory cytokines around natural teeth and peri-implant tissue in the PICF and GCF of the same individual. An independent t-test was used to compare the expression levels of inflammatory cytokines in PICF from titanium and UCLA abutments. Expression of IL-6, TNF-α, and IFN-γ in PICF was not statistically different from GCF in either the titanium or the UCLA abutment group. However, the level of IL-1α in PICF from the implants with UCLA abutments was significantly higher than in GCF (P=0.030). In addition, the level of IL-1β in PICF from the implants with titanium abutments was significantly higher than in GCF (P=0.032). When the different abutment types were compared, IL-8 expression in PICF from implants with UCLA abutments was significantly higher than with titanium abutments (P=0.003).
Keywords: abutment, dental implant, gingival crevicular fluid and peri-implant crevicular fluid
Procedia PDF Downloads 185
27196 Combining Diffusion Maps and Diffusion Models for Enhanced Data Analysis
Authors: Meng Su
Abstract:
High-dimensional data analysis often presents challenges in capturing the complex, nonlinear relationships and manifold structures inherent to the data. This article presents a novel approach that leverages the strengths of two powerful techniques, Diffusion Maps and Diffusion Probabilistic Models (DPMs), to address these challenges. By integrating the dimensionality reduction capability of Diffusion Maps with the data modeling ability of DPMs, the proposed method aims to provide a comprehensive solution for analyzing and generating high-dimensional data. The Diffusion Map technique preserves the nonlinear relationships and manifold structure of the data by mapping it to a lower-dimensional space using the eigenvectors of the graph Laplacian matrix. Meanwhile, DPMs capture the dependencies within the data, enabling effective modeling and generation of new data points in the low-dimensional space. The generated data points can then be mapped back to the original high-dimensional space, ensuring consistency with the underlying manifold structure. Through a detailed example implementation, the article demonstrates the potential of the proposed hybrid approach to achieve more accurate and effective modeling and generation of complex, high-dimensional data. Furthermore, it discusses possible applications in various domains, such as image synthesis, time-series forecasting, and anomaly detection, and outlines future research directions for enhancing the scalability, performance, and integration with other machine learning techniques. By combining the strengths of Diffusion Maps and DPMs, this work paves the way for more advanced and robust data analysis methods.
Keywords: diffusion maps, diffusion probabilistic models (DPMs), manifold learning, high-dimensional data analysis
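A compact sketch of the Diffusion Map step described above follows: build a Gaussian affinity kernel, normalize it into a Markov transition matrix (a common equivalent of the graph-Laplacian formulation), and embed with the leading non-trivial eigenvectors. Parameter values are illustrative.

```python
# Diffusion-map embedding sketch: kernel -> Markov matrix -> eigenvectors.
import numpy as np
from scipy.spatial.distance import cdist

def diffusion_map(X, epsilon=1.0, n_components=2, t=1):
    K = np.exp(-cdist(X, X, "sqeuclidean") / epsilon)   # Gaussian affinities
    P = K / K.sum(axis=1, keepdims=True)                # row-stochastic transition matrix
    eigvals, eigvecs = np.linalg.eig(P)
    order = np.argsort(-eigvals.real)                   # sort by decreasing eigenvalue
    eigvals, eigvecs = eigvals.real[order], eigvecs.real[:, order]
    # Skip the trivial constant eigenvector; scale by lambda^t (diffusion time t)
    return eigvecs[:, 1:n_components + 1] * eigvals[1:n_components + 1] ** t

X = np.random.default_rng(1).normal(size=(200, 10))     # toy high-dimensional data
embedding = diffusion_map(X, epsilon=5.0)
print(embedding.shape)  # (200, 2): low-dimensional diffusion coordinates
```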
Procedia PDF Downloads 108
27195 A Security Cloud Storage Scheme Based Accountable Key-Policy Attribute-Based Encryption without Key Escrow
Authors: Ming Lun Wang, Yan Wang, Ning Ruo Sun
Abstract:
With the development of cloud computing, more and more users are starting to utilize cloud storage services. However, some issues exist: 1) the cloud server steals the shared data, 2) sharers collude with the cloud server to steal the shared data, 3) the cloud server tampers with the shared data, 4) sharers and the key generation center (KGC) conspire to steal the shared data. In this paper, we use the advanced encryption standard (AES), hash algorithms, and accountable key-policy attribute-based encryption without key escrow (WOKE-AKP-ABE) to build a secure cloud storage scheme. Moreover, the data are encrypted to protect privacy. We use hash algorithms to prevent the cloud server from tampering with the data uploaded to the cloud. Analysis results show that this scheme can resist collusion attacks.
Keywords: cloud storage security, sharing storage, attributes, hash algorithm
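Two of the building blocks named above, AES-based encryption before upload and a hash to detect server-side tampering, can be sketched with standard primitives; this is an illustration only, and the scheme's WOKE-AKP-ABE component is not reproduced here.

```python
# Encrypt-before-upload plus hash-based integrity check (illustrative sketch).
import hashlib
from cryptography.fernet import Fernet  # AES-128-CBC + HMAC under the hood

key = Fernet.generate_key()             # kept by the data owner, never the cloud
cipher = Fernet(key)

plaintext = b"shared project data"
ciphertext = cipher.encrypt(plaintext)          # what the cloud actually stores
digest = hashlib.sha256(ciphertext).hexdigest() # recorded by the owner before upload

# Later: verify the cloud has not tampered with the stored object, then decrypt.
assert hashlib.sha256(ciphertext).hexdigest() == digest
assert cipher.decrypt(ciphertext) == plaintext
```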
Procedia PDF Downloads 390
27194 Comparison of Methodologies to Compute the Probabilistic Seismic Hazard Involving Faults and Associated Uncertainties
Authors: Aude Gounelle, Gloria Senfaute, Ludivine Saint-Mard, Thomas Chartier
Abstract:
The long-term deformation rates of faults are not fully captured by Probabilistic Seismic Hazard Assessment (PSHA). PSHA that uses catalogues to develop area or smoothed-seismicity sources is limited by the data available to constrain future earthquake activity rates. The integration of faults in PSHA can at least partially address the long-term deformation. However, careful treatment of fault sources is required, particularly in low strain rate regions, where estimated seismic hazard levels are highly sensitive to assumptions concerning fault geometry, segmentation and slip rate. When integrating faults in PSHA, various constraints on earthquake rates from geologic and seismologic data have to be satisfied. For low strain rate regions where such data are scarce, this is especially challenging. Integrating faults in PSHA requires conversion of the geologic and seismologic data into fault geometries and slip rates, and then into earthquake activity rates. Several approaches exist for translating slip rates into earthquake activity rates. In the most frequently used approach, the background earthquakes are handled using a truncated approach, in which earthquakes with a magnitude lower than or equal to a threshold magnitude (Mw) occur in the background zone, with a rate defined by the rate in the earthquake catalogue, while magnitudes higher than the threshold are located on the fault, with a rate defined using the average slip rate of the fault. As highlighted by several studies, seismic events with magnitudes larger than the selected threshold may potentially occur in the background and not only on the fault, especially in regions of slow tectonic deformation. It is also known that several sections of a fault, or several faults, can rupture during a single fault-to-fault rupture. It is therefore essential to apply a consistent modelling procedure that allows a large set of possible fault-to-fault ruptures to occur aleatorily in the hazard model while reflecting the individual slip rate of each section of the fault. In 2019, a tool named SHERIFS (Seismic Hazard and Earthquake Rates in Fault Systems) was published. The tool uses a methodology to calculate the earthquake rates in a fault system in which the slip-rate budget of each fault is converted into rupture rates for all possible single-fault and fault-to-fault ruptures. The objective of this paper is to compare the SHERIFS method with another frequently used model to analyse the impact on the seismic hazard and, through sensitivity studies, better understand the influence of key parameters and assumptions. For this application, a simplified but realistic case study was selected in an area of moderate to high seismicity (southeast of France) where the fault is assumed to have a low strain rate.
Keywords: deformation rates, faults, probabilistic seismic hazard, PSHA
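For readers unfamiliar with the slip-rate-to-earthquake-rate conversion discussed above, the sketch below shows the classical moment-balance step that such models build on; it is not the SHERIFS algorithm, and all fault parameters are illustrative assumptions.

```python
# Moment-balance sketch: convert a fault slip rate into an annual rate of
# "characteristic" earthquakes via the seismic moment (all values assumed).
shear_modulus = 3.0e10   # Pa, a typical crustal value
fault_length = 40e3      # m
fault_width = 15e3       # m
slip_rate = 1.0e-3       # m/yr, i.e. 1 mm/yr: a slow fault

# Seismic moment rate accumulated by the fault each year (N*m/yr)
moment_rate = shear_modulus * fault_length * fault_width * slip_rate

mw_char = 6.8                           # assumed characteristic magnitude
m0_char = 10 ** (1.5 * mw_char + 9.05)  # Hanks-Kanamori moment in N*m

# Annual event rate if the whole budget is released in characteristic events
annual_rate = moment_rate / m0_char
print(f"~1 Mw {mw_char} event every {1 / annual_rate:.0f} years")
```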
Procedia PDF Downloads 66
27193 Passenger Preferences on Airline Check-In Methods: Traditional Counter Check-In Versus Common-Use Self-Service Kiosk
Authors: Cruz Queen Allysa Rose, Bautista Joymeeh Anne, Lantoria Kaye, Barretto Katya Louise
Abstract:
The study presents the preferences of passengers regarding the quality of service provided by the two airline check-in methods currently present in airports: traditional counter check-in and common-use self-service kiosks. A previous study has shown that airlines perceive self-service kiosks alone as sufficient to ensure adequate service and customer satisfaction, while, in contrast, agents and passengers stated that kiosks alone are not enough and that human interaction is essential. With reference to former studies that established opposing ideas about the more favorable airline check-in method to employ, the purpose of this study is to present a recommendation that fills the gap between these conflicting ideas by comparing the perceived quality of service through the RATER model. Furthermore, this study discusses the major competencies present in each method, which are supported by two theories: the FIRO Theory of Needs, upholding the importance of inclusion, control and affection, and Queueing Theory, which points out the discipline of passengers and the length of the queue line as important factors affecting service quality. The findings of the study were based on data gathered by the researchers from selected Thomasian third-year and fourth-year college students enrolled in the first semester of the academic year 2014-2015 who had already experienced both airline check-in methods, selected through stratified probability sampling. The statistical treatments applied to interpret the data were mean, frequency, standard deviation, t-test, logistic regression and the chi-square test. The study revealed that passenger preference is more strongly affected by the satisfaction experienced with common-use self-service kiosks than with traditional counter check-in.
Keywords: traditional counter check-in, common-use self-service kiosks, airline check-in methods
Procedia PDF Downloads 407
27192 Human’s Sensitive Reactions during Different Geomagnetic Activity: An Experimental Study in Natural and Simulated Conditions
Authors: Ketevan Janashia, Tamar Tsibadze, Levan Tvildiani, Nikoloz Invia, Elguja Kubaneishvili, Vasili Kukhianidze, George Ramishvili
Abstract:
This study considers the possible effects of geomagnetic activity (GMA) on humans situated on Earth by performing experiments on specific sensitive reactions in humans, both in natural conditions during different GMA and under simulation of different GMA in the lab. The measurements of autonomic nervous system (ANS) responses to different GMA, via heart rate variability (HRV) indices and a stress index (SI), and their comparison with the K-index of GMA are presented and discussed. The results of the experiments indicate an intensification of the sympathetic part of the ANS as a stress reaction of the human organism when it is exposed to a high level of GMA, in both natural and simulated conditions. Aim: We tested the hypothesis that the geomagnetic field, when disturbed, can have effects on the human ANS, causing specific sensitive stress reactions depending on the initial type of ANS regulation. Methods: The study focuses on the effects of different GMA on the ANS by comparing the HRV indices and stress index (SI) of n = 78 healthy male volunteers, 18-24 years old, with the K-index of GMA. Experiments were performed in natural conditions on days of low (K = 1-3) and high (K = 5-7) GMA, as well as in the lab using a device for geomagnetic storm (GMS) compensation and simulation. Results: In comparison with days of low GMA (K = 1-3), the initial values of HRV shifted towards intensification of the sympathetic part (SP) of the ANS during days of GMSs (K = 5-7), with statistically significant p-values: HR (heart rate, p = 0.001), SDNN (standard deviation of all normal-to-normal intervals, p = 0.0001), RMSSD (the square root of the arithmetic mean of the sum of the squares of differences between adjacent NN intervals, p = 0.0001). In comparison with conditions under the GMS compensation mode (K = 0, B = 0-5 nT), the ANS balance was observed to shift during exposure to simulated GMSs with intensities in the range of natural GMSs (K = 7, B = 200 nT). However, the initial values of the ANS resulted in different dynamics of variation depending on the GMA level. In the case of an initial balanced regulation type (HR > 80), significant intensification of the SP was observed, with p-values: HR (p = 0.0001), SDNN (p = 0.047), RMSSD (p = 0.28), LF/HF (p = 0.03), SI (p = 0.02); while in the case of an initial parasympathetic regulation type (HR < 80), an insignificant shift towards intensification of the parasympathetic part (PP) was observed. Conclusions: The results indicate an intensification of the SP as a stress reaction of the human organism when it is exposed to high levels of GMA, in both natural and simulated conditions.
Keywords: autonomic nervous system, device of magneto compensation/simulation, geomagnetic storms, heart rate variability
Procedia PDF Downloads 142
27191 Development of Energy Benchmarks Using Mandatory Energy and Emissions Reporting Data: Ontario Post-Secondary Residences
Authors: C. Xavier Mendieta, J. J McArthur
Abstract:
Governments are playing an increasingly active role in reducing carbon emissions, and a key strategy has been the introduction of mandatory energy disclosure policies. These policies have resulted in a significant amount of publicly available data, providing researchers with a unique opportunity to develop location-specific energy and carbon emission benchmarks from this data set, which can then be used to develop building archetypes and to inform urban energy models. This study presents the development of such a benchmark using the public reporting data. Data from Ontario’s Ministry of Energy for Post-Secondary Educational Institutions are being used to develop a series of building-archetype dynamic building loads and energy benchmarks to fill a gap in the currently available building database. This paper presents the development of a benchmark for college and university residences within ASHRAE climate zone 6 areas in Ontario using the mandatory disclosure energy and greenhouse gas emissions data. The methodology presented includes data cleaning, statistical analysis, and benchmark development, and lessons learned from this investigation are presented and discussed to inform the development of future energy benchmarks from this larger data set. The key findings from this initial benchmarking study are: (1) the importance of careful data screening and outlier identification to develop a valid dataset; (2) the key features used to develop a model of the data are building age, size, and occupancy schedules, and these can be used to estimate energy consumption; and (3) policy changes affecting primary energy generation significantly affected greenhouse gas emissions, and consideration of these factors was critical to evaluate the validity of the reported data.
Keywords: building archetypes, data analysis, energy benchmarks, GHG emissions
Procedia PDF Downloads 306
27190 Numerical Simulation of High Strength Steel Hot-Finished Elliptical Hollow Section Subjected to Uniaxial Eccentric Compression
Authors: Zhengyi Kong, Xueqing Wang, Quang-Viet Vu
Abstract:
In this study, the structural behavior of high strength steel (HSS) hot-finished elliptical hollow sections (EHS) subjected to uniaxial eccentric compression is investigated. A finite element method for predicting the cross-section resistance of HSS hot-finished EHS is developed using the ABAQUS software and is then verified by comparison with previous experiments. The validated finite element method is employed to carry out parametric studies investigating the structural behavior of HSS hot-finished EHS under uniaxial eccentric compression and to evaluate the current design guidance for HSS hot-finished EHS. Different parameters, such as the radii of the larger and smaller outer diameters of the EHS, the thickness of the EHS, eccentricity, and material properties, are considered. The resulting data from 84 finite element models are used to obtain the relationship between the cross-section resistance of HSS hot-finished EHS and the cross-section slenderness. It is concluded that current design provisions, such as EN 1993-1-1, BS 5950-1, AS4100, and Gardner et al., are conservative for predicting the resistance of HSS hot-finished EHS under uniaxial eccentric compression.
Keywords: hot-finished, elliptical hollow section, uniaxial eccentric compression, finite element method
Procedia PDF Downloads 138
27189 Collision Detection Algorithm Based on Data Parallelism
Authors: Zhen Peng, Baifeng Wu
Abstract:
Modern computing technology has entered the era of parallel computing, with a trend towards sustainable and scalable parallelism. Single Instruction Multiple Data (SIMD) is an important way to follow this trend. It is able to gather more and more computing power by increasing the number of processor cores, without the need to modify the program. Meanwhile, in the fields of scientific computing and engineering design, many computation-intensive applications are facing the challenge of increasingly large amounts of data. Data-parallel computing will be an important way to further improve the performance of these applications. In this paper, we take accurate collision detection in building information modeling as an example. We demonstrate a model for constructing a data-parallel algorithm. According to the model, a complex object is decomposed into sets of simple objects; collision detection among complex objects is converted into collision detection among simple objects. The resulting algorithm is a typical SIMD algorithm, and its advantages in parallelism and scalability are unmatched by traditional algorithms.
Keywords: data parallelism, collision detection, single instruction multiple data, building information modeling, continuous scalability
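The decomposition idea above can be sketched in a data-parallel style: complex objects reduced to arrays of simple bounding boxes, with the pairwise overlap test applied as a single vectorized operation across whole arrays (NumPy broadcasting standing in for true SIMD hardware; all geometry is synthetic).

```python
# Data-parallel collision sketch: axis-aligned bounding boxes tested pairwise
# with one broadcasted comparison instead of a per-pair loop.
import numpy as np

rng = np.random.default_rng(2)

def random_boxes(n):
    """Each box: [xmin, ymin, zmin, xmax, ymax, zmax]."""
    mins = rng.uniform(0, 100, size=(n, 3))
    return np.hstack([mins, mins + rng.uniform(1, 5, size=(n, 3))])

boxes_a = random_boxes(1000)   # simple objects from complex object A
boxes_b = random_boxes(1000)   # simple objects from complex object B

# Broadcast over all (i, j) pairs; overlap on every axis means collision.
overlap = np.all(
    (boxes_a[:, None, :3] <= boxes_b[None, :, 3:]) &
    (boxes_a[:, None, 3:] >= boxes_b[None, :, :3]),
    axis=-1,
)
print(f"{overlap.sum()} colliding pairs out of {overlap.size}")
```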
Procedia PDF Downloads 290
27188 Changing Arbitrary Data Transmission Period by Using Bluetooth Module on Gas Sensor Node of Arduino Board
Authors: Hiesik Kim, Yong-Beom Kim, Jaheon Gu
Abstract:
Internet of Things (IoT) applications are widely deployed and spreading worldwide. Local wireless data transmission techniques must be developed to keep pace with this trend. Bluetooth is a wireless data communication technique created by the Bluetooth Special Interest Group (SIG), using the 2.4 GHz frequency range and exploiting frequency hopping to avoid collisions with other devices. For the experiment, equipment for transmitting the measured data was built using an Arduino board as open-source hardware, a gas sensor, and a Bluetooth module, and an algorithm controlling the transmission rate is demonstrated. The experiment on controlling the transmission rate also proceeded by developing an Android application that receives the measured data, and the experimental results show that controlling this rate is feasible. In the future, improvement of the communication algorithm will be needed, because a few errors occur when data are transmitted or received.
Keywords: Arduino, Bluetooth, gas sensor, IoT, transmission
Procedia PDF Downloads 278
27187 Real-Time Sensor Fusion for Mobile Robot Localization in an Oil and Gas Refinery
Authors: Adewole A. Ayoade, Marshall R. Sweatt, John P. H. Steele, Qi Han, Khaled Al-Wahedi, Hamad Karki, William A. Yearsley
Abstract:
Understanding the behavioral characteristics of sensors is a crucial step in fusing data from several sensors of different types. This paper introduces a practical, real-time approach to integrating heterogeneous sensor data to achieve higher accuracy in localizing a mobile robot than would be possible from any one individual sensor. We use this approach in both indoor and outdoor environments, and it is especially appropriate for environments like oil and gas refineries due to their sparse and featureless nature. We have studied the individual contribution of each sensor's data to the overall combined accuracy achieved from the fusion process. A sequential-update Extended Kalman Filter (EKF) using validation gates was used to integrate GPS data, compass data, WiFi data, Inertial Measurement Unit (IMU) data, vehicle velocity, and pose estimates from a fiducial marker system. Results show that the approach can enable a mobile robot to navigate autonomously in any environment using a priori information.
Keywords: inspection mobile robot, navigation, sensor fusion, sequential update extended Kalman filter
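A simplified sketch of one sequential measurement update with a validation gate follows. A linear measurement model is assumed for brevity (a full EKF would linearize a nonlinear one), and all sensor values and noise levels are illustrative.

```python
# Sequential Kalman update with a chi-square validation gate on the innovation.
import numpy as np

def gated_update(x, P, z, H, R, gate=9.21):  # 9.21 ~ chi2(2 dof) at 99%
    y = z - H @ x                            # innovation
    S = H @ P @ H.T + R                      # innovation covariance
    if y @ np.linalg.solve(S, y) > gate:     # Mahalanobis test: reject outlier
        return x, P                          # gated out; state unchanged
    K = P @ H.T @ np.linalg.inv(S)           # Kalman gain
    return x + K @ y, (np.eye(len(x)) - K @ H) @ P

# State: [x, y] position; fuse one fix after another, sensor by sensor.
x, P = np.zeros(2), np.eye(2) * 10.0
H = np.eye(2)
measurements = [
    (np.array([1.0, 2.1]), np.eye(2) * 2.0),    # GPS-like fix
    (np.array([1.2, 1.9]), np.eye(2) * 5.0),    # WiFi-like fix
    (np.array([40.0, -3.0]), np.eye(2) * 5.0),  # gross outlier, gated out
]
for z, R in measurements:
    x, P = gated_update(x, P, z, H, R)
print(x)  # fused position estimate, unaffected by the outlier
```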
Procedia PDF Downloads 473
27186 Modeling by Application of the Nernst-Planck Equation and Film Theory for Predicting of Chromium Salts through Nanofiltration Membrane
Authors: Aimad Oulebsir, Toufik Chaabane, Sivasankar Venkatramann, Andre Darchen, Rachida Maachi
Abstract:
The objective of this study is to propose a model for predicting the transfer mechanism of trivalent ions through a nanofiltration (NF) membrane by introducing the concentration polarization phenomenon, and to study its influence on the retention of salts. The model is a combination of the Nernst-Planck equation and the equations of film theory. It is characterized by two transfer parameters, the reflection coefficient σ and the solute permeability Ps, which are estimated numerically. The thickness of the boundary layer, δ, the solute concentration at the membrane surface, Cm, and the concentration profile in the polarization layer have also been estimated. The suggested mathematical formulation was established. The retentions of trivalent salts are estimated and compared with the experimental results. A comparison between the results with and without the concentration polarization phenomenon is made, and the thickness of the boundary layer on the feed side is given. Experimental and calculated results are shown to be in good agreement. The model is then successfully extended to experimental data reported in the literature.
Keywords: nanofiltration, concentration polarisation, chromium salts, mass transfer
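The paper's exact formulation is not given in the abstract, but the two classical ingredients it combines, a membrane transport model parameterized by σ and Ps and a film-theory polarization correction, can be sketched as follows. The Spiegler-Kedem model is used here as a stand-in transport law, and all parameter values are assumed.

```python
# Sketch: real rejection from a sigma/Ps transport model (Spiegler-Kedem),
# then the film-theory correction relating it to the observed rejection.
import numpy as np

sigma = 0.95    # reflection coefficient (assumed)
Ps = 2.0e-6     # solute permeability, m/s (assumed)
k = 1.0e-5      # film mass-transfer coefficient, m/s (assumed)

Jv = np.linspace(1e-6, 2e-5, 5)     # permeate flux values, m/s

# Real (membrane-surface) rejection: R_real = sigma*(1-F)/(1-sigma*F)
F = np.exp(-Jv * (1 - sigma) / Ps)
R_real = sigma * (1 - F) / (1 - sigma * F)

# Film theory: concentration polarization lowers the observed (bulk) rejection
R_obs = R_real / (R_real + (1 - R_real) * np.exp(Jv / k))

for jv, r in zip(Jv, R_obs):
    print(f"Jv={jv:.1e} m/s -> observed rejection {r:.3f}")
```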
Procedia PDF Downloads 282
27185 Energy Efficient Massive Data Dissemination Through Vehicle Mobility in Smart Cities
Authors: Salman Naseer
Abstract:
One of the main challenges of operating a smart city (SC) is collecting the massive data generated from multiple data sources (DS) and transmitting them to the control units (CU) for further data processing and analysis. These ever-increasing data demands require not only more and more capacity in the transmission channels but also result in resource over-provisioning to meet resilience requirements, and thus unavoidable waste due to data fluctuations throughout the day. In addition, the high energy consumption (EC) and carbon discharges from these data transmissions pose serious issues to the environment we live in. Therefore, to overcome the issues of intensive EC and carbon emissions (CE) of massive data dissemination in smart cities, we propose an energy-efficient, carbon-reducing approach that utilizes the daily mobility of existing vehicles as an alternative communications channel to accommodate data dissemination in smart cities. To illustrate the effectiveness and efficiency of our approach, we take the city of Auckland in New Zealand as an example, assuming massive data generated by various sources geographically scattered throughout the Auckland region and destined for the control centres located in the city centre. The numerical results show that our proposed approach can provide up to 5 times lower delay when transferring large volumes of data via the existing daily mobility of vehicles than the conventional transmission network. Moreover, our proposed approach offers about 30% less EC and CE than the conventional network transmission approach.
Keywords: smart city, delay tolerant network, infrastructure offloading, opportunistic network, vehicular mobility, energy consumption, carbon emission
Procedia PDF Downloads 142
27184 Challenges and Professional Perspectives for Pedagogy Undergraduates with Specific Learning Disability: A Greek Case Study
Authors: Tatiani D. Mousoura
Abstract:
Specific learning disability (SLD) in higher education has been only partially explored in Greece so far. Moreover, opinions on professional perspectives for university students with SLD are scarcely encountered in Greek research. The perceptions of the hidden character of SLD, along with university policy towards it and the professional perspectives that result from this policy, have been examined in the present research. This study has applied the paradigm of a Greek Tertiary Pedagogical Education Department (Early Childhood Education). Via mixed methods, data have been collected from different groups of people in the Pedagogical Department: students with and without SLD, academic staff and administration staff, all of which offer the opportunity for triangulation of the findings. Qualitative methods include ten interviews with students with SLD, 15 interviews with academic staff, and 60 hours of observation of the students with SLD. Quantitative methods include 165 questionnaires completed by third- and fourth-year students and five questionnaires completed by the administration staff. Thematic analysis of the interview data and descriptive statistics on the questionnaire data have been applied for processing of the results. The use of medical terms to define and understand SLD was common in the student cohort, regardless of whether they had an SLD diagnosis. However, this medical-model approach is far more dominant in the group of students without SLD who, in the majority, hold misconceptions on a definitional level. The academic staff group seems to be leaning towards a social approach concerning SLD. According to them, diagnoses may lead to social exclusion. The Pedagogical Department generally endorses the principles of inclusion and complies with the provision of oral exams for students with SLD. Nevertheless, in practice, there seems to be a lack of regular academic support for these students. When such support does exist, it is only through individual initiatives. With regard to their prospective profession, students with SLD claim they can utilize their personal experience, as well as their empathy; these appear to be unique assets, in comparison with other educators, when it comes to teaching students in the future. In the Department of Pedagogy, provision for SLD is sporadic; however, the vision of an inclusive department does exist. Based on their studies and their experience, pedagogy students with SLD claim that they have an experiential, internalized advantage for their future career as educators.
Keywords: specific learning disability, SLD, dyslexia, pedagogy department, inclusion, professional role of SLDed educators, higher education, university policy
Procedia PDF Downloads 113
27183 The Development of a Low Carbon Cementitious Material Produced from Cement, Ground Granulated Blast Furnace Slag and High Calcium Fly Ash
Authors: Ali Shubbar, Hassnen M. Jafer, Anmar Dulaimi, William Atherton, Ali Al-Rifaie
Abstract:
This research presents experimental work investigating the influence of utilising Ground Granulated Blast Furnace Slag (GGBS) and High Calcium Fly Ash (HCFA) as a partial replacement for Ordinary Portland Cement (OPC) to produce a low carbon cementitious material with compressive strength comparable to OPC. Firstly, GGBS was used as a partial replacement for OPC to produce a binary blended cementitious material (BBCM); the replacements were 0, 10, 15, 20, 25, 30, 35, 40, 45 and 50% by the dry mass of OPC. The optimum BBCM was mixed with HCFA to produce a ternary blended cementitious material (TBCM). The replacements were 0, 10, 15, 20, 25, 30, 35, 40, 45 and 50% by the dry mass of BBCM. The compressive strength at ages of 7 and 28 days was utilised for assessing the performance of the test specimens in comparison to the reference mixture using 100% OPC as a binder. The results showed that the optimum BBCM was the mix produced from 25% GGBS and 75% OPC, with a compressive strength of 32.2 MPa at the age of 28 days. In addition, the results for the TBCM showed that the addition of 10, 15, 20 and 25% HCFA to the optimum BBCM improved the compressive strength by 22.7, 11.3, 5.2 and 2.1%, respectively, at 28 days. However, replacing more than 25% of the optimum BBCM with HCFA showed a gradual drop in compressive strength in comparison to the control mix. The TBCM with 25% HCFA was considered the optimum, as it showed better compressive strength than the control mix and at the same time reduced the cement content to 56%. Reducing the cement content to 56% will contribute to decreasing the cost of construction materials, provide better compressive strength, and also reduce CO2 emissions into the atmosphere.
Keywords: cementitious material, compressive strength, GGBS, HCFA, OPC
Procedia PDF Downloads 194
27182 Application of the Finite Window Method to a Time-Dependent Convection-Diffusion Equation
Authors: Raoul Ouambo Tobou, Alexis Kuitche, Marcel Edoun
Abstract:
The FWM (Finite Window Method) is a new numerical meshfree technique for solving problems defined either in terms of PDEs (Partial Differential Equations) or by a set of conservation/equilibrium laws. The principle behind the FWM is that in such problems each element of the domain concerned interacts with its neighbors and will always try to adapt to keep in equilibrium with respect to those neighbors. This leads to a very simple and robust problem-solving scheme, well suited for transfer problems. In this work, we have applied the FWM to an unsteady scalar convection-diffusion equation. Despite its simplicity, the convection-diffusion problem is well known to be challenging to solve numerically, especially when convection is highly dominant. This has led researchers to adopt the scalar convection-diffusion equation as a benchmark used to analyze and derive the conditions or artifacts needed to numerically solve problems where convection and diffusion occur simultaneously. We have shown here that the standard FWM can be used to solve convection-diffusion equations in a robust manner, as no adjustments (upwinding or artificial diffusion addition) were required to obtain good results, even for high Peclet numbers and coarse space and time steps. A comparison was performed between the FWM scheme and both a first-order implicit Finite Volume scheme (upwind scheme) and a third-order implicit Finite Volume scheme (QUICK scheme). The result of the comparison was that, for equal space and time grid spacing, the FWM yields much better precision than the Finite Volume schemes used, all having similar computational cost and conditioning number.
Keywords: Finite Window Method, Convection-Diffusion, Numerical Technique, Convergence
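As context for the comparison above, the sketch below implements the classical first-order upwind benchmark for the 1D scalar convection-diffusion equation (explicit in time for brevity, unlike the implicit schemes used in the paper); the FWM itself is not shown, and all parameters are illustrative.

```python
# First-order upwind convection + central diffusion, explicit Euler in time,
# periodic boundaries; a standard benchmark, not the FWM.
import numpy as np

u, D = 1.0, 0.01                 # convection velocity, diffusivity
nx, L, dt, steps = 200, 1.0, 2e-4, 2000
dx = L / nx
x = np.linspace(0.0, L, nx)
phi = np.exp(-((x - 0.2) ** 2) / 0.002)   # initial Gaussian pulse at x = 0.2

for _ in range(steps):
    conv = -u * (phi - np.roll(phi, 1)) / dx                          # upwind (u > 0)
    diff = D * (np.roll(phi, -1) - 2 * phi + np.roll(phi, 1)) / dx**2 # central
    phi = phi + dt * (conv + diff)        # np.roll implements periodic boundaries

print(f"cell Peclet number: {u * dx / D:.2f}, peak now at x={phi.argmax() * dx:.2f}")
```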
Procedia PDF Downloads 332
27181 Exploring Data Stewardship in Fog Networking Using Blockchain Algorithm
Authors: Ruvaitha Banu, Amaladhithyan Krishnamoorthy
Abstract:
IoT networks today solve various consumer problems, from home automation systems to aiding in driving autonomous vehicles, through the deployment of multiple devices. For example, in an autonomous vehicle environment, multiple sensors are available on roads to monitor weather and road conditions and interact with each other to aid the vehicle in reaching its destination safely and on time. IoT systems are predominantly dependent on the cloud environment for data storage and computing needs, which results in latency problems. With the advent of fog networks, some of this storage and computing is pushed to the edge/fog nodes, saving network bandwidth and reducing latency proportionally. Managing the data stored in these fog nodes becomes crucial, as they might also store sensitive information required for a certain application. Data management in fog nodes is strenuous because fog networks are dynamic in terms of their availability and hardware capability. It becomes more challenging when the nodes in the network also have short lifespans, detaching and joining frequently. When an end user or fog node wants to access, read, or write data stored in another fog node, a new protocol becomes necessary to access/manage the data stored in the fog devices, as the conventional static way of managing the data does not work in fog networks. The proposed solution discusses a protocol that works by defining sensitivity levels for the data being written and read. Additionally, a distinct data distribution and replication model among the fog nodes is established to decentralize the access mechanism. In this paper, the proposed model implements stewardship of the data stored in the fog nodes through the application of reinforcement learning, so that access to the data is determined dynamically based on the requests.
Keywords: IoT, fog networks, data stewardship, dynamic access policy
Procedia PDF Downloads 59
27180 An Automated Approach to Consolidate Galileo System Availability
Authors: Marie Bieber, Fabrice Cosson, Olivier Schmitt
Abstract:
Europe's Global Navigation Satellite System, Galileo, provides worldwide positioning and navigation services. The satellites in space are only one part of the Galileo system. An extensive ground infrastructure is essential to oversee the satellites and ensure accurate navigation signals. High reliability and availability of the entire Galileo system are crucial to continuously provide positioning information of high quality to users. Outages are tracked, and operational availability is regularly assessed. A highly flexible and adaptive tool has been developed to automate the Galileo system availability analysis. Not only does it enable quick availability consolidation, but it also provides first steps towards improving the data quality of the maintenance tickets used for the analysis. This includes data import and data preparation, with a focus on processing strings used for classification and on identifying faulty data. Furthermore, the tool can handle a small amount of data, which is a major constraint when the aim is to provide accurate statistics.
Keywords: availability, data quality, system performance, Galileo, aerospace
Procedia PDF Downloads 167
27179 Evaluation of Simulated Noise Levels through the Analysis of Temperature and Rainfall: A Case Study of Nairobi Central Business District
Authors: Emmanuel Yussuf, John Muthama, John Ng'ang'A
Abstract:
Noise levels have been increasing all over the world in the last decade. Many factors contribute to this increase, which is causing health-related effects in humans. Developing countries are not left out of the picture, as they are still growing and advancing their development. Motor vehicles are increasing on urban roads, infrastructure is expanding due to the rising population, and the number of industries providing goods is growing, among many other activities. All these activities lead to high noise levels in cities. This study was conducted in Nairobi's Central Business District (CBD) with the main objective of simulating noise levels in order to understand the noise to which people within the urban area are exposed, in relation to the weather parameters of temperature, rainfall and wind field. The study was carried out using the Neighbourhood Proximity Model and time series analysis, with proxy data remotely sensed from satellites, in order to establish the noise levels to which the people of the Nairobi CBD are exposed. The findings showed an increase in temperature (0.1°C per year) and a decrease in precipitation (40 mm per year), alongside increasing noise levels in the area. The study also found that the noise levels to which people in the Nairobi CBD are exposed were roughly between 61 and 63 decibels and have been increasing, a level which is high and likely to cause adverse physical and psychological effects on the human body, with air temperature, precipitation and wind contributing substantially to the spread of noise. As noise reduction measures, the use of soundproof materials in buildings close to busy roads and the implementation of strict laws for the most emitting sources were recommended, as well as further research on the study. The data used for this study ranged from the year 2000 to 2015, rainfall being in millimeters (mm), temperature in degrees Celsius (°C) and the urban form characteristics being in meters (m).
Keywords: simulation, noise exposure, weather, proxy
Procedia PDF Downloads 379
27178 Use of In-line Data Analytics and Empirical Model for Early Fault Detection
Authors: Hyun-Woo Cho
Abstract:
Automatic process monitoring schemes are designed to give early warnings of unusual process events or abnormalities as soon as possible. To this end, various techniques have been developed and utilized in various industrial processes, including multivariate statistical methods, representation techniques in reduced spaces, kernel-based nonlinear techniques, etc. This work presents a nonlinear empirical monitoring scheme for batch-type production processes with incomplete process measurement data. While normal operation data are easy to obtain, unusual fault data occur infrequently and are thus difficult to collect. In this work, noise filtering steps are added in order to enhance monitoring performance by eliminating irrelevant information from the data. The performance of the monitoring scheme was demonstrated using batch process data. The results showed that the monitoring performance was improved significantly in terms of the detection success rate for process faults.
Keywords: batch process, monitoring, measurement, kernel method
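A minimal sketch in the spirit of the kernel-based monitoring described above: fit a kernel model on normal batch data, then flag new batches whose scores exceed an empirical control limit. This is an illustration with synthetic data, not the author's exact method.

```python
# Kernel PCA monitoring sketch: train on normal data, flag outlying batches
# with a Hotelling-like T^2 statistic on the kernel scores.
import numpy as np
from sklearn.decomposition import KernelPCA

rng = np.random.default_rng(3)
normal = rng.normal(0.0, 1.0, size=(100, 20))        # unfolded normal batch data
faulty = rng.normal(0.0, 1.0, size=(10, 20)) + 3.0   # shifted batches: simulated fault

kpca = KernelPCA(n_components=5, kernel="rbf", gamma=0.05).fit(normal)
train_scores = kpca.transform(normal)
center = train_scores.mean(axis=0)
inv_cov = np.linalg.inv(np.cov(train_scores, rowvar=False))

def t2(scores):
    """Hotelling-like T^2 of score vectors against the normal-data model."""
    d = scores - center
    return np.einsum("ij,jk,ik->i", d, inv_cov, d)

limit = np.percentile(t2(train_scores), 99)   # empirical 99% control limit
flags = t2(kpca.transform(faulty)) > limit
print(f"faulty batches flagged: {flags.sum()} of {len(flags)}")
```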
Procedia PDF Downloads 323