Search results for: Displacement deviation analysis
5492 Digital Twin of Real Electrical Distribution System with Real Time Recursive Load Flow Calculation and State Estimation
Authors: Anosh Arshad Sundhu, Francesco Giordano, Giacomo Della Croce, Maurizio Arnone
Abstract:
Digital Twin (DT) is a technology that generates a virtual representation of a physical system or process, enabling real-time monitoring, analysis, and simulation. A DT of an Electrical Distribution System (EDS) can perform online analysis by integrating static and real-time data in order to show the current grid status, and predictions about the future status, to the Distribution System Operator (DSO), producers and consumers. DT technology for an EDS also offers the DSO the opportunity to test hypothetical scenarios. This paper discusses the development of a DT of an EDS through a Smart Grid Controller (SGC) application, which is developed using open-source libraries and languages. The developed application can be integrated with the Supervisory Control and Data Acquisition (SCADA) system of any EDS to create the DT. The paper shows the performance of the tools developed inside the application, tested on a real EDS for grid observability, Smart Recursive Load Flow (SRLF) calculation and state estimation of loads in MV feeders.
Keywords: Digital Twin, Distribution System Operator, Electrical Distribution System, Smart Grid Controller, Supervisory Control and Data Acquisition System, Smart Recursive Load Flow.
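The abstract does not spell out the SRLF algorithm, so the sketch below uses a generic backward/forward sweep, the classic recursive load-flow scheme for radial MV feeders, purely to illustrate the kind of calculation involved; the feeder impedances and loads are made-up values, not data from the paper.

```python
# Minimal backward/forward sweep load flow for a radial feeder (illustrative only).
import numpy as np

def backward_forward_sweep(v_source, z_line, s_load, tol=1e-6, max_iter=50):
    """Bus 0 is the slack (substation); bus k feeds bus k+1.
    z_line[k]: series impedance between bus k and k+1 (ohm), s_load[k]: complex load at bus k+1 (VA)."""
    n = len(s_load)
    v = np.full(n + 1, v_source, dtype=complex)      # flat-start voltage profile
    for _ in range(max_iter):
        i_load = np.conj(s_load / v[1:])             # load currents at buses 1..n
        i_branch = np.cumsum(i_load[::-1])[::-1]     # branch k carries all downstream load currents
        v_new = v.copy()
        for k in range(n):                           # forward sweep from the substation
            v_new[k + 1] = v_new[k] - z_line[k] * i_branch[k]
        if np.max(np.abs(v_new - v)) < tol:
            return v_new
        v = v_new
    return v

# Hypothetical 11 kV feeder with three sections and three loads
volts = backward_forward_sweep(11e3 + 0j,
                               z_line=np.array([0.5 + 1.0j, 0.4 + 0.8j, 0.3 + 0.6j]),
                               s_load=np.array([300e3 + 100e3j, 200e3 + 80e3j, 150e3 + 50e3j]))
print(np.abs(volts))
```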
5491 In-Flight Radiometric Performances Analysis of an Airborne Optical Payload
Authors: Caixia Gao, Chuanrong Li, Lingli Tang, Lingling Ma, Yaokai Liu, Xinhong Wang, Yongsheng Zhou
Abstract:
Performance analysis of a remote sensing sensor is required to pursue a range of scientific research and application objectives. Laboratory analysis of any remote sensing instrument is essential, but not sufficient to establish a valid in-flight one. In this study, with the aid of in situ measurements and the corresponding image of a three-gray-scale permanent artificial target, the in-flight radiometric performance analyses (in-flight radiometric calibration, dynamic range and response linearity, signal-to-noise ratio (SNR), radiometric resolution) of a self-developed short-wave infrared (SWIR) camera are performed. To acquire the in-flight calibration coefficients of the SWIR camera, the at-sensor radiances (Li) for the artificial targets are first simulated from in situ measurements (atmospheric parameters and the spectral reflectance of the target) and viewing geometries using the MODTRAN model. With these radiances and the corresponding digital numbers (DN) in the image, a straight line with the formulation L = G × DN + B is fitted by a minimization regression method, and the fitted coefficients, G and B, are the in-flight calibration coefficients. The high point (LH) and the low point (LL) of the dynamic range can then be described as LH = G × DNH + B and LL = B, respectively, where DNH is equal to 2^n − 1 (n is the quantization number of the payload). Meanwhile, the sensor’s response linearity (δ) is described as the correlation coefficient of the regressed line. The results show that the calibration coefficients (G and B) are 0.0083 W·sr−1m−2µm−1 and −3.5 W·sr−1m−2µm−1; the low point of the dynamic range is −3.5 W·sr−1m−2µm−1 and the high point is 30.5 W·sr−1m−2µm−1; the response linearity is approximately 99%. Furthermore, an SNR normalization method is used to assess the sensor’s SNR, and the normalized SNR is about 59.6 when the mean radiance is equal to 11.0 W·sr−1m−2µm−1; subsequently, the radiometric resolution is calculated as about 0.1845 W·sr−1m−2µm−1. Moreover, in order to validate the result, a comparison of the measured radiance with that predicted by a radiative transfer code over four portable artificial targets with reflectances of 20%, 30%, 40% and 50%, respectively, is performed. It is noted that the relative error of the calibration is within 6.6%.
Keywords: Calibration, dynamic range, radiometric resolution, SNR.
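A minimal sketch of the calibration step described above: fit L = G × DN + B to the simulated at-sensor radiances of the gray-scale targets, then derive the dynamic range end points. The DN values, radiances and quantization depth below are made up for illustration, not the paper's measurements.

```python
import numpy as np

dn = np.array([412.0, 1650.0, 3100.0])   # image digital numbers of the three targets (hypothetical)
L  = np.array([0.9, 10.2, 22.4])         # MODTRAN-simulated radiances, W sr^-1 m^-2 um^-1 (hypothetical)

G, B = np.polyfit(dn, L, 1)              # least-squares line: L = G*DN + B
r = np.corrcoef(dn, L)[0, 1]             # response linearity = correlation coefficient

n_bits = 12                              # assumed quantization depth
dn_high = 2**n_bits - 1
L_low, L_high = B, G * dn_high + B       # dynamic range: low point = B, high point = G*DN_H + B

print(f"G={G:.4f}, B={B:.2f}, linearity={r:.4f}, range=[{L_low:.2f}, {L_high:.2f}]")
```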
5490 ParkedGuard: An Efficient and Accurate Parked Domain Detection System Using Graphical Locality Analysis and Coarse-To-Fine Strategy
Authors: Chia-Min Lai, Wan-Ching Lin, Hahn-Ming Lee, Ching-Hao Mao
Abstract:
As the worldwide internet develops non-stop, making a profit by lending registered domain names has emerged as a new business in recent years. Unfortunately, the larger the market for domain lending services becomes, the greater the risk that malicious behaviors or malware hide behind parked domains. Also, previous work on differentiating parked domains suffers from two main defects: 1) too much data-collecting effort and CPU latency needed for feature engineering and 2) ineffectiveness when detecting parked domains containing external links that are usually abused by hackers, e.g., in drive-by download attacks. Aiming to alleviate the above defects without sacrificing practical usability, this paper proposes ParkedGuard as an efficient and accurate parked domain detector. Several scripting behavioral features were analyzed, and those with special statistical significance are adopted in ParkedGuard to make feature engineering much more cost-efficient. On the other hand, finding memberships between external links and parked domains was modeled as a graph mining problem, and a coarse-to-fine strategy was elaborately designed by leveraging the graphical locality such that ParkedGuard outperforms the state-of-the-art in terms of both recall and precision rates.
Keywords: Coarse-to-fine strategy, domain parking service, graphical locality analysis, parked domain.
5489 The Effects and Interactions of Synthesis Parameters on Properties of Mg Substituted Hydroxyapatite
Authors: S. Sharma, U. Batra, S. Kapoor, A. Dua
Abstract:
In this study, the effects and interactions of reaction time and capping agent assistance during sol-gel synthesis of magnesium substituted hydroxyapatite nanopowder (MgHA) on the hydroxyapatite (HA) to β-tricalcium phosphate (β-TCP) ratio, Ca/P ratio and mean crystallite size were examined experimentally as well as through statistical analysis. MgHA nanopowders were synthesized by the sol-gel technique at room temperature using aqueous solutions of calcium nitrate tetrahydrate, magnesium nitrate hexahydrate and potassium dihydrogen phosphate as starting materials. The reaction time for sol-gel synthesis was varied between 15 and 60 minutes. Two process routes were followed, with and without the addition of triethanolamine (TEA) to the solutions. The elemental compositions of the as-synthesized powders were determined using X-ray fluorescence (XRF) spectroscopy. The functional groups present in the as-synthesized MgHA nanopowders were established through Fourier Transform Infrared Spectroscopy (FTIR). The amounts of phases present, Ca/P ratio and mean crystallite sizes of the MgHA nanopowders were determined using X-ray diffraction (XRD). The HA content in the biphasic mixture of HA and β-TCP and the Ca/P ratio in the as-synthesized MgHA nanopowders increased effectively with the reaction time of the sols (p<0.0001, two-way ANOVA); however, these were independent of TEA addition (p>0.15, two-way ANOVA). The MgHA nanopowders synthesized with TEA assistance exhibited 14 nm lower crystallite size (p<0.018, 2-sample t-test) compared to the powder synthesized without TEA assistance.
Keywords: Capping agent, hydroxyapatite, regression analysis, sol-gel, 2-sample t-test, two-way ANOVA.
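A hedged sketch of the statistical treatment described above: a two-way ANOVA on HA content versus reaction time and TEA assistance, and a 2-sample t-test on crystallite size. The data frame, column names and values below are synthetic placeholders, not the authors' measurements.

```python
import pandas as pd
from scipy import stats
import statsmodels.formula.api as smf
import statsmodels.api as sm

df = pd.DataFrame({
    "time_min":  [15, 30, 45, 60] * 4,
    "tea":       (["yes"] * 4 + ["no"] * 4) * 2,
    "ha_content": [78, 82, 86, 90, 77, 83, 85, 91, 79, 81, 87, 89, 78, 82, 86, 90],       # synthetic %
    "crystallite_nm": [38, 36, 35, 34, 52, 50, 49, 48, 39, 37, 34, 33, 51, 50, 48, 47],   # synthetic nm
})

# Two-way ANOVA: reaction time, TEA assistance and their interaction on HA content
model = smf.ols("ha_content ~ C(time_min) * C(tea)", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))

# 2-sample t-test: crystallite size with vs. without TEA assistance
with_tea = df.loc[df.tea == "yes", "crystallite_nm"]
without  = df.loc[df.tea == "no", "crystallite_nm"]
print(stats.ttest_ind(with_tea, without, equal_var=False))
```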
5488 Hydrological Characterization of a Watershed for Streamflow Prediction
Authors: Oseni Taiwo Amoo, Bloodless Dzwairo
Abstract:
In this paper, we extend the versatility and usefulness of GIS as a methodology for river basin hydrologic characteristics analysis (HCA). The Gurara River basin, located in North-Central Nigeria, is presented in this study. It is an on-going research project using a spatial Digital Elevation Model (DEM) and Arc-Hydro tools to take inventory of the basin characteristics in order to predict the effect of water abstraction on the streamflow regime. One of the main concerns of hydrological modelling is the quantification of runoff from rainstorm events. In practice, the Soil Conservation Service curve number (SCS) method and the conventional procedure called the rational technique are still generally used. These traditional lumped hydrological models convert statistical properties of rainfall in a river basin to observed runoff and hydrographs. However, the models give little or no spatially dispersed information on rainfall and basin physical characteristics. Therefore, this paper synthesizes morphometric parameters in generating runoff. The expected results on the basin characteristics, such as size, area, shape, slope of the watershed and the stream distribution network analysis, could be useful in estimating streamflow discharge. Water resources managers and irrigation farmers could utilize the tool for determining the net return from available scarce water resources, where past data records are sparse for the aspects of land and climate.
Keywords: Hydrological characteristic, land and climate, runoff discharge, streamflow.
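For reference, the SCS curve number relation mentioned above in metric form; the curve number and rainfall depth used here are illustrative values, not results from the Gurara basin study.

```python
def scs_runoff_mm(p_mm: float, cn: float, ia_ratio: float = 0.2) -> float:
    """Direct runoff depth Q (mm) from storm rainfall P (mm) using the SCS-CN method."""
    s = 25400.0 / cn - 254.0          # potential maximum retention (mm)
    ia = ia_ratio * s                 # initial abstraction
    if p_mm <= ia:
        return 0.0
    return (p_mm - ia) ** 2 / (p_mm - ia + s)

print(scs_runoff_mm(p_mm=60.0, cn=75.0))   # runoff from a 60 mm storm with CN = 75 (hypothetical)
```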
5487 Emergency Generator Sizing and Motor Starting Analysis
Authors: Mukesh Kumar Kirar, Ganga Agnihotri
Abstract:
This paper investigates the preliminary sizing of a generator set for designing the electrical system at the early phase of a project, and the dynamic behavior of the generator unit, as well as of the induction motors, during start-up of induction motor drives fed from the emergency generator unit. The information in this paper simplifies generator set selection and eliminates common errors in selection. It covers load estimation, the step loading capacity test and transient analysis for the emergency generator set. The dynamic behavior of the generator unit (power, power factor and voltage) during direct-on-line start-up of induction motor drives fed from a stand-alone generator set is also discussed. Since it is important to ensure that plant generators operate safely and consistently, power system studies are required at the planning and conceptual design stage of the project. The most widely recognized and studied effect of motor starting is the voltage dip that is experienced throughout an industrial power system as the direct result of starting large motors online. Generator step loading capability and the transient voltage dip during starting of the largest motor are verified with the help of the Electrical Transient Analyzer Program (ETAP).
Keywords: Sizing, induction motor starting, load estimation, Transient Analyzer Program (ETAP).
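A back-of-the-envelope check of the kind one might do before the full ETAP transient study described above: it treats the starting motor as a constant impedance behind the generator subtransient reactance and ignores resistance, pre-load and AVR action, so it only brackets the result. All ratings below are illustrative assumptions, not values from the paper.

```python
def dol_voltage_dip(gen_kva: float, xd2_pu: float, motor_kw: float,
                    lrc_multiple: float = 6.0, pf_eff: float = 0.85) -> float:
    """Approximate per-unit voltage dip at the generator terminals during DOL starting."""
    motor_kva = motor_kw / pf_eff                 # running kVA of the motor
    start_kva = lrc_multiple * motor_kva          # locked-rotor (starting) kVA
    return xd2_pu * start_kva / (xd2_pu * start_kva + gen_kva)

dip = dol_voltage_dip(gen_kva=1250.0, xd2_pu=0.14, motor_kw=200.0)
print(f"Estimated dip ~ {100 * dip:.1f} %")       # compare against a typical 15-20 % acceptance limit
```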
5486 Generation of Catalytic Films of Zeolite Y and ZSM-5 on FeCrAlloy Metal
Authors: Rana Th. A. Al-Rubaye, Arthur A. Garforth
Abstract:
This work details the generation of thin films of structured zeolite catalysts (ZSM-5 and Y) onto the surface of a metal substrate (FeCrAlloy) using in-situ hydrothermal synthesis. In addition, the zeolite Y is post-synthetically modified by acidified ammonium ion exchange to generate US-Y. Finally, the catalytic activity of the structured ZSM-5 catalyst film (Si/Al = 11, thickness 146 µm) and the structured US-Y catalyst film (Si/Al = 8, thickness 230 µm) was compared with the pelleted powder forms of ZSM-5 and US-Y catalysts of similar Si/Al ratios. The structured catalyst films have been characterised using a range of techniques, including X-ray diffraction (XRD), electron microscopy (SEM), energy dispersive X-ray analysis (EDX) and thermogravimetric analysis (TGA). The transition from oxide-on-alloy wires to hydrothermally synthesised, uniformly zeolite-coated surfaces was followed using SEM and XRD. In addition, the robustness of the prepared coating was confirmed by subjecting it to thermal cycling (ambient to 550 °C). The cracking of n-heptane over the pellets and structured catalysts for both ZSM-5 and Y zeolite showed very similar product selectivities for similar amounts of catalyst, with an apparent activation energy of around 60 kJ mol−1. This paper demonstrates that structured catalysts can be manufactured with excellent zeolite adherence and, when suitably activated/modified, give comparable cracking results to the pelleted powder forms. These structured catalysts will improve temperature distribution in highly exothermic and endothermic catalysed processes.
Keywords: FeCrAlloy, structured catalyst, zeolite Y, zeolite ZSM-5.
5485 Buckling of Plates on Foundation with Different Types of Sides Support
Authors: Ali N. Suri, Ahmad A. Al-Makhlufi
Abstract:
In this paper the problem of buckling of plates on foundation of finite length and with different side support is studied.
The Finite Strip Method is used as the tool for the analysis. This method uses finite strip elastic, foundation, and geometric matrices to build the assembly matrices for the whole structure; after introducing the boundary conditions at the supports, the resulting reduced matrices are transformed into a standard eigenvalue-eigenvector problem. The solution of this problem enables the determination of the buckling load, the associated buckling modes and the buckling wavelength.
To carry out the buckling analysis, starting from the elastic, foundation, and geometric stiffness matrices for each strip, a FORTRAN computer program is developed.
Since the stiffness matrices are functions of the buckling wavelength, the computer program uses an iteration procedure to find the critical buckling stress for each value of foundation modulus and for each boundary condition.
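A minimal sketch of that solution loop: for each trial wavelength, assemble the stiffness (elastic plus foundation) and geometric matrices and solve the generalized eigenvalue problem K v = λ Kg v; the smallest eigenvalue over all wavelengths gives the critical load factor. The assembly routine here is a random placeholder, not the paper's FORTRAN program.

```python
import numpy as np
from scipy.linalg import eigh

def assemble_matrices(wavelength, foundation_modulus, n_dof=20):
    """Placeholder: a real implementation builds K and Kg from the finite strip
    elastic, foundation and geometric matrices for the given half-wavelength."""
    rng = np.random.default_rng(0)
    A = rng.standard_normal((n_dof, n_dof))
    K = A @ A.T + foundation_modulus * np.eye(n_dof) + (1.0 / wavelength) * np.eye(n_dof)
    Kg = np.eye(n_dof)
    return K, Kg

def critical_buckling(foundation_modulus, wavelengths):
    best = np.inf
    for lam in wavelengths:
        K, Kg = assemble_matrices(lam, foundation_modulus)
        eigvals = eigh(K, Kg, eigvals_only=True)   # generalized symmetric eigenproblem
        best = min(best, eigvals[0])               # lowest load factor for this wavelength
    return best

print(critical_buckling(foundation_modulus=5.0, wavelengths=np.linspace(0.2, 3.0, 30)))
```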
The results showed that the use of an elastic medium to support plates subjected to axial load increases the buckling load a great deal; the results found are very close to those obtained by other analytical methods and experimental work.
The results also showed that the foundation compensates for the weakness of some types of side-support constraint, with the maximum benefit found for a plate with one side simply supported and the other free.
Keywords: Buckling, Finite Strip, Different Sides Support, Plates on Foundation.
5484 Achieving Design-Stage Elemental Cost Planning Accuracy: Case Study of New Zealand
Authors: Johnson Adafin, James O. B. Rotimi, Suzanne Wilkinson, Abimbola O. Windapo
Abstract:
An aspect of client expenditure management that requires attention is the level of accuracy achievable in design-stage elemental cost planning. This has been a major concern for construction clients and practitioners in New Zealand (NZ). Pre-tender estimating inaccuracies are significantly influenced by the level of risk information available to estimators. Proper cost planning activities should ensure the production of a project’s likely construction costs (initial and final), and subsequent cost control activities should prevent unpleasant consequences of cost overruns, disputes and project abandonment. If risks were properly identified and priced at the design stage, observed variance between design-stage elemental cost plans (ECPs) and final tender sums (FTS) (initial contract sums) could be reduced. This study investigates the variations between design-stage ECPs and FTS of construction projects, with a view to identifying risk factors that are responsible for the observed variance. Data were sourced through interviews, and risk factors were identified by using thematic analysis. Access was obtained to project files from the records of study participants (consultant quantity surveyors), and document analysis was employed in complementing the responses from the interviews. Study findings revealed the discrepancies between ECPs and FTS in the region of -14% and +16%. It is opined in this study that the identified risk factors were responsible for the variability observed. The values obtained from the analysis would enable greater accuracy in the forecast of FTS by Quantity Surveyors. Further, whilst inherent risks in construction project developments are observed globally, these findings have important ramifications for construction projects by expanding existing knowledge on what is needed for reasonable budgetary performance and successful delivery of construction projects. The findings contribute significantly to the study by providing quantitative confirmation to justify the theoretical conclusions generated in the literature from around the world. This therefore adds to and consolidates existing knowledge.
Keywords: Accuracy, design-stage, elemental cost plan, final tender sum, New Zealand.
5483 Numerical Investigation of Nanofluid Based Thermosyphon System
Authors: Kiran Kumar K, Ramesh Babu Bejjam, Atul Najan
Abstract:
A thermosyphon system is a heat transfer loop which operates on the basis of gravity and buoyancy forces. It guarantees good reliability and low maintenance cost as it does not involve any mechanical pump. Therefore, it can be used in many industrial applications such as refrigeration and air conditioning, electronic cooling, nuclear reactors, geothermal heat extraction, etc. However, flow instabilities and loop configuration are the major problems in this system. Several previous researchers have reported that instabilities can be suppressed by using nanofluids as the loop fluid. In the present study a rectangular thermosyphon loop with end heat exchangers is considered. This configuration is more appropriate for many practical applications such as solar water heaters, geothermal heat extraction, etc. In the present work, a steady-state analysis is carried out on a thermosyphon loop with parallel-flow coaxial heat exchangers at the heat source and heat sink. In this loop, nanofluid is considered as the loop fluid and water is considered as the external fluid in both the hot and cold heat exchangers. For this analysis a one-dimensional homogeneous model is developed. In this model, conservation equations such as conservation of mass, momentum and energy are discretized using the finite difference method. A computer code is written in MATLAB to simulate the flow in the thermosyphon loop. A comparison in terms of heat transfer is made between water and nanofluid as working fluids in the loop.
Keywords: Heat exchanger, Heat transfer, Nanofluid, Thermosyphon loop.
5482 The Effectiveness of Video Clips to Enhance Students’ Achievement and Motivation on History Learning and Facilitation
Authors: L. Bih Ni, D. Norizah Ag Kiflee, T. Choon Keong, R. Talip, S. Singh Bikar Singh, M. Noor Mad Japuni, R. Talin
Abstract:
The purpose of this study is to determine the effectiveness of video clips in enhancing students' achievement and motivation towards the learning and facilitation of history. We use narrative literature studies to illustrate the current state of the art and science in the focused areas of inquiry, and we used the experimental method. The experimental method is a systematic scientific research method in which the researchers manipulate one or more variables and control and measure any changes in other variables. For this purpose, two groups were designed, an experimental group and a control group, each consisting of 30 lower secondary students. The lessons were given to the first group using a computer presentation program with video clips, which is considered the experimental group, while the second group was taught the same class using traditional methods based on dialogue and discussion techniques and is considered the control group. Both groups were subjected to pre- and post-tests on the material handled in class. The findings show that the pre-test analysis did not reveal statistically significant differences, which in turn proved the equivalence of the two groups. Meanwhile, the post-test analysis results show that there was a statistically significant difference between the experimental group and the control group at a significance level of 0.05 in favour of the experimental group.
Keywords: Video clips, Historical Learning and Facilitation, Achievement, Motivation.
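A minimal sketch of the comparison reported above: an independent-samples t-test on post-test scores of the two groups at a significance level of 0.05. The score vectors are synthetic placeholders.

```python
from scipy import stats

post_experimental = [72, 80, 68, 85, 77, 74, 81, 69, 78, 83]   # hypothetical scores
post_control      = [61, 65, 58, 70, 64, 59, 66, 62, 60, 67]

t, p = stats.ttest_ind(post_experimental, post_control, equal_var=False)
print(f"t = {t:.2f}, p = {p:.4f}, significant at 0.05: {p < 0.05}")
```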
5481 Gassing Tendency of Natural Ester Based Transformer Oils: Low Ethane Generation in Stray Gassing Behavior
Authors: Banti Sidhiwala, T. C. S. M. Gupta
Abstract:
Mineral oils of naphthenic and paraffinic type are used as insulating liquids in transformer applications to protect the solid insulation from moisture and to ensure effective heat transfer/cooling. The performance of these types of oils has been proven in the field over many decades, and the condition and performance of transformers have been successfully monitored and diagnosed through oil properties and dissolved gas analysis methods. Different types of gases can represent various types of faults that may occur due to faulty components or unfavorable operating conditions. A large database has been generated in the industry for dissolved gas analysis in mineral-oil-based transformer oils, and various models have been developed to predict faults and analyze data. Additionally, oil specifications and standards have been updated to include stray gassing limits that cover low-temperature faults. This modification has become an effective preventative maintenance tool that can help greatly in understanding the reasons for breakdowns of electrical insulating materials and related components. Natural esters have seen a rise in popularity in recent years due to their "green" credentials. Some of their benefits include biodegradability, a higher fire point, improvement in the load capability of the transformer and improved solid insulation life compared with mineral oils. However, stray gassing tests show that hydrogen and hydrocarbons like methane (CH4) and ethane (C2H6) reach very high values, much higher than the limits of the mineral oil standards. Though the standards for these types of esters are yet to evolve, the higher values of hydrocarbon gases in the products available in the market are of concern, as they might be interpreted as a fault in transformer operation. The current paper focuses on developing a class of natural esters with low levels of stray gassing, as measured by American Society for Testing and Materials (ASTM) and International Electrotechnical Commission (IEC) methods, with much lower values compared to the natural ester-based products reported in the literature. The experimental results for these products are explained.
Keywords: Biodegradability, fire point, dissolved gas analysis, natural ester, stray gassing.
5480 A Comparative Analysis of Performance and QoS Issues in MANETs
Authors: Javed Parvez, Mushtaq Ahmad Peer
Abstract:
Mobile Ad hoc networks (MANETs) are collections of wireless mobile nodes dynamically reconfiguring and collectively forming a temporary network. These types of networks assume existence of no fixed infrastructure and are often useful in battle-field tactical operations or emergency search-and-rescue type of operations where fixed infrastructure is neither feasible nor practical. They also find use in ad hoc conferences, campus networks and commercial recreational applications carrying multimedia traffic. All of the above applications of MANETs require guaranteed levels of performance as experienced by the end-user. This paper focuses on key challenges in provisioning predetermined levels of such Quality of Service (QoS). It also identifies functional areas where QoS models are currently defined and used. Evolving functional areas where performance and QoS provisioning may be applied are also identified and some suggestions are provided for further research in this area. Although each of the above functional areas have been discussed separately in recent research studies, since these QoS functional areas are highly correlated and interdependent, a comprehensive and comparative analysis of these areas and their interrelationships is desired. In this paper we have attempted to provide such an overview.
Keywords: Bandwidth Reservation, Congestion, Dynamic Network Topology, End-to-End Delay, Flexible QoS Model for MANET (FQMM), Hidden Terminal, Mobile Ad hoc Network (MANET), Packet Jitter, Queuing, Quality-of-Service (QoS), Relative Bandwidth Service Differentiation (RBSD), Resource ReSerVation Protocol (RSVP).
5479 Fault Detection and Diagnosis of Broken Bar Problem in Induction Motors Base Wavelet Analysis and EMD Method: Case Study of Mobarakeh Steel Company in Iran
Authors: M. Ahmadi, M. Kafil, H. Ebrahimi
Abstract:
Nowadays, induction motors have a significant role in industries. Condition monitoring (CM) of this equipment has gained remarkable importance during recent years due to huge production losses, substantial imposed costs and increases in vulnerability, risk and uncertainty levels. Motor current signature analysis (MCSA) is one of the most important techniques in CM. This method can be used for rotor broken bar detection. Signal processing methods such as the Fast Fourier Transform (FFT), wavelet transform and Empirical Mode Decomposition (EMD) are used for analyzing the MCSA output data. In this study, these signal processing methods are used for broken bar problem detection in Mobarakeh Steel Company induction motors. Based on the wavelet transform method, an index for fault detection, CF, is introduced, which is the variation of the maximum to the mean of the wavelet transform coefficients. We find that, in the broken bar condition, the CF factor is greater than in the healthy condition. Based on the EMD method, the energy of the intrinsic mode functions (IMF) is calculated, and it is found that when motor bars become broken the energy of the IMFs increases.
Keywords: Broken bar, condition monitoring, diagnostics, empirical mode decomposition, Fourier transform, wavelet transform.
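A hedged sketch of the wavelet-based index described above: decompose the stator current with a discrete wavelet transform and take the ratio of the maximum to the mean of the absolute detail coefficients as CF. The choice of wavelet, decomposition level and the exact CF definition used by the authors are not given here, so those are assumptions, as are the synthetic signals.

```python
import numpy as np
import pywt

def cf_index(current: np.ndarray, wavelet: str = "db4", level: int = 5) -> float:
    coeffs = pywt.wavedec(current, wavelet, level=level)    # [cA_n, cD_n, ..., cD_1]
    detail = np.abs(np.concatenate(coeffs[1:]))             # all detail coefficients
    return detail.max() / detail.mean()

t = np.arange(0, 2, 1 / 5000)                               # 2 s of current sampled at 5 kHz
healthy = np.sin(2 * np.pi * 50 * t)
broken  = healthy + 0.05 * np.sin(2 * np.pi * 46 * t)       # sideband typical of a broken bar
print(cf_index(healthy), cf_index(broken))                   # CF is expected to rise for the faulty case
```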
5478 Comparison between Deterministic and Probabilistic Stability Analysis, Featuring Consequent Risk Assessment
Authors: Isabela Moreira Queiroz
Abstract:
Slope stability analyses are largely carried out by deterministic methods and evaluated through a single safety factor. Although it is known that geotechnical parameters can present great dispersion, such analyses consider them fixed and known. The probabilistic methods, in turn, incorporate the variability of the key input parameters (random variables), resulting in a range of values of safety factors, thus enabling the determination of the probability of failure, which is an essential parameter in the calculation of risk (probability multiplied by the consequence of the event). Among the probabilistic methods, there are three frequently used in the geotechnical community: FOSM (First-Order, Second-Moment), Rosenblueth (Point Estimates) and Monte Carlo. This paper presents a comparison between the results from deterministic and probabilistic analyses (FOSM method, Monte Carlo and Rosenblueth) applied to a hypothetical slope. The aim was to evaluate the behavior of the slope and carry out the consequent risk analysis, which is used to calculate the risk and analyze its mitigation and control solutions. It can be observed that the results obtained by the three probabilistic methods were quite close. It should be noticed that the calculation of the risk makes it possible to prioritize the implementation of mitigation measures. Therefore, it is recommended to make a good assessment of the geological-geotechnical model, incorporating the uncertainty in viability, design, construction, operation and closure by means of risk management.
Keywords: Probabilistic methods, risk assessment, risk management, slope stability.
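A minimal Monte Carlo sketch of the probabilistic approach described above, using the infinite-slope factor of safety as a stand-in for the hypothetical slope model (the paper's geometry and strength model are not reproduced here). Friction angle and cohesion are treated as normal random variables; all numbers are illustrative.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 100_000
phi   = np.radians(rng.normal(30.0, 3.0, n))    # friction angle (deg -> rad)
c     = rng.normal(10.0, 2.5, n)                # cohesion (kPa)
gamma, depth, beta = 18.0, 5.0, np.radians(35)  # unit weight (kN/m3), failure depth (m), slope angle

# Infinite-slope factor of safety (dry case), evaluated for every sampled parameter set
fs = (c + gamma * depth * np.cos(beta) ** 2 * np.tan(phi)) / (gamma * depth * np.sin(beta) * np.cos(beta))
pf = np.mean(fs < 1.0)                          # probability of failure
print(f"mean FS = {fs.mean():.2f}, P(failure) = {pf:.3%}")
```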
5477 Towards Real-Time Classification of Finger Movement Direction Using Encephalography Independent Components
Authors: Mohamed Mounir Tellache, Hiroyuki Kambara, Yasuharu Koike, Makoto Miyakoshi, Natsue Yoshimura
Abstract:
This study explores the practicality of using electroencephalographic (EEG) independent components to predict eight-direction finger movements in pseudo-real-time. Six healthy participants with individual-head MRI images performed finger movements in eight directions with two different arm configurations. The analysis was performed in two stages. The first stage consisted of using independent component analysis (ICA) to separate the signals representing brain activity from non-brain activity signals and to obtain the unmixing matrix. The resulting independent components (ICs) were checked, and those reflecting brain-activity were selected. Finally, the time series of the selected ICs were used to predict eight finger-movement directions using Sparse Logistic Regression (SLR). The second stage consisted of using the previously obtained unmixing matrix, the selected ICs, and the model obtained by applying SLR to classify a different EEG dataset. This method was applied to two different settings, namely the single-participant level and the group-level. For the single-participant level, the EEG dataset used in the first stage and the EEG dataset used in the second stage originated from the same participant. For the group-level, the EEG datasets used in the first stage were constructed by temporally concatenating each combination without repetition of the EEG datasets of five participants out of six, whereas the EEG dataset used in the second stage originated from the remaining participants. The average test classification results across datasets (mean ± S.D.) were 38.62 ± 8.36% for the single-participant, which was significantly higher than the chance level (12.50 ± 0.01%), and 27.26 ± 4.39% for the group-level which was also significantly higher than the chance level (12.49% ± 0.01%). The classification accuracy within [–45°, 45°] of the true direction is 70.03 ± 8.14% for single-participant and 62.63 ± 6.07% for group-level which may be promising for some real-life applications. Clustering and contribution analyses further revealed the brain regions involved in finger movement and the temporal aspect of their contribution to the classification. These results showed the possibility of using the ICA-based method in combination with other methods to build a real-time system to control prostheses.
Keywords: Brain-computer interface, BCI, electroencephalography, EEG, finger motion decoding, independent component analysis, pseudo-real-time motion decoding.
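A hedged, minimal sketch of the two-stage pipeline described above: ICA unmixing on a training dataset, selection of brain-related components (here simply by index), a sparse (L1) logistic regression over the IC time series, and reuse of the same unmixing matrix on a second dataset. scikit-learn's L1 logistic regression stands in for the SLR used by the authors, and the data are random placeholders.

```python
import numpy as np
from sklearn.decomposition import FastICA
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X_train = rng.standard_normal((2000, 64))    # samples x channels (placeholder EEG)
y_train = rng.integers(0, 8, 2000)           # 8 finger-movement directions
X_test  = rng.standard_normal((500, 64))
y_test  = rng.integers(0, 8, 500)

ica = FastICA(n_components=20, random_state=0)
S_train = ica.fit_transform(X_train)         # IC time series; the unmixing matrix is learned here
keep = list(range(10))                       # stand-in for visually selected brain-related ICs
clf = LogisticRegression(penalty="l1", solver="liblinear", C=0.1, max_iter=1000)
clf.fit(S_train[:, keep], y_train)

S_test = ica.transform(X_test)               # reuse the same unmixing matrix on the second dataset
print("test accuracy:", clf.score(S_test[:, keep], y_test))  # chance level = 1/8
```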
5476 Material and Parameter Analysis of the PolyJet Process for Mold Making Using Design of Experiments
Authors: A. Kampker, K. Kreisköther, C. Reinders
Abstract:
Since additive manufacturing technologies constantly advance, the use of this technology in mold making seems reasonable. Many manufacturers of additive manufacturing machines, however, do not offer any suggestions on how to parameterize the machine to achieve optimal results for mold making. The purpose of this research is to determine the interdependencies of different materials and parameters within the PolyJet process by using design of experiments (DoE), to additively manufacture molds, e.g. for thermoforming and injection molding applications. Therefore, the general requirements of thermoforming molds, such as heat resistance, surface quality and hardness, have been identified. Then, different materials and parameters of the PolyJet process, such as the orientation of the printed part, the layer thickness, the printing mode (matte or glossy), the distance between printed parts and the scaling of parts, have been examined. The multifactorial analysis covers the following properties of the printed samples: Tensile strength, tensile modulus, bending strength, elongation at break, surface quality, heat deflection temperature and surface hardness. The key objective of this research is that by joining the results from the DoE with the requirements of the mold making, optimal and tailored molds can be additively manufactured with the PolyJet process. These additively manufactured molds can then be used in prototyping processes, in process testing and in small to medium batch production.
Keywords: Additive manufacturing, design of experiments, mold making, PolyJet.
5475 Development and Validation of an Instrument Measuring the Coping Strategies in Situations of Stress
Authors: Lucie Côté, Martin Lauzier, Guy Beauchamp, France Guertin
Abstract:
Stress causes deleterious effects to the physical, psychological and organizational levels, which highlight the need to use effective coping strategies to deal with it. Several coping models exist, but they don’t integrate the different strategies in a coherent way nor do they take into account the new research on the emotional coping and acceptance of the stressful situation. To fill these gaps, an integrative model incorporating the main coping strategies was developed. This model arises from the review of the scientific literature on coping and from a qualitative study carried out among workers with low or high levels of stress, as well as from an analysis of clinical cases. The model allows one to understand under what circumstances the strategies are effective or ineffective and to learn how one might use them more wisely. It includes Specific Strategies in controllable situations (the Modification of the Situation and the Resignation-Disempowerment), Specific Strategies in non-controllable situations (Acceptance and Stubborn Relentlessness) as well as so-called General Strategies (Wellbeing and Avoidance). This study is intended to undertake and present the process of development and validation of an instrument to measure coping strategies based on this model. An initial pool of items has been generated from the conceptual definitions and three expert judges have validated the content. Of these, 18 items have been selected for a short form questionnaire. A sample of 300 students and employees from a Quebec university was used for the validation of the questionnaire. Concerning the reliability of the instrument, the indices observed following the inter-rater agreement (Krippendorff’s alpha) and the calculation of the coefficients for internal consistency (Cronbach's alpha) are satisfactory. To evaluate the construct validity, a confirmatory factor analysis using MPlus supports the existence of a model with six factors. The results of this analysis suggest also that this configuration is superior to other alternative models. The correlations show that the factors are only loosely related to each other. Overall, the analyses carried out suggest that the instrument has good psychometric qualities and demonstrates the relevance of further work to establish predictive validity and reconfirm its structure. This instrument will help researchers and clinicians better understand and assess coping strategies to cope with stress and thus prevent mental health issues.
Keywords: Acceptance, coping strategies, measurement instrument, questionnaire, stress, validation process.
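The reliability check mentioned above relies on Cronbach's alpha; a minimal sketch of that computation on a synthetic 3-item scale is shown below (the item count and responses are illustrative, not the study's data).

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """items: 2-D array, rows = respondents, columns = items of one scale."""
    k = items.shape[1]
    item_var = items.var(axis=0, ddof=1).sum()     # sum of item variances
    total_var = items.sum(axis=1).var(ddof=1)      # variance of the total score
    return k / (k - 1) * (1 - item_var / total_var)

scale = np.array([[4, 5, 4], [3, 3, 4], [5, 5, 5], [2, 3, 2], [4, 4, 5], [3, 2, 3]])
print(round(cronbach_alpha(scale), 3))
```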
5474 Viability of Rice Husk Ash Concrete Brick/Block from Green Electricity in Bangladesh
Authors: Mohammad A. N. M. Shafiqul Karim
Abstract:
As a developing country, Bangladesh has to face numerous challenges. Achieving self-sufficiency in electricity, contributing to climate change mitigation by reducing carbon emissions and bringing the backward population of society into the mainstream are even more challenging. Therefore, it is essential to ensure recycled use of local products to the maximum level in every sector. Some private organizations have already worked alongside the government to bring the backward population into the mainstream by developing their financial capacities. Rice husk is the largest single category of the total energy supply in Bangladesh. As part of this strategy, rice husk can play a great role as a promising renewable energy source, which is readily available, has considerable environmental benefits and can produce electricity while ensuring multiple uses of its byproducts in construction technology. For the first time in Bangladesh, an experimental multidimensional project based on rice husk electricity and Rice Husk Ash (RHA) concrete brick/block has already been started under Green Eco-Tech Limited. Project analysis, opportunity, sustainability, the high monitoring component, limitations and finally the evaluated data reflecting the viability of establishing more projects using rice husk are discussed in this paper. The use of RHA, the by-product of producing green electricity from rice husk, for making RHA concrete brick/block in the Bangladeshi context is also discussed here.
Keywords: Project analysis, rice husk, rice husk ash concrete brick/block, compressive strength of rice husk ash concrete brick/block.
5473 Authenticity of Lipid and Soluble Sugar Profiles of Various Oat Cultivars (Avena sativa)
Authors: Marijana M. Ačanski, Kristian A. Pastor, Djura N. Vujić
Abstract:
The identification of lipid and soluble sugar components in flour samples of different cultivars belonging to common oat species (Avena sativa L.) was performed: spring oat, winter oat and hulless oat. Fatty acids were extracted from flour samples with n-hexane, and derivatized into volatile methyl esters, using TMSH (trimethylsulfonium hydroxide in methanol). Soluble sugars were then extracted from defatted and dried samples of oat flour with 96% ethanol, and further derivatized into corresponding TMS-oximes, using hydroxylamine hydrochloride solution and BSTFA (N,O-bis-(trimethylsilyl)-trifluoroacetamide). The hexane and ethanol extracts of each oat cultivar were analyzed using GC-MS system. Lipid and simple sugar compositions are very similar in all samples of investigated cultivars. Chemometric tool was applied to numeric values of automatically integrated surface areas of detected lipid and simple sugar components in their corresponding derivatized forms. Hierarchical cluster analysis shows a very high similarity between the investigated flour samples of oat cultivars, according to the fatty acid content (0.9955). Moderate similarity was observed according to the content of soluble sugars (0.50). These preliminary results support the idea of establishing methods for oat flour authentication, and provide the means for distinguishing oat flour samples, regardless of the variety, from flour samples made of other cereal species, just by lipid and simple sugar profile analysis.
Keywords: Authentication, chemometrics, GC-MS, lipid and soluble sugar composition, oat cultivars.
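A hedged sketch of the chemometric step described above: hierarchical clustering of flour samples from their integrated GC-MS peak areas, with the cophenetic correlation as a single similarity figure of the kind reported in the abstract. The peak-area matrix, sample count and distance/linkage choices are assumptions for illustration.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, cophenet
from scipy.spatial.distance import pdist

rng = np.random.default_rng(1)
peak_areas = rng.random((6, 12))                  # 6 oat flour samples x 12 fatty acid peaks (synthetic)
d = pdist(peak_areas, metric="euclidean")
Z = linkage(d, method="average")                  # hierarchical cluster analysis
coph_corr, _ = cophenet(Z, d)                     # high value => samples cluster consistently
print(f"cophenetic correlation = {coph_corr:.4f}")
```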
5472 From Risk/Security Analysis via Timespace to a Model of Human Vulnerability and Human Security
Authors: Anders Troedsson
Abstract:
For us humans, risk and insecurity are intimately linked to vulnerabilities - where there is vulnerability, there is potentially risk and insecurity. Reducing vulnerability through compensatory measures means decreasing the likelihood that a certain external event will be qualified as a risk/threat/assault, and thus also means increasing the individual’s sense of security. The paper suggests that a meaningful way to approach the study of risk/insecurity is to organize thinking about the vulnerabilities that external phenomena evoke in humans as perceived by them. Such phenomena are, through a set of given vulnerabilities, potentially translated into perceptions of "insecurity." An ontological discussion about salient timespace characteristics of external phenomena as perceived by humans, including those which can potentially be qualified as risk/threat/assault, leads to the positing of two dimensions which are central for describing what in the paper is called the essence of risk/threat/assault. As is argued, such modeling helps analysis steer free of the subjective factor which is intimately connected to human perception and which mediates between phenomena “out there” potentially identified as risk/threat/assault, and their translation into an experience of security or insecurity. A proposed set of universally given vulnerabilities is scrutinized with the help of the two dimensions, resulting in a modeling effort featuring four realms of vulnerabilities which together represent a dynamic whole. This model in turn informs modeling on human security.
Keywords: Human vulnerabilities, human security, inert-immediate, material-immaterial, timespace.
5471 VHL, PBRM1 and SETD2 Genes in Kidney Cancer: A Molecular Investigation
Authors: Rozhgar A. Khailany, Mehri Igci, Emine Bayraktar, Sakip Erturhan, Metin Karakok, Ahmet Arslan
Abstract:
Kidney cancer is the most lethal urological cancer, accounting for 3% of adult malignancies. VHL, a tumor-suppressor gene, is best known to be associated with renal cell carcinoma (RCC). VHL functions as a negative regulator of hypoxia-inducible factors. Recent sequencing efforts have identified several novel frequent mutations of histone modifying and chromatin remodeling genes in ccRCC (clear cell RCC), including PBRM1 and SETD2. The PBRM1 gene encodes the BAF180 protein, which is involved in transcriptional activation and repression of selected genes. SETD2 encodes a histone methyltransferase, which may play a role in suppressing tumor development. In this study, RNAs of 30 paired tumor and normal samples, grouped according to the type of kidney cancer and the clinical characteristics of the patients, including gender and average age, were examined by RT-PCR, SSCP and sequencing techniques. VHL, PBRM1 and SETD2 expressions were relatively down-regulated; however, statistically no significance was found (Wilcoxon signed-rank test, p>0.05). Interestingly, contrary to previous studies, no mutation was observed. Understanding the molecular mechanisms involved in the pathogenesis of RCC has aided the development of molecular-targeted drugs for kidney cancer. Further analysis is required to identify the responsible genes, other than VHL, PBRM1 and SETD2, in kidney cancer.
Keywords: Kidney cancer, molecular biomarker, expression analysis, mutation screening.
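A minimal sketch of the statistical comparison reported above: a Wilcoxon signed-rank test on paired tumor vs. normal expression values for one gene. The values are synthetic placeholders, not the study's RT-PCR data.

```python
from scipy import stats

tumor  = [0.62, 0.80, 0.55, 0.91, 0.47, 0.70, 0.66, 0.73]   # relative expression, tumor samples (hypothetical)
normal = [1.00, 0.95, 0.88, 1.10, 0.92, 1.05, 0.97, 1.02]   # matched normal samples (hypothetical)

stat, p = stats.wilcoxon(tumor, normal)
print(f"W = {stat}, p = {p:.3f}  ->  not significant if p > 0.05")
```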
5470 Trend Analysis for Extreme Rainfall Events in New South Wales, Australia
Authors: Evan Hajani, Ataur Rahman, Khaled Haddad
Abstract:
Climate change will affect the hydrological cycle in many different ways such as increase in evaporation and rainfalls. There have been growing interests among researchers to identify the nature of trends in historical rainfall data in many different parts of the world. This paper examines the trends in annual maximum rainfall data from 30 stations in New South Wales, Australia by using two non-parametric tests, Mann-Kendall (MK) and Spearman’s Rho (SR). Rainfall data were analyzed for fifteen different durations ranging from 6 min to 3 days. It is found that the sub-hourly durations (6, 12, 18, 24, 30 and 48 minutes) show statistically significant positive (upward) trends whereas longer duration (subdaily and daily) events generally show a statistically significant negative (downward) trend. It is also found that the MK test and SR test provide notably different results for some rainfall event durations considered in this study. Since shorter duration sub-hourly rainfall events show positive trends at many stations, the design rainfall data based on stationary frequency analysis for these durations need to be adjusted to account for the impact of climate change. These shorter durations are more relevant to many urban development projects based on smaller catchments having a much shorter response time.
Keywords: Climate change, Mann-Kendall test, Spearman’s Rho test, trends, design rainfall.
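A minimal sketch of the two trend tests used above, applied to one annual-maximum rainfall series (synthetic values, far shorter than a real record). The Mann-Kendall S statistic and its normal approximation are computed directly, without the tie correction; Spearman's rho comes from SciPy.

```python
import numpy as np
from scipy import stats

x = np.array([21.0, 25.5, 19.8, 28.3, 30.1, 27.4, 33.0, 29.8, 35.2, 31.9])  # mm, one value per year

# Mann-Kendall test
n = len(x)
s = sum(np.sign(x[j] - x[i]) for i in range(n - 1) for j in range(i + 1, n))
var_s = n * (n - 1) * (2 * n + 5) / 18.0
z = (s - np.sign(s)) / np.sqrt(var_s)            # continuity-corrected normal approximation
p_mk = 2 * (1 - stats.norm.cdf(abs(z)))
print(f"MK: S={s:.0f}, Z={z:.2f}, p={p_mk:.3f}")

# Spearman's rho of the series against time
rho, p_sr = stats.spearmanr(np.arange(n), x)
print(f"SR: rho={rho:.2f}, p={p_sr:.3f}")
```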
5469 Multistage Data Envelopment Analysis Model for Malmquist Productivity Index Using Grey's System Theory to Evaluate Performance of Electric Power Supply Chain in Iran
Authors: Mesbaholdin Salami, Farzad Movahedi Sobhani, Mohammad Sadegh Ghazizadeh
Abstract:
Evaluation of organizational performance is among the most important measures that help organizations and entities continuously improve their efficiency. Organizations can use the existing data and results from the comparison of units under investigation to obtain an estimation of their performance. The Malmquist Productivity Index (MPI) is an important index in the evaluation of overall productivity, which considers technological developments and technical efficiency at the same time. This article proposed a model based on the multistage MPI, considering limited data (Grey’s theory). This model can evaluate the performance of units using limited and uncertain data in a multistage process. It was applied by the electricity market manager to Iran’s electric power supply chain (EPSC), which contains uncertain data, to evaluate the performance of its actors. Results from solving the model showed an improvement in the accuracy of future performance of the units under investigation, using the Grey’s system theory. This model can be used in all case studies, in which MPI is used and there are limited or uncertain data.
Keywords: Malmquist Index, Grey's Theory, Charnes Cooper & Rhodes (CCR) Model, network data envelopment analysis, Iran electricity power chain.
5468 Urban Greenery in the Greatest Polish Cities: Analysis of Spatial Concentration
Authors: Elżbieta Antczak
Abstract:
Cities offer important opportunities for economic development and for expanding access to basic services, including health care and education, for large numbers of people. Moreover, green areas (as an integral part of sustainable urban development) present a major opportunity for improving urban environments, quality of lives and livelihoods. This paper examines, using spatial concentration and spatial taxonomic measures, regional diversification of greenery in the cities of Poland. The analysis includes location quotients, the Lorenz curve, the locational Gini index, the synthetic index of greenery and spatial statistics tools: (1) to verify the occurrence of strong concentration or dispersion of the phenomenon in time and space depending on the variable category, and (2) to study if the level of greenery depends on spatial autocorrelation. The data include the greatest Polish cities, categories of urban greenery (parks, lawns, street greenery, and green areas on housing estates, cemeteries, and forests) and the time span 2004-2015. According to the obtained estimations, most of the cities in Poland are already taking measures to become greener. However, in the country there are still many barriers to well-balanced urban greenery development (e.g. uncontrolled urban sprawl, poor management as well as lack of spatial urban planning systems).
Keywords: Greenery, urban areas, regional spatial diversification and concentration, spatial taxonomic measure.
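A hedged sketch of two of the measures named above: the location quotient of urban greenery (green-area share relative to population share) and a locational Gini computed from the resulting Lorenz curve. This is one common formulation, not necessarily the paper's exact definition, and the city data are made-up placeholders.

```python
import numpy as np

green = np.array([5.2, 3.1, 8.4, 2.0, 6.7])      # green area per city (km^2), hypothetical
pop   = np.array([1.7, 0.8, 1.1, 0.4, 0.9])      # population per city (millions), hypothetical

lq = (green / green.sum()) / (pop / pop.sum())    # location quotient per city
order = np.argsort(lq)                            # sort cities by LQ to build the Lorenz curve
cx = np.concatenate([[0], np.cumsum(pop[order]) / pop.sum()])
cy = np.concatenate([[0], np.cumsum(green[order]) / green.sum()])
gini = 1 - np.sum((cx[1:] - cx[:-1]) * (cy[1:] + cy[:-1]))   # area under the Lorenz curve (trapezoids)
print("LQ:", np.round(lq, 2), " locational Gini:", round(gini, 3))
```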
5467 A Corpus-Based Analysis on Code-Mixing Features in Mandarin-English Bilingual Children in Singapore
Authors: Xunan Huang, Caicai Zhang
Abstract:
This paper investigated the code-mixing features in Mandarin-English bilingual children in Singapore. First, it examined whether the code-mixing rate was different in Mandarin Chinese and English contexts. Second, it explored the syntactic categories of code-mixing in Singapore bilingual children. Moreover, this study investigated whether morphological information was preserved when inserting syntactic components into the matrix language. Data are derived from the Singapore Bilingual Corpus, in which the recordings and transcriptions of sixty English-Mandarin 5-to-6-year-old children were preserved for analysis. Results indicated that the rate of code-mixing was asymmetrical in the two language contexts, with the rate being significantly higher in the Mandarin context than that in the English context. The asymmetry is related to language dominance in that children are more likely to code-mix when using their nondominant language. Concerning the syntactic categories of code-mixing words in the Singaporean bilingual children, we found that noun-mixing, verb-mixing, and adjective-mixing are the three most frequently used categories in code-mixing in the Mandarin context. This pattern mirrors the syntactic categories of code-mixing in the Cantonese context in Cantonese-English bilingual children, and the general trend observed in lexical borrowing. Third, our results also indicated that English vocabularies that carry morphological information are embedded in bare forms in the Mandarin context. These findings shed light upon how bilingual children take advantage of the two languages in mixed utterances in a bilingual environment.
Keywords: Code-mixing, Mandarin Chinese, English, bilingual children.
5466 Educational Knowledge Transfer in Indigenous Mexican Areas Using Cloud Computing
Authors: L. R. Valencia Pérez, J. M. Peña Aguilar, A. Lamadrid Álvarez, A. Pastrana Palma, H. F. Valencia Pérez, M. Vivanco Vargas
Abstract:
This work proposes a Cooperation-Competitive (Coopetitive) approach that allows coordinated work among the Secretary of Public Education (SEP), the Autonomous University of Querétaro (UAQ) and government funds from the National Council for Science and Technology (CONACYT) or some other international organizations, to work on an overall knowledge transfer strategy with e-learning over the Cloud, where experts in junior high and high school education, working in multidisciplinary teams, perform analysis, evaluation, design, production, validation and knowledge transfer at large scale using a Cloud Computing platform. This allows teachers and students to have all the information required to ensure a homologated national knowledge of topics such as mathematics, statistics, chemistry, history, ethics, civism, etc. This work will start with a pilot test in Spanish and initially in two regional dialects, Otomí and Náhuatl. Otomí has more than 285,000 indigenous speakers in Querétaro and Mexico's central region. Náhuatl is the most widely spoken indigenous dialect in Mexico, with more than 1,550,000 speakers. Phase one of the project takes into account negotiations with indigenous tribes from different regions, and the information and communication technologies to deliver the knowledge to the indigenous schools in their native dialect. The methodology includes the following main milestones: identification of the indigenous areas where Otomí and Náhuatl are the spoken dialects, research with the SEP on the location of actual indigenous schools, analysis and inventory of current school conditions, negotiation with tribe chiefs, analysis of the technological communication requirements to reach the indigenous communities, identification and inventory of local teachers' technology knowledge, selection of a pilot topic, analysis of actual student competence with the traditional education system, identification of local translators, design of the e-learning platform, design of the multimedia resources and storage strategy for "Cloud Computing", translation of the topic into both dialects, indigenous teachers' training, pilot test, course release, project follow-up, analysis of student requirements for the new technological platform, and definition of a new and improved proposal with greater reach in topics and regions. The importance of phase one of the project is manifold: it includes the proposal of a working technological scheme, focusing on the cultural impact in Mexico, so that indigenous tribes can improve their knowledge about new forms of crop improvement, home storage technologies, proven home remedies for common diseases and ways of preparing foods containing major nutrients, disclose the strengths and weaknesses of each region, communicate through cloud computing platforms offering regional products and open communication spaces for inter-indigenous cultural exchange.
Keywords: Mexicans indigenous tribes, education, knowledge transfer, cloud computing, Otomí, Náhuatl, language.
5465 Spectral Amplitude Coding Optical CDMA: Performance Analysis of PIIN Reduction Using VC Code Family
Authors: Hassan Yousif Ahmed, Ibrahima Faye, N.M.Saad, S.A. Aljined
Abstract:
Multi-user interference (MUI) is the main reason for system deterioration in the Spectral Amplitude Coding Optical Code Division Multiple Access (SAC-OCDMA) system. MUI increases with the number of simultaneous users, resulting in a higher bit error probability, and limits the maximum number of simultaneous users. On the other hand, the phase-induced intensity noise (PIIN) problem, which originates from the spontaneous emission of the broadband source and from MUI, severely limits the system performance and should be addressed as well. Since the MUI is caused by the interference of simultaneous users, reducing the MUI value as much as possible is desirable. In this paper, an extensive study of the system performance in terms of MUI and PIIN reduction is presented. Vector Combinatorial (VC) code families are adopted as the signature sequences for the performance analysis, and a comparison with reported codes is performed. The results show that, when the received power increases, the PIIN noise for all the codes increases linearly. The results also show that the effect of PIIN can be minimized by increasing the code weight, which preserves an adequate signal-to-noise ratio and bit error probability. A comparison study between the proposed code and existing codes such as Modified Frequency Hopping (MFH) and Modified Quadratic Congruence (MQC) has been carried out.
Keywords: FBG, MUI, PIIN, SAC-OCDMA, VCC.
5464 Error Rate Probability for Coded MQAM with MRC Diversity in the Presence of Cochannel Interferers over Nakagami-Fading Channels
Authors: J.S. Ubhi, M.S. Patterh, T.S. Kamal
Abstract:
Exact expressions for the bit-error probability (BEP) for coherent square detection of uncoded and coded M-ary quadrature amplitude modulation (MQAM) using an array of antennas with maximal ratio combining (MRC) in a flat-fading, interference-limited system in a Nakagami-m fading environment are derived. The analysis assumes an arbitrary number of independent and identically distributed Nakagami interferers. The results for coded MQAM are computed numerically for the case of the (24,12) extended Golay code and compared with uncoded MQAM by plotting error probabilities versus average signal-to-interference ratio (SIR) for various values of the order of diversity N and the number of distinct symbols M, in order to examine the effect of cochannel interferers on the performance of the digital communication system. The diversity gains and net gains are also presented in tabular form in order to examine the performance of the digital communication system in the presence of interferers as the order of diversity increases. The analytical results presented in this paper are expected to provide useful information needed for the design and analysis of digital communication systems with space diversity in wireless fading channels.
Keywords: Cochannel interference, maximal ratio combining, Nakagami-m fading, wireless digital communications.
5463 Finite Volume Method for Flow Prediction Using Unstructured Meshes
Authors: Juhee Lee, Yongjun Lee
Abstract:
In designing low-energy-consuming buildings, the heat transfer through a large glass or wall becomes critical. Multiple layers of window glasses and walls are employed for high insulation. The gravity-driven air flow between window glasses or wall layers is a natural heat convection phenomenon and a key part of the heat transfer. As the first step of the natural heat transfer analysis, this study presents the development and application of a finite volume method for the numerical computation of viscous incompressible flows. It will become a part of the natural convection analysis with a high-order scheme, multi-grid method, and dual time step in the future. A finite volume method based on a fully implicit second-order scheme is used to discretize and solve the fluid flow on unstructured grids composed of arbitrary-shaped cells. The integrals of the governing equations are discretized in the finite volume manner using a collocated arrangement of variables. The convergence of the SIMPLE segregated algorithm for the solution of the coupled nonlinear algebraic equations is accelerated by using a sparse matrix solver such as BiCGSTAB. The method used in the present study is verified by applying it to some flows for which either the numerical solution is known or the solution can be obtained using another numerical technique available in other research. The accuracy of the method is assessed through grid refinement.
Keywords: Finite volume method, fluid flow, laminar flow, unstructured grid.
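A minimal sketch of the linear-algebra step mentioned above: a Poisson-like system of the kind that arises in the pressure-correction stage of the SIMPLE loop, solved with BiCGSTAB from SciPy's sparse solvers. The structured 2-D Laplacian below stands in for the actual unstructured-grid coefficient matrix.

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import bicgstab

n = 50                                            # 50 x 50 pressure-correction cells (illustrative)
I = sp.identity(n)
T = sp.diags([-1, 4, -1], [-1, 0, 1], shape=(n, n))
S = sp.diags([-1, -1], [-1, 1], shape=(n, n))
A = (sp.kron(I, T) + sp.kron(S, I)).tocsr()       # standard 5-point Laplacian stencil
b = np.ones(n * n)                                # stand-in mass-imbalance source term

p_corr, info = bicgstab(A, b)                     # info == 0 indicates successful convergence
print("converged" if info == 0 else f"info={info}", np.linalg.norm(A @ p_corr - b))
```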