Search results for: travel time estimation
17625 Numerical Computation of Specific Absorption Rate and Induced Current for Workers Exposed to Static Magnetic Fields of MRI Scanners
Authors: Sherine Farrag
Abstract:
Currently used MRI scanners in Cairo possess static magnetic fields (SMFs) varying from 0.25 T up to 3 T; more than half of them possess an SMF of 1.5 T. The SMF of the magnet determines the diagnostic power of a scanner, but not the worker's exposure profile. This research paper presents an approach for the numerical computation of induced electric fields and SAR values through estimation of the fringe static magnetic field. The iso-gauss lines of the MR scanner were mapped, and a polynomial function of the 7th degree was generated and tested. The induced current field due to worker motion in the SMF and the SAR values for organs and tissues were calculated. The results illustrate that the computation tool used permits quick, accurate MRI iso-gauss mapping and calculation of SAR values, which can then be used to assess the occupational exposure profile of MRI operators.
Keywords: MRI occupational exposure, MRI safety, induced current density, specific absorption rate, static magnetic fields
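The polynomial fit of the fringe field described above can be sketched as follows. The distance and field values below are invented for illustration, not taken from the paper's iso-gauss measurements; only the fitting step (a 7th-degree least-squares polynomial) mirrors the abstract.

```python
import numpy as np

# Hypothetical fringe-field measurements: distance from the magnet bore (m)
# versus static magnetic field strength (mT). Values are illustrative only.
distance = np.linspace(0.5, 5.0, 20)
field_mT = 1500.0 * np.exp(-1.8 * distance) + np.random.default_rng(0).normal(0, 0.5, 20)

# Fit a 7th-degree polynomial, as in the paper, and wrap it for evaluation.
coeffs = np.polyfit(distance, field_mT, deg=7)
poly = np.poly1d(coeffs)

# The fitted polynomial can now map any position in range to an estimated field.
estimate = poly(1.0)
```

Once such a polynomial is available, worker motion through the fringe field can be converted into a time-varying flux, from which induced currents are estimated.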
Procedia PDF Downloads 430
17624 Fast Short-Term Electrical Load Forecasting under High Meteorological Variability with a Multiple Equation Time Series Approach
Authors: Charline David, Alexandre Blondin Massé, Arnaud Zinflou
Abstract:
In 2016, Clements, Hurn, and Li proposed a multiple equation time series approach for short-term load forecasting, reporting an average mean absolute percentage error (MAPE) of 1.36% on an 11-year dataset for the Queensland region in Australia. We present an adaptation of their model to the electrical power load consumption of the whole province of Quebec, Canada. More precisely, we take into account two additional meteorological variables, cloudiness and wind speed, on top of temperature, as well as multiple meteorological measurements taken at different locations across the territory. We also consider other minor improvements. Our final model achieves an average MAPE of 1.79% over an 8-year dataset.
Keywords: short-term load forecasting, special days, time series, multiple equations, parallelization, clustering
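For reference, the MAPE metric quoted above reduces to a one-line formula; the hourly load figures below are made up purely to illustrate the arithmetic.

```python
def mape(actual, forecast):
    """Mean absolute percentage error, the accuracy metric reported above."""
    return 100.0 * sum(abs((a - f) / a) for a, f in zip(actual, forecast)) / len(actual)

# Illustrative hourly loads (MW) and forecasts; numbers are hypothetical.
actual = [18500, 19200, 20100, 21000]
forecast = [18300, 19500, 19900, 21400]
score = mape(actual, forecast)
```

A perfect forecast gives 0%; scores in the 1.36-1.79% range reported above correspond to average errors of roughly that fraction of the actual load.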
Procedia PDF Downloads 103
17623 Assessing Project Performance through Work Sampling and Earned Value Analysis
Authors: Shobha Ramalingam
Abstract:
The majority of infrastructure projects are affected by time overruns, resulting in project delays and subsequently cost overruns. A time overrun may vary from a few months to as much as five or more years, placing project viability at risk. One probable reason noted in the literature for this outcome is poor productivity. Researchers contend that productivity in construction has only marginally increased over the years. While studies in the literature have extensively focused on the time and cost parameters of projects, limited studies integrate time and cost with productivity to assess project performance. To this end, a study was conducted to understand the project delay factors concerning cost, time and productivity. A case-study approach was adopted to collect rich data from a nuclear power plant project site for two months through observation, interviews and document review. The data were analyzed using three different approaches for a comprehensive understanding. First, a root-cause analysis was performed on the data using Ishikawa's fish-bone diagram technique to identify the various factors contributing to the time delay. Based on it, a questionnaire was designed and circulated to the concerned executives, including project engineers and contractors, to determine the frequency of occurrence of the delays, which was then compiled and presented to the management for possible mitigation. Second, a productivity analysis was performed on selected activities, including rebar bending and concreting, through a time-motion study. Third, three years of construction cost data allowed the cost performance to be analyzed using the earned value management technique. Together, the three techniques allowed the key factors behind poor project performance and productivity loss in the construction of the nuclear power plant to be identified systematically and comprehensively.
The findings showed that improper planning and coordination between multiple trades, concurrent operations, improper workforce and material management, and fatigue due to overtime were some of the key factors that led to delays and poor productivity. The findings are expected to act as a stepping stone for further research and have implications for practitioners.
Keywords: earned value analysis, time performance, project costs, project delays, construction productivity
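The earned value technique mentioned above reduces to a few standard ratios; the currency figures below are hypothetical and only illustrate the arithmetic.

```python
# Illustrative earned value calculation (all figures are hypothetical).
planned_value = 500_000   # PV: budgeted cost of work scheduled to date
earned_value  = 450_000   # EV: budgeted cost of work actually performed
actual_cost   = 480_000   # AC: actual cost incurred to date

cost_variance     = earned_value - actual_cost    # negative => over budget
schedule_variance = earned_value - planned_value  # negative => behind schedule
cpi = earned_value / actual_cost                  # cost performance index (<1 is bad)
spi = earned_value / planned_value                # schedule performance index (<1 is bad)
```

A project with CPI and SPI both below 1, as here, is simultaneously over budget and behind schedule, the combination the abstract associates with poor productivity.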
Procedia PDF Downloads 97
17622 Regionalization of IDF Curves, by Interpolating Intensity and Adjustment Parameters - Application to Boyacá, Colombia
Authors: Pedro Mauricio Acosta, Carlos Andrés Caro
Abstract:
This research presents the regionalization of IDF curves for the department of Boyacá, Colombia, which comprises 16 towns, including the provincial capital, Tunja. For the regionalization, the adjustment parameters (u and alpha) of the IDF curves at the stations in the study area were used. A similar regionalization was performed by interpolating intensities. For the regionalization by parameters, the intensity-duration-frequency curves were constructed using the ordinary moments and maximum likelihood estimation methods. Regionalization and interpolation of the data were performed with the assistance of ArcGIS software. Throughout the project, the option offering the best level of reliability was sought in order to determine which of the two ways to regionalize is preferable. For the regionalization of intensities, isoline maps were produced, each associated with a different return period and duration, in order to build IDF curves across the study area. For the regionalization by parameters, maps associated with each parameter were likewise produced.
Keywords: intensity-duration-frequency curves, regionalization, hydrology
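As a hedged sketch of the ordinary-moments estimation step, the example below fits a Gumbel distribution (a common choice for IDF construction; the abstract does not name the distribution used) to annual-maximum intensities and inverts it for a return period. All numbers are invented.

```python
import math

# Hypothetical annual-maximum intensities (mm/h) for one duration at one station.
intensities = [42.0, 55.3, 38.1, 61.7, 47.9, 50.2, 44.6, 58.4, 40.8, 53.1]

n = len(intensities)
mean = sum(intensities) / n
std = math.sqrt(sum((x - mean) ** 2 for x in intensities) / (n - 1))

# Method-of-moments (ordinary moments) estimators for the Gumbel distribution.
alpha = math.sqrt(6.0) * std / math.pi   # scale parameter
u = mean - 0.5772 * alpha                # location parameter (Euler-Mascheroni)

def intensity(T):
    """Design intensity for return period T (years) from the fitted Gumbel."""
    return u - alpha * math.log(-math.log(1.0 - 1.0 / T))

i10 = intensity(10)   # e.g., the 10-year design intensity
```

Repeating this fit per station and per duration, then interpolating either the resulting intensities or the (u, alpha) parameters over the map, corresponds to the two regionalization routes compared in the abstract.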
Procedia PDF Downloads 325
17621 Impact of Digitized Monitoring & Evaluation System in Technical Vocational Education and Training
Authors: Abdul Ghani Rajput
Abstract:
The monitoring and evaluation (M&E) concept has been adopted by Technical Vocational Education and Training (TVET) organizations to track progress at continuous intervals based on planned interventions and, subsequently, to evaluate impact, quality assurance and sustainability. In a digital world, TVET providers prefer real-time information for monitoring training activities. The benefits and challenges of digitized, real-time M&E information systems have not been sufficiently examined to date. This paper looks at the impact of digitized M&E in the TVET sector by analyzing two case studies and describes the benefits and challenges of using a digitized M&E system. Finally, digitized M&E is identified as a carrier of high potential for the TVET sector.
Keywords: digitized M&E, innovation, quality assurance, TVET
Procedia PDF Downloads 230
17620 Short-Term Operation Planning for Energy Management of Exhibition Hall
Authors: Yooncheol Lee, Jeongmin Kim, Kwang Ryel Ryu
Abstract:
This paper deals with the establishment of a short-term operational plan for an air conditioner for the efficient energy management of an exhibition hall. The short-term operational plan is composed of a time series of operational schedules, which we search using genetic algorithms. Establishing an operational schedule should take into account the future trends of the variables affecting the exhibition hall environment. To reflect continuously changing factors such as external temperature and occupancy, short-term operational plans should be updated in real time. However, it takes too much time to evaluate a short-term operational plan using EnergyPlus, a building simulation tool, so it is difficult to update the operational plan in real time. To evaluate short-term operational plans quickly, we designed prediction models based on machine learning. These models, created by learning from past operational data, are accurate and fast. The collection of operational data and the verification of operational plans were done using EnergyPlus. Experimental results show that the proposed method can save energy compared to the reactive control method.
Keywords: exhibition hall, energy management, predictive model, simulation-based optimization
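The search loop described above, a genetic algorithm scoring candidate schedules against a fast learned model instead of EnergyPlus, can be sketched as follows. The 24-slot on/off schedule and the surrogate cost function are simplifications invented for illustration; in the paper the surrogate would be a trained prediction model.

```python
import random

random.seed(42)

# Toy surrogate: predicted daily cost of a 24-slot on/off HVAC schedule.
# A learned regression model would stand in for EnergyPlus here; this
# simple function is only a fast, illustrative placeholder.
def predicted_cost(schedule):
    comfort_penalty = sum(1 for h in range(8, 18) if schedule[h] == 0)
    energy = sum(schedule)                 # one unit per hour the unit runs
    return energy + 5 * comfort_penalty    # discomfort weighted more heavily

def genetic_search(pop_size=30, generations=40):
    pop = [[random.randint(0, 1) for _ in range(24)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=predicted_cost)          # fitness evaluation via surrogate
        survivors = pop[: pop_size // 2]      # truncation selection
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = random.sample(survivors, 2)
            cut = random.randrange(1, 23)     # one-point crossover
            child = a[:cut] + b[cut:]
            i = random.randrange(24)          # single-bit mutation
            child[i] = 1 - child[i]
            children.append(child)
        pop = survivors + children
    return min(pop, key=predicted_cost)

best = genetic_search()
```

Because the surrogate evaluates in microseconds rather than the minutes a full simulation takes, the plan can be re-optimized whenever temperature or occupancy forecasts change.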
Procedia PDF Downloads 339
17619 New Machine Learning Optimization Approach Based on Input Variables Disposition Applied for Time Series Prediction
Authors: Hervice Roméo Fogno Fotsoa, Germaine Djuidje Kenmoe, Claude Vidal Aloyem Kazé
Abstract:
One of the main applications of machine learning is the prediction of time series, but a more accurate prediction requires a more optimal machine learning model. Several optimization techniques have been developed, but without considering the disposition of the system's input variables. Thus, this work presents a new machine learning architecture optimization technique based on the optimal disposition of the input variables. The validations are done on the prediction of wind time series, using data collected in Cameroon. The number of possible dispositions with four input variables is determined, i.e., twenty-four. Each disposition is used to perform the prediction, the main criteria being the training and prediction performances. The results obtained from a static and a dynamic neural network architecture show that these performances are a function of the input variable disposition, and in a different way for the two architectures. This analysis revealed that the input variable disposition must be taken into account to develop a more optimal neural network model. Thus, a new neural network training algorithm is proposed by introducing the search for the optimal input variable disposition into the traditional back-propagation algorithm. The results of applying this new optimization approach to the two single neural network architectures are compared step by step with the previously obtained results. Moreover, the proposed approach is validated in a collaborative optimization method with a single-objective optimization technique, i.e., genetic algorithm back-propagation neural networks. From these comparisons, it is concluded that each proposed model outperforms its traditional counterpart in terms of training and prediction performance, and that the proposed optimization approach can therefore be useful in improving the accuracy of machine learning time series prediction.
Keywords: input variable disposition, machine learning, optimization, performance, time series prediction
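The enumeration step above is small enough to do exhaustively: with four inputs there are 4! = 24 dispositions. The sketch below iterates over all of them; the scoring function and its weights are a hypothetical stand-in for the training-and-prediction performance the paper measures for each ordering.

```python
from itertools import permutations

# With four input variables there are 4! = 24 possible dispositions (orderings).
variables = ["x1", "x2", "x3", "x4"]
dispositions = list(permutations(variables))

# Hypothetical stand-in for the paper's measured performance of each ordering;
# in the paper this score would come from training the network with that
# disposition and evaluating its prediction error. Lower is better here.
weight = {"x1": 4, "x2": 1, "x3": 3, "x4": 2}

def score(disposition):
    return sum((pos + 1) * weight[var] for pos, var in enumerate(disposition))

# Exhaustive search over all dispositions, as the paper does for four inputs.
best = min(dispositions, key=score)
```

For four inputs exhaustive search is cheap; for many inputs the factorial growth is what motivates folding the disposition search into the training algorithm itself, as the abstract proposes.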
Procedia PDF Downloads 109
17618 A Multi-Release Software Reliability Growth Models Incorporating Imperfect Debugging and Change-Point under the Simulated Testing Environment and Software Release Time
Authors: Sujit Kumar Pradhan, Anil Kumar, Vijay Kumar
Abstract:
The testing process during software development is a crucial step, as it makes the software more efficient and dependable. To estimate a software's reliability through the mean value function, many software reliability growth models (SRGMs) were developed under the assumption that the operating and testing environments are the same. In practice this is not true, because the reliability of the software differs when it works in its natural field environment. This article discusses an SRGM comprising change-points and imperfect debugging in a simulated testing environment, and then extends it to a multi-release setting. Initially, software is released to the market with few features; according to market demand, the software company upgrades the current version by adding new features as time passes. We therefore propose a generalized multi-release SRGM in which the change-point and imperfect debugging concepts are addressed in a simulated testing environment. The failure-increasing-rate concept is adopted to determine the change-point for each software release. Based on nine goodness-of-fit criteria, the proposed model is validated on two real datasets, and the results demonstrate that the proposed model fits the datasets better. We also discuss the optimal release time of the software through a cost model, assuming that the testing and debugging costs are time-dependent.
Keywords: software reliability growth models, non-homogeneous Poisson process, multi-release software, mean value function, change-point, environmental factors
Procedia PDF Downloads 74
17617 Spatial Econometric Approaches for Count Data: An Overview and New Directions
Authors: Paula Simões, Isabel Natário
Abstract:
This paper reviews a number of theoretical aspects of implementing an explicit spatial perspective in econometrics for modelling non-continuous data in general, and count data in particular. It provides an overview of the several spatial econometric approaches available for modelling data collected with reference to location in space, from the classical spatial econometric approaches to the recent developments on spatial econometrics for count data in a Bayesian hierarchical setting. Considerable attention is paid to the inferential framework necessary for structurally consistent spatial econometric count models incorporating spatial lag autocorrelation, to the corresponding estimation and testing procedures under different assumptions, and to the constraints and implications embedded in the various specifications in the literature. This review combines insights from the classical spatial econometrics literature as well as from the hierarchical modelling and analysis of spatial data, in order to look for possible new directions in the processing of count data in a spatial hierarchical Bayesian econometric context.
Keywords: spatial data analysis, spatial econometrics, Bayesian hierarchical models, count data
Procedia PDF Downloads 593
17616 A Multi-Criteria Model for Scheduling of Stochastic Single Machine Problem with Outsourcing and Solving It through Application of Chance Constrained
Authors: Homa Ghave, Parmis Shahmaleki
Abstract:
This paper presents a new multi-criteria stochastic mathematical model for single machine scheduling with outsourcing allowed. Multiple jobs are processed in batches; for each batch, a job may be outsourced entirely or in part. The jobs have stochastic processing times and lead times, have deterministic due dates, and arrive randomly. Because of the stochastic nature of the processing and lead times, we use chance-constrained programming to model the problem. The problem is first formulated as a stochastic program and then converted into a deterministic mixed-integer linear program. The model's objectives are to minimize the maximum tardiness and the outsourcing cost simultaneously. Several procedures have been developed to deal with multi-criteria problems; in this paper, we utilize the concept of satisfaction functions to incorporate the manager's preferences. The proposed approach is tested on instances where the random variables are normally distributed.
Keywords: single machine scheduling, multi-criteria mathematical model, outsourcing strategy, uncertain lead times and processing times, chance constrained programming, satisfaction function
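The chance-constrained conversion mentioned above has a standard deterministic equivalent under normality, sketched below with invented figures: requiring a constraint to hold with probability beta replaces the random quantity by its mean plus the beta-quantile of its deviation.

```python
from statistics import NormalDist

# Chance-constrained conversion for a normally distributed completion time C:
# require P(C <= d) >= beta. With C ~ N(mu, sigma^2), this is equivalent to
# the deterministic constraint  mu + z_beta * sigma <= d,
# where z_beta is the standard normal quantile. All figures are illustrative.
mu, sigma = 40.0, 5.0   # mean and std. dev. of the completion time (hours)
beta = 0.95             # required probability (service level)
z = NormalDist().inv_cdf(beta)

# Earliest due date d for which the chance constraint is satisfied.
deterministic_bound = mu + z * sigma
```

This substitution is what turns the stochastic formulation into the deterministic mixed-integer linear program the abstract describes, at the price of adding the quantile term to each converted constraint.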
Procedia PDF Downloads 264
17615 The Effect of Extremely Low Frequency Magnetic Field on Rats Brain
Authors: Omar Abdalla, Abdelfatah Ahmed, Ahmed Mustafa, Abdelazem Eldouma
Abstract:
The purpose of this study is to evaluate the effect of an extremely low frequency magnetic field (ELF-MF) on the brains of Wistar rats. Twenty-five rats were used, divided into five groups of five rats each, as follows. Group 1: the control group, not exposed to the energized field; Group 2: rats exposed to a magnetic field with an intensity of 0.6 mT (2 hours/day); Group 3: rats exposed to a magnetic field of 1.2 mT (2 hours/day); Group 4: rats exposed to a magnetic field of 1.8 mT (2 hours/day); Group 5: rats exposed to a magnetic field of 2.4 mT (2 hours/day). All groups were exposed for seven days. A maze was designed, and the average time for the rats to reach the decoy under specified conditions was calculated. The average times before exposure were G2 = 330 s, G3 = 172 s, G4 = 500 s and G5 = 174 s, respectively. After exposing the groups to the ELF-MF, the measured times were G2 = 465 s, G3 = 388 s, G4 = 501 s and G5 = 442 s. The average time was observed to increase directly with field strength. Histological samples of the frontal lobe of the brain were taken for all groups, showing lesions, atrophy, empty vacuoles and a disordered choroid plexus. Finally, the histological results showed that the choroid plexus disorder and Alzheimer's-like symptoms increased as the magnetic field increased.
Keywords: nonionizing radiation, biophysics, magnetic field, shrinkage
Procedia PDF Downloads 545
17614 Effect of Preloading on Long-Term Settlement of Closed Landfills: A Numerical Analysis
Authors: Mehrnaz Alibeikloo, Hajar Share Isfahani, Hadi Khabbaz
Abstract:
In recent years, with developing cities and increasing populations, reconstruction on closed landfill sites has become unavoidable in some regions. Long-term settlement is one of the major concerns associated with reconstruction on landfills after closure. The purpose of this research is to evaluate the effect of preloading, in various patterns of height and time, on the long-term settlement of closed landfills. In this regard, five surcharge scenarios, from 1 to 3 m high with 3, 4.5 and 6 months of preloading time, have been modeled using the PLAXIS 2D software. Moreover, the numerical results have been compared to those obtained from analytical methods, and good agreement has been achieved. The findings indicate a linear relationship between settlement and surcharge height. Although long-term settlement decreased with longer and higher preloading, the preloading time was found to be a more effective factor than the preloading height.
Keywords: preloading, long-term settlement, landfill, PLAXIS 2D
Procedia PDF Downloads 195
17613 Noise Mitigation Techniques to Minimize Electromagnetic Interference/Electrostatic Discharge Effects for the Lunar Mission Spacecraft
Authors: Vabya Kumar Pandit, Mudit Mittal, N. Prahlad Rao, Ramnath Babu
Abstract:
TeamIndus is the only Indian team competing for the Google Lunar XPRIZE (GLXP), a global competition challenging private entities to soft-land a rover on the moon, travel a minimum of 500 meters and transmit high-definition images and videos to Earth. Towards this goal, the TeamIndus strategy is to design and develop a lunar lander that will deliver a rover onto the surface of the moon to accomplish the GLXP mission objectives. This paper showcases the various system-level noise control techniques adopted by the Electrical Distribution System (EDS) to achieve the required electromagnetic compatibility (EMC) of the spacecraft. The design guidelines followed to control electromagnetic interference through proper electronic package design, grounding, shielding, filtering, and cable routing within the stipulated mass budget are explained. The paper also deals with the challenges of achieving electromagnetic cleanliness in the presence of various commercial off-the-shelf (COTS) and in-house developed components. The methods of minimizing electrostatic discharge (ESD) by identifying the potential noise sources and the areas susceptible to charge accumulation, and the methodology to prevent arcing inside the spacecraft, are explained. The paper then provides the EMC requirements matrix derived from the mission requirements to meet the overall electromagnetic compatibility of the spacecraft.
Keywords: electromagnetic compatibility, electrostatic discharge, electrical distribution systems, grounding schemes, lightweight harnessing
Procedia PDF Downloads 293
17612 Kemmer Oscillator in Cosmic String Background
Authors: N. Messai, A. Boumali
Abstract:
In this work, we aim to solve the two-dimensional Kemmer equation, including a Dirac oscillator interaction term, in the background space-time generated by a cosmic string subjected to a uniform magnetic field. The eigenfunctions and eigenvalues of the problem have been found, and the influence of the cosmic string space-time on the energy spectrum has been analyzed.
Keywords: Kemmer oscillator, cosmic string, Dirac oscillator, eigenfunctions
Procedia PDF Downloads 584
17611 Effects of Boiling Temperature and Time on Colour, Texture and Sensory Properties of Volutharpa ampullacea perryi Meat
Authors: Xianbao Sun, Jinlong Zhao, Shudong He, Jing Li
Abstract:
Volutharpa ampullacea perryi is a high-protein marine shellfish. However, few data are available on the effects of boiling temperature and time on the quality of its meat. In this study, the colour, texture and sensory characteristics of Volutharpa ampullacea perryi meat during boiling (75-100 °C, 5-60 min) were investigated by colour analysis, texture profile analysis (TPA), scanning electron microscopy (SEM) and sensory evaluation. The cooking loss gradually increased with increasing temperature and time. The colour of the meat became lighter and more yellow from 85 °C to 95 °C at short times (5-20 min), but it became brown after a 30-min treatment. TPA results showed that the meat was firmer and less cohesive after treatment at higher temperatures (95-100 °C), even for a short period (5-15 min). SEM analysis readily showed that the myofibril structure was destroyed at higher temperatures (85-100 °C). Sensory data revealed that meat cooked at 85-90 °C for 10-20 min scored higher in overall acceptance, as well as in colour, hardness and taste. Based on these results, it can be concluded that Volutharpa ampullacea perryi meat should be boiled under suitable conditions (such as 85 °C for 15 min or 90 °C for 10 min) to ensure better acceptability.
Keywords: Volutharpa ampullacea perryi meat, boiling cooking, colour, sensory, texture
Procedia PDF Downloads 281
17610 Improving 99mTc-tetrofosmin Myocardial Perfusion Images by Time Subtraction Technique
Authors: Yasuyuki Takahashi, Hayato Ishimura, Masao Miyagawa, Teruhito Mochizuki
Abstract:
Quantitative measurement of myocardial perfusion is possible with single photon emission computed tomography (SPECT) using a semiconductor detector. However, accumulation of 99mTc-tetrofosmin in the liver may make accurate assessment of the inferior myocardium difficult. Our idea is to reduce the high accumulation in the liver by using dynamic SPECT imaging and a technique called time subtraction. We evaluated the performance of a new SPECT system with a cadmium-zinc-telluride solid-state semiconductor detector (Discovery NM 530c; GE Healthcare). Our system acquired list-mode raw data over 10 minutes for a typical patient, and from these data ten SPECT images were reconstructed, one for each minute of acquisition. Reconstruction with the semiconductor detector was based on an implementation of a 3-D iterative Bayesian reconstruction algorithm. We studied 20 patients with coronary artery disease (mean age 75.4 ± 12.1 years; range 42-86; 16 males and 4 females); in each subject, 259 MBq of 99mTc-tetrofosmin was injected intravenously. We performed both a phantom and a clinical study using dynamic SPECT. An approximation to a liver-only image is obtained by reconstructing an image from the early projections, during which the liver accumulation dominates (0.5-2.5 minute SPECT image minus 5-10 minute SPECT image). The extracted liver-only image is then subtracted from a later SPECT image that shows both the liver and the myocardial uptake (5-10 minute SPECT image minus liver-only image). The time subtraction of the liver was possible in both the phantom and the clinical study, and visualization of the inferior myocardium was improved. In past reports, increased myocardial uptake could not be diagnosed where the liver overlapped it. Using our time subtraction method, the image quality of the 99mTc-tetrofosmin myocardial SPECT image is considerably improved.
Keywords: 99mTc-tetrofosmin, dynamic SPECT, time subtraction, semiconductor detector
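The two subtraction steps described above are simple image arithmetic; the sketch below applies them to toy one-dimensional count profiles (illustrative values, not patient data), with clipping at zero to avoid negative counts.

```python
import numpy as np

# Toy count profiles across liver and inferior myocardium for the early
# (0.5-2.5 min) and late (5-10 min) frames. Values are invented.
early = np.array([2.0, 12.0, 11.0, 2.0, 1.0])  # liver uptake dominates
late  = np.array([2.0,  8.0,  7.0, 6.0, 5.0])  # liver washes out, myocardium fills in

# Step 1: liver-only estimate = early frame minus late frame (clipped at zero).
liver_only = np.clip(early - late, 0.0, None)

# Step 2: corrected myocardial image = late frame minus the liver-only estimate.
corrected = np.clip(late - liver_only, 0.0, None)
```

In the corrected profile the inferior-wall counts (the last elements here) are no longer masked by the liver excess carried in the early frame.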
Procedia PDF Downloads 335
17609 Two Efficient Heuristic Algorithms for the Integrated Production Planning and Warehouse Layout Problem
Authors: Mohammad Pourmohammadi Fallah, Maziar Salahi
Abstract:
In the literature, a mixed-integer linear programming model has been proposed for the integrated production planning and warehouse layout problem. To solve the model, the authors proposed a Lagrangian relax-and-fix heuristic that takes a significant amount of time and stops with gaps above 5% for large-scale instances. Here, we present two heuristic algorithms to solve the problem. In the first, we use a greedy approach: warehouse locations with lower reservation costs and lower transportation costs, both from the production area to the locations and from the locations to the output point, are allocated to the items with higher demands, and then a smaller model is solved. In the second heuristic, we first sort the items in descending order of the fraction whose numerator is the sum of the demands for that item over the time horizon plus the maximum demand for that item over the time horizon, and whose denominator is the sum of all its demands over the time horizon. We then categorize the sorted items into groups of 3, 4, or 5 and solve a small-scale optimization problem for each group, hoping to improve the solution of the first heuristic. Our preliminary numerical results show the effectiveness of the proposed heuristics.
Keywords: capacitated lot-sizing, warehouse layout, mixed-integer linear programming, heuristic algorithms
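The greedy allocation in the first heuristic can be sketched as a pairing of demand-sorted items with cost-sorted locations; the item demands and location costs below are invented, and a real implementation would then fix these assignments and solve the reduced model.

```python
# Greedy sketch of the first heuristic: give the cheapest warehouse locations
# (reservation cost + transport cost production->location->output) to the
# items with the highest total demand. All figures are illustrative.
items = {"A": 120, "B": 300, "C": 75}   # item -> total demand over the horizon
locations = {                           # location -> combined cost
    "L1": 5.0,
    "L2": 2.5,
    "L3": 4.0,
}

by_demand = sorted(items, key=items.get, reverse=True)   # highest demand first
by_cost = sorted(locations, key=locations.get)           # cheapest location first
assignment = dict(zip(by_demand, by_cost))
```

Fixing the location variables this way shrinks the remaining mixed-integer program, which is the point of the heuristic: the exact solver only has to handle the production-planning side.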
Procedia PDF Downloads 196
17608 Quick Covering Machine for Grain Drying Pavement
Authors: Fatima S. Rodriguez, Victorino T. Taylan, Manolito C. Bulaong, Helen F. Gavino, Vitaliana U. Malamug
Abstract:
In sun drying, the quality of the grains is greatly reduced when paddy grains are caught by the rain unsacked and unstored, resulting in reduced profit. The objectives of this study were to design and fabricate a quick covering machine for a grain drying pavement, and to test and evaluate the operating characteristics of the machine in terms of deployment speed, recovery speed, deployment time, recovery time, power consumption, and aesthetics of the laminated sack, as well as to conduct a partial budget and cost curve analysis. The machine was able to cover the grains on a 12.8 m x 22.5 m grain drying pavement in an average time of 17.13 s, and it consumed 0.53 W-h for the deployment and recovery of the cover. The machine entailed an investment cost of $1,344.40 and an annual cost charge of $647.32; moreover, the savings per year from using the quick covering machine were $101.83.
Keywords: quick covering machine, grain, drying pavement
Procedia PDF Downloads 373
17607 Combination between Intrusion Systems and Honeypots
Authors: Majed Sanan, Mohammad Rammal, Wassim Rammal
Abstract:
Today, security is a major concern. Intrusion detection systems, intrusion prevention systems and honeypots can be used to moderate attacks. Many researchers have proposed IDSs (intrusion detection systems) over time; some of these combine the features of two or more IDSs and are called hybrid intrusion detection systems. Most researchers combine the features of signature-based and anomaly-based detection methodologies. For a signature-based IDS, if an attacker attacks slowly and in an organized way, the attack may go undetected, since signatures include factors based on the duration of events and the attacker's actions do not match them. Sometimes there is no updated signature for an unknown attack, or the attacker strikes while the database is being updated; thus, signature-based IDSs fail to detect unknown attacks. Anomaly-based IDSs, in turn, suffer from many false-positive readings. So there is a need to hybridize IDSs so that they can overcome each other's shortcomings. In this paper, we propose a new approach to intrusion detection that is more efficient than a traditional IDS. It is based on honeypot technology and anomaly-based detection methodology. We designed the architecture of the IDS in Packet Tracer and then implemented it in real time. The experimental results show that both the honeypot and the anomaly-based IDS have shortcomings on their own, but when the two technologies are hybridized, the proposed hybrid intrusion detection system (HIDS) overcomes these shortcomings with much-enhanced performance. In this paper, we present a modified HIDS that combines the positive features of two different detection methodologies: honeypot methodology and anomaly-based intrusion detection.
In the experiment, we first ran each intrusion detection system individually and then ran them together, recording data periodically. From the data, we conclude that the resulting IDS is much better at detecting intrusions than the existing IDSs.
Keywords: security, intrusion detection, intrusion prevention, honeypot, anomaly-based detection, signature-based detection, cloud computing, kfsensor
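A minimal sketch of the anomaly-based half of such a hybrid system is a threshold on deviation from a learned traffic baseline; the abstract does not specify its detector, so the z-score rule and the traffic figures below are assumptions for illustration only.

```python
from statistics import mean, stdev

# Baseline traffic rates (packets/sec) observed during normal operation;
# values are invented. A real system would learn this profile continuously.
baseline = [100, 98, 105, 102, 97, 101, 99, 103]
mu, sigma = mean(baseline), stdev(baseline)

def is_anomalous(rate, k=3.0):
    """Flag an observation more than k standard deviations from the baseline."""
    return abs(rate - mu) > k * sigma

alert = is_anomalous(250)    # a sudden traffic flood trips the detector
normal = is_anomalous(100)   # ordinary traffic does not
```

In the hybrid design, alerts like this would be cross-checked against honeypot activity: traffic that also touches the honeypot (which no legitimate user should contact) is far less likely to be a false positive.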
Procedia PDF Downloads 382
17606 A Time-Varying and Non-Stationary Convolution Spectral Mixture Kernel for Gaussian Process
Authors: Kai Chen, Shuguang Cui, Feng Yin
Abstract:
A Gaussian process (GP) with a spectral mixture (SM) kernel demonstrates flexible non-parametric Bayesian learning ability in modeling unknown functions. In this work, a novel time-varying and non-stationary convolution spectral mixture (TN-CSM) kernel, whose interpretability is significantly enhanced by the use of process convolution, is introduced. We outline a way of decomposing the SM component into an auto-convolution of a base SM component and parameterizing it to be input-dependent. Performing a convolution between two base SM components then smoothly yields a novel non-stationary SM component structure with a much more general expression and interpretation. The TN-CSM remains fully compatible with the stationary SM kernel in terms of kernel form and spectral basis, a property ignored or confused by previous non-stationary kernels. On synthetic and real-world datasets, experiments show the time-varying characteristics of the hyper-parameters in the TN-CSM and compare its learning performance with popular and representative non-stationary GPs.
Keywords: Gaussian process, spectral mixture, non-stationary, convolution
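For reference, the stationary single-component SM kernel that the TN-CSM construction builds on, in its standard one-dimensional form with weight w, mean frequency mu and frequency variance v, is sketched below; the parameter values are illustrative.

```python
import math

# Stationary spectral mixture kernel, one component, one dimension:
#   k(tau) = w * exp(-2 * pi^2 * tau^2 * v) * cos(2 * pi * mu * tau)
# where tau = x - x' is the input lag. In the TN-CSM construction, the base
# component is made input-dependent and convolved with itself, which this
# stationary sketch does not reproduce.
def sm_kernel(tau, w=1.0, mu=0.5, v=0.05):
    return w * math.exp(-2.0 * math.pi ** 2 * tau ** 2 * v) * math.cos(2.0 * math.pi * mu * tau)

k0 = sm_kernel(0.0)   # at zero lag the kernel equals the weight w
```

The Gaussian envelope controls how quickly correlations decay with lag, and the cosine encodes the dominant frequency; mixing several such components lets the GP model quasi-periodic structure, which is the expressiveness the TN-CSM extends to non-stationary settings.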
Procedia PDF Downloads 196
17605 Analysis of Silicon Controlled Rectifier-Based Electrostatic Discharge Protection Circuits with Electrical Characteristics for the 5V Power Clamp
Authors: Jun-Geol Park, Kyoung-Il Do, Min-Ju Kwon, Kyung-Hyun Park, Yong-Seo Koo
Abstract:
This paper analyzes SCR (silicon controlled rectifier)-based ESD (electrostatic discharge) protection circuits in terms of their turn-on time characteristics. The structures are the LVTSCR (low voltage triggered SCR), the ZTSCR (zener triggered SCR) and the PTSCR (P-substrate triggered SCR), all three intended for the 5 V power clamp. In general, structures with a low trigger voltage can have faster turn-on characteristics than other structures. All the ESD protection circuits achieve a low trigger voltage: the LVTSCR by using the N+ bridge region, the ZTSCR by using a zener diode structure, and the PTSCR by increasing the trigger current. The simulation comparing the turn-on times was conducted with the Synopsys TCAD simulator. In the simulation results, the LVTSCR has a turn-on time of 2.8 ns, the ZTSCR of 2.1 ns and the PTSCR of 2.4 ns. The HBM simulation results, however, show that the PTSCR, at 430 K under the HBM 8 kV standard, is a more robust structure than the LVTSCR at 450 K and the ZTSCR at 495 K. Therefore, the PTSCR is the most effective ESD protection circuit for the 5 V power clamp.
Keywords: ESD, SCR, turn-on time, trigger voltage, power clamp
Procedia PDF Downloads 348
17604 Redefining Solar Generation Estimation: A Comprehensive Analysis of Real Utility Advanced Metering Infrastructure (AMI) Data from Various Projects in New York
Authors: Haowei Lu, Anaya Aaron
Abstract:
Understanding historical solar generation and forecasting future solar generation from interconnected Distributed Energy Resources (DER) is crucial for utility planning and interconnection studies. The existing methodology, which relies on solar radiation, weather data, and common inverter models, is becoming less accurate. Rapid advancements in DER technologies have resulted in more diverse project sites that deviate from common patterns due to factors such as the DC/AC ratio, solar panel performance, tilt angle, and the presence of DC-coupled battery energy storage systems. In this paper, the authors review 10,000 DER projects within the system and analyze Advanced Metering Infrastructure (AMI) data for various project types to demonstrate the impact of these parameters. An updated methodology is proposed for redefining historical and future solar generation in distribution feeders.
Keywords: photovoltaic system, solar energy, fluctuations, energy storage, uncertainty
Procedia PDF Downloads 32
17603 Assessment of Neurodevelopmental Needs in Duchenne Muscular Dystrophy
Authors: Mathula Thangarajh
Abstract:
Duchenne muscular dystrophy (DMD) is a severe form of X-linked muscular dystrophy caused by mutations in the dystrophin gene, resulting in progressive skeletal muscle weakness. Boys with DMD also have significant cognitive disabilities. The intelligence quotient of boys with DMD is approximately one standard deviation below that of their peers. Detailed neuropsychological testing has demonstrated that boys with DMD have a global developmental impairment, with verbal memory and visuospatial skills most significantly affected. Furthermore, total brain volume and gray matter volume are lower in children with DMD than in age-matched controls. These results are suggestive of a significant structural and functional compromise to the developing brain as a result of absent dystrophin protein expression. There is also some genetic evidence to suggest that mutations in the 3’ end of the DMD gene are associated with more severe neurocognitive problems. Our working hypothesis is that (i) boys with DMD do not make gains in neurodevelopmental skills compared to typically developing children and (ii) women carriers of DMD mutations may have subclinical cognitive deficits. We also hypothesize that there may be an intergenerational vulnerability of cognition, with boys of DMD-carrier mothers being more affected cognitively than boys of non-DMD-carrier mothers. The objectives of this study are to: 1. assess neurodevelopment in boys with DMD at four time points and perform a baseline neuroradiological assessment; 2. assess cognition in biological mothers of DMD participants at baseline; 3. assess possible correlations between DMD mutations and cognitive measures. This study also explores functional brain abnormalities in people with DMD by examining how regional and global connectivity of the brain underlies executive function deficits in DMD.
Such research can contribute to a better holistic understanding of the cognitive alterations due to DMD and could potentially allow clinicians to create better-tailored treatment plans for the DMD population. There are four study visits for each participant (baseline, 2-4 weeks, 1 year, 18 months). At each visit, the participant completes the NIH Toolbox Cognition Battery, a validated psychometric measure recommended by the NIH Common Data Elements for use in DMD. Visits 1, 3, and 4 also involve the administration of the BRIEF-2, ABAS-3, PROMIS/NeuroQoL, PedsQL Neuromuscular Module 3.0, the Draw a Clock Test, and an optional fMRI scan with the N-back matching task. We expect to enroll 52 children with DMD, 52 mothers of children with DMD, and 30 healthy control boys. This study began in 2020, during the height of the COVID-19 pandemic; travel restrictions caused subsequent delays in recruitment. However, we have persevered and continued to recruit new participants. We partnered with the Muscular Dystrophy Association (MDA) to help advertise the study to interested families. Since then, families from across the country have contacted us about their interest in the study. We plan to continue to enroll a diverse population of DMD participants to contribute toward a better understanding of Duchenne muscular dystrophy.
Keywords: neurology, Duchenne muscular dystrophy, muscular dystrophy, cognition, neurodevelopment, x-linked disorder, DMD, DMD gene
Procedia PDF Downloads 99
17602 Mapping the Pain Trajectory of Breast Cancer Survivors: Results from a Retrospective Chart Review
Authors: Wilfred Elliam
Abstract:
Background: Pain is a prevalent and debilitating symptom among breast cancer patients, impacting their quality of life and overall well-being. The experience of pain in this population is multifaceted, influenced by a combination of disease-related factors, treatment side effects, and individual characteristics. Despite advancements in cancer treatment and pain management, many breast cancer patients continue to suffer from chronic pain, which can persist long after the completion of treatment. Understanding the progression of pain in breast cancer patients over time and identifying its correlates is crucial for effective pain management and supportive care strategies. The purpose of this research is to understand the patterns and progression of pain experienced by breast cancer survivors over time. Methods: Data were collected from breast cancer patients at Hartford Hospital at four time points: baseline, 3, 6, and 12 weeks. Key variables measured include pain, body mass index (BMI), fatigue, musculoskeletal pain, sleep disturbance, and demographic variables (age, employment status, cancer stage, and ethnicity). Binomial generalized linear mixed models were used to examine changes in pain and symptoms over time. Results: A total of 100 breast cancer patients aged 18 years or older were included in the analysis. We found that the effects of time on pain (p = 0.024), musculoskeletal pain (p < 0.001), fatigue (p < 0.001), and sleep disturbance (p = 0.013) were statistically significant, indicating pain progression in breast cancer patients. Patients using aromatase inhibitors had worse fatigue (p < 0.05) and musculoskeletal pain (p < 0.001) than patients on tamoxifen. Obese (p < 0.001) and overweight (p < 0.001) patients were more likely to report pain than patients of normal weight. Conclusion: This study revealed the complex interplay between factors such as time, pain, and sleep disturbance in breast cancer patients.
Specifically, pain, musculoskeletal pain, sleep disturbance, and fatigue exhibited significant changes across the measured time points, indicating a dynamic pain progression in these patients. The findings provide a foundation for future research and targeted interventions aimed at improving pain outcomes in breast cancer patients.
Keywords: breast cancer, chronic pain, pain management, quality of life
Procedia PDF Downloads 31
17601 Broadband Ultrasonic and Rheological Characterization of Liquids Using Longitudinal Waves
Authors: M. Abderrahmane Mograne, Didier Laux, Jean-Yves Ferrandis
Abstract:
Rheological characterization of complex liquids like polymer solutions is of great scientific interest to researchers in many fields, such as biology, the food industry, and chemistry. In order to establish master curves (elastic moduli vs. frequency) which can give information about microstructure, classical rheometers or viscometers (such as Couette systems) are used. For broadband characterization of a sample, the temperature is varied over a very wide range, leading to equivalent frequency shifts via the Time-Temperature Superposition principle. For many liquids undergoing phase transitions, this approach is not applicable. That is why the development of broadband spectroscopic methods around room temperature has become a major concern. In the literature, many solutions have been proposed but, to our knowledge, there is no experimental bench giving the whole rheological characterization from a few Hz to many MHz. Consequently, our goal is to investigate rheological properties nondestructively over a very broad frequency band (a few Hz to hundreds of MHz) using longitudinal ultrasonic waves (L waves), a single experimental bench, and a specific container for the liquid: a test tube. More specifically, we aim to estimate the three viscosities (longitudinal, shear, and bulk) and the complex elastic moduli M*, G*, and K* (respectively the longitudinal, shear, and bulk moduli). We have decided to use only L waves conditioned in two ways: bulk L waves in the liquid or guided L waves in the test tube walls. In this paper, we will first present results for very low frequencies using the ultrasonic tracking of a ball falling in the test tube. This will lead to the estimation of shear viscosities from a few mPa.s to a few Pa.s. Corrections due to the small dimensions of the tube will be applied and discussed with regard to the size of the falling ball.
Then, bulk L-wave propagation in the liquid, together with the development of specific signal processing to assess longitudinal velocity and attenuation, will lead to the evaluation of the longitudinal viscosity in the MHz frequency range. Finally, first results concerning the generation, propagation, and processing of guided compressional waves in the test tube walls will be discussed. All these approaches and results will be compared to standard methods available and already validated in our lab.
Keywords: nondestructive measurement for liquid, piezoelectric transducer, ultrasonic longitudinal waves, viscosities
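The falling-ball stage described above is classically analyzed with Stokes' law, corrected for the finite tube diameter. A minimal sketch, assuming a Ladenburg-type wall correction and hypothetical ball/liquid parameters (not the authors' actual setup or correction):

```python
G = 9.81  # gravitational acceleration, m/s^2

def shear_viscosity(ball_radius, rho_ball, rho_liquid, v_measured, tube_diameter):
    """Estimate shear viscosity (Pa.s) from the terminal velocity of a falling ball.

    Stokes' law, eta = 2*r^2*(rho_s - rho_f)*g / (9*v), with the measured velocity
    first rescaled to its infinite-medium value via the Ladenburg wall correction
    for a cylindrical tube: v_inf = v_measured * (1 + 2.104 * d/D)."""
    d = 2 * ball_radius
    v_infinite = v_measured * (1 + 2.104 * d / tube_diameter)
    return 2 * ball_radius**2 * (rho_ball - rho_liquid) * G / (9 * v_infinite)

# Example: a 1 mm steel ball falling at 5 cm/s in a 16 mm test tube of oil.
eta = shear_viscosity(0.5e-3, 7800.0, 900.0, 0.05, 16e-3)
```

The wall correction matters precisely because the container is a narrow test tube: ignoring it (a very wide tube) would overestimate the viscosity.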
Procedia PDF Downloads 265
17600 A Consumption-Based Hybrid Life Cycle Assessment of Carbon Footprints in California: High Footprints in Small Urban Households
Authors: Jukka Heinonen
Abstract:
Higher density reduces travel distances and private car dependency, and thus greenhouse gas (GHG) emissions. As a result, increased density has been given a central role among urban development targets. However, it is not just travel behavior that changes with density. Rather, consumption patterns, or overall lifestyles, change along with the urban structure, particularly with changing housing types and consumption opportunities. Furthermore, elevated consumption of services, more frequent flying, and less intra-household sharing have been shown to potentially outweigh the gains from reduced driving in denser urban settlements. In this study, the geography of carbon footprints (CFs) in California is analyzed, paying close attention to household size differences and the resulting economies-of-scale advantages and disadvantages. A hybrid life cycle assessment (LCA) framework is employed together with consumer expenditure data to assess the CFs. According to the study, small urban households have the highest CFs in California. Their transport-related emissions are significantly lower than those of residents of less urbanized areas, but higher emissions from other consumption categories, together with a low degree of sharing of goods, outweigh the gains. Two functional units, per capita and per household, are used to analyze the CFs and to demonstrate the importance of household size. The lifestyle impacts visible through the consumption data are also discussed. The study suggests that there are still significant gaps in our understanding of the premises of low-carbon human settlements.
Keywords: carbon footprint, life cycle assessment, lifestyle, household size, consumption, economies-of-scale
Procedia PDF Downloads 355
17599 Solving Directional Overcurrent Relay Coordination Problem Using Artificial Bees Colony
Authors: M. H. Hussain, I. Musirin, A. F. Abidin, S. R. A. Rahim
Abstract:
This paper presents the implementation of the Artificial Bees Colony (ABC) algorithm in solving the Directional OverCurrent Relays (DOCRs) coordination problem for near-end faults occurring in a fixed network topology. The coordination optimization of DOCRs is formulated as a linear programming (LP) problem. The objective function minimizes the operating time of the associated relays, which depends on the time multiplier setting. A conventional technique is taken for comparison purposes in order to highlight the superiority of the proposed approach. The proposed algorithm has been tested successfully on an 8-bus test system. The simulation results demonstrated that the ABC algorithm, which has been proved to have good search ability, is capable of dealing with constrained optimization problems.
Keywords: artificial bees colony, directional overcurrent relay coordination problem, relay settings, time multiplier setting
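Because the relay operating time is linear in the time multiplier setting (TMS) for a fixed pickup current, the coordination problem can indeed be cast as an LP. A minimal sketch of the underlying relay model, assuming the widely used IEC standard-inverse curve and hypothetical settings (not the paper's 8-bus data):

```python
def iec_si_time(tms, fault_current, pickup_current):
    """IEC standard-inverse operating time: t = TMS * 0.14 / (M**0.02 - 1),
    where M is the fault current expressed as a multiple of the pickup current."""
    m = fault_current / pickup_current
    return tms * 0.14 / (m**0.02 - 1)

def coordinated(t_backup, t_primary, cti=0.3):
    """Coordination constraint: the backup relay must operate at least
    one coordination time interval (CTI, in seconds) after the primary."""
    return t_backup - t_primary >= cti

# Hypothetical near-end fault seen by a primary relay and its backup.
t_p = iec_si_time(tms=0.05, fault_current=4000.0, pickup_current=500.0)
t_b = iec_si_time(tms=0.20, fault_current=4000.0, pickup_current=500.0)
```

For a fixed fault and pickup, the time scales exactly with TMS (here t_b = 4 × t_p), which is what makes the objective and the CTI constraints linear in the TMS decision variables.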
Procedia PDF Downloads 330
17598 Increase Productivity by Using Work Measurement Technique
Authors: Mohammed Al Awadh
Abstract:
In order for businesses to take advantage of the opportunities for expanded production and trade that have arisen as a result of globalization and increased levels of competition, productivity growth is required. The number of available resources is decreasing with each passing day, while demand keeps increasing. In response, firms face growing pressure to improve the efficiency with which they utilise their resources. As scientific methods, work and time study techniques have been employed in all manufacturing and service industries to raise the efficiency with which the factors of production are used. The goal of this research is to improve the productivity of a manufacturing industry's production system by applying work measurement. The work cycles were broken down into more manageable and quantifiable elements, which were noted down on the observation sheet. The operation was properly analysed in order to identify value-added and non-value-added components, and observations were recorded for each of the different trials.
Keywords: time study, work measurement, work study, efficiency
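The element times noted on the observation sheet are typically converted into a standard time using a performance rating and an allowance factor. A minimal sketch of those standard work measurement formulas with hypothetical values (the study's actual ratings and allowances are not given):

```python
def normal_time(observed_time, rating):
    """Normal time = observed time * performance rating (1.0 = standard pace)."""
    return observed_time * rating

def standard_time(observed_time, rating, allowance):
    """Standard time = normal time * (1 + allowance fraction for rest, delays, etc.)."""
    return normal_time(observed_time, rating) * (1 + allowance)

# Element observed at 0.50 min, operator rated at 110%, with a 15% allowance.
st = standard_time(0.50, 1.10, 0.15)
```

Summing the standard times of the value-added elements, and comparing against the total cycle time, is what quantifies the productivity gain from removing non-value-added work.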
Procedia PDF Downloads 69
17597 Cascaded Neural Network for Internal Temperature Forecasting in Induction Motor
Authors: Hidir S. Nogay
Abstract:
In this study, two systems were created to predict the internal temperature of an induction motor. The first consisted of a simple ANN model with two layers, ten input parameters, and one output parameter. The second consisted of eight ANN models connected to each other in cascade; this cascaded ANN system has 17 inputs. The main reason for using a cascaded system in this study is to achieve more accurate estimation by increasing the number of inputs to the ANN system. The cascaded ANN system is compared with the simple conventional ANN model to demonstrate these advantages. The dataset, comprising 329 samples, was obtained from experimental applications; a small part of it was used to produce more readable graphs. 30% of the data was used for testing and validation. Test and validation data were determined separately for each ANN model, and the reliability of each model was tested. As a result of this study, it was found that the cascaded ANN system produced more accurate estimates than the conventional ANN model.
Keywords: cascaded neural network, internal temperature, inverter, three-phase induction motor
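The cascading idea, feeding the outputs of upstream models into a downstream model alongside the raw inputs, can be sketched as follows. The stand-in linear "models" and all weights below are hypothetical placeholders, not the eight trained ANNs of the study:

```python
def linear_model(weights, bias):
    """Return a stand-in 'model': a fixed linear map used in place of a trained ANN."""
    def predict(inputs):
        return sum(w * x for w, x in zip(weights, inputs)) + bias
    return predict

# Two upstream sub-models each map the raw inputs to an intermediate estimate.
stage_a = linear_model([0.2, 0.5], bias=0.1)
stage_b = linear_model([0.4, -0.3], bias=0.0)

# The final model sees the raw inputs plus both intermediate outputs: the cascade
# enlarges the downstream model's input vector, mirroring the 10-to-17-input growth.
final = linear_model([0.1, 0.1, 0.6, 0.6], bias=0.05)

def cascaded_predict(raw_inputs):
    intermediates = [stage_a(raw_inputs), stage_b(raw_inputs)]
    return final(list(raw_inputs) + intermediates)

y = cascaded_predict([1.0, 2.0])
```

The structural point is that each downstream model receives strictly more information than a stand-alone model on the same raw inputs, which is the stated rationale for the cascade.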
Procedia PDF Downloads 345
17596 Analysis on Prediction Models of TBM Performance and Selection of Optimal Input Parameters
Authors: Hang Lo Lee, Ki Il Song, Hee Hwan Ryu
Abstract:
Accurate prediction of TBM (Tunnel Boring Machine) performance is very difficult, yet it is essential for reliable estimation of the construction period and cost in the preconstruction stage. To this end, the aim of this study is to analyze the evaluation process of the various TBM performance prediction models published since 2000, and to select the optimal input parameters for a prediction model. A classification system for TBM performance prediction models and the applied methodology are proposed in this research. The input and output parameters used by the prediction models are also presented. Based on these results, a statistical analysis is performed on data collected from a shield TBM tunnel in South Korea. By performing simple regression and residual analysis using the statistical program R, the optimal input parameters are selected. These results are expected to be used in the development of a TBM performance prediction model.
Keywords: TBM performance prediction model, classification system, simple regression analysis, residual analysis, optimal input parameters
Procedia PDF Downloads 309