Search results for: gait time
17277 Evaluation of the Effectiveness of a HAWK Signal on Compliance in Las Vegas Nevada
Authors: A. Paz, M. Khadka, N. Veeramisti, B. Morris
Abstract:
There continues to be a large number of crashes involving pedestrians in Nevada despite the numerous safety mechanisms currently used at roadway crossings. Hence, additional and more effective mechanisms are required to reduce crashes in Las Vegas in particular and Nevada in general. A potential mechanism to reduce conflicts between pedestrians and vehicles is a High-intensity Activated crossWalK (HAWK) signal. This study evaluates the effects of such a signal at a particular site in Las Vegas. Video data were collected using two cameras facing the eastbound and westbound traffic. One week of video data before and after the deployment of the signal was collected to capture the behavior of both pedestrians and drivers. T-test analyses of pedestrian waiting time at the curb, curb-to-curb crossing time, total crossing time, jaywalking events, and near-crash events show that the HAWK system provides significant benefits.
Keywords: pedestrian crashes, HAWK signal, traffic safety, pedestrian danger index
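The before/after comparison described in this abstract rests on a two-sample t-test. As a rough sketch only (the waiting-time values below are hypothetical, not the study's data), Welch's version of the test can be computed directly:

```python
import numpy as np

def welch_t(a, b):
    """Welch's two-sample t statistic and its degrees of freedom."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    va, vb = a.var(ddof=1) / len(a), b.var(ddof=1) / len(b)
    t = (a.mean() - b.mean()) / np.sqrt(va + vb)
    df = (va + vb) ** 2 / (va**2 / (len(a) - 1) + vb**2 / (len(b) - 1))
    return t, df

# Hypothetical pedestrian waiting times at the curb (seconds)
before = [42.0, 55.0, 47.0, 60.0, 52.0, 49.0]  # before HAWK deployment
after = [30.0, 35.0, 28.0, 33.0, 31.0, 36.0]   # after HAWK deployment
t, df = welch_t(before, after)                  # large positive t suggests a drop
```

The t statistic would then be compared against the t distribution with df degrees of freedom to obtain a p-value.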
Procedia PDF Downloads 341
17276 Ultracapacitor State-of-Energy Monitoring System with On-Line Parameter Identification
Authors: N. Reichbach, A. Kuperman
Abstract:
The paper describes the design of a monitoring system for supercapacitor packs in propulsion systems, allowing the instantaneous energy capacity to be determined under power loading. The system contains a real-time recursive-least-squares identification mechanism that estimates the values of pack capacitance and equivalent series resistance, which are required for accurate calculation of the state-of-energy.
Keywords: real-time monitoring, RLS identification algorithm, state-of-energy, supercapacitor
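A minimal sketch of the recursive-least-squares idea follows. The discharge model (open-circuit voltage minus charge over capacitance minus ESR times current), the pulsed load profile, and all parameter values are illustrative assumptions, not the authors' setup:

```python
import numpy as np

def rls(Phi, y, lam=1.0, delta=1e4):
    """Recursive least squares: y[k] ~ Phi[k] @ theta, forgetting factor lam."""
    n = Phi.shape[1]
    theta, P = np.zeros(n), delta * np.eye(n)
    for phi, yk in zip(Phi, y):
        Pphi = P @ phi
        k = Pphi / (lam + phi @ Pphi)           # gain vector
        theta = theta + k * (yk - phi @ theta)  # prediction-error update
        P = (P - np.outer(k, Pphi)) / lam       # covariance update
    return theta

# Synthetic pulsed-current discharge: v = v0 - q/C - R*i  (assumed model)
dt, C_true, R_true, v0 = 0.1, 100.0, 0.02, 2.7
t = np.arange(0.0, 60.0, dt)
i = np.where((t // 5) % 2 == 0, 1.0, 3.0)  # pulsed load current (A)
q = np.cumsum(i) * dt                      # cumulative charge (C)
v = v0 - q / C_true - R_true * i           # terminal voltage (V)

# Linear-in-parameters regressor: theta = [v0, 1/C, R]
Phi = np.column_stack([np.ones_like(t), -q, -i])
theta = rls(Phi, v)
C_est, R_est = 1.0 / theta[1], theta[2]
```

With the capacitance and open-circuit voltage estimated, the state-of-energy follows from E = ½Cv².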
Procedia PDF Downloads 535
17275 Evidence Theory Enabled Quickest Change Detection Using Big Time-Series Data from Internet of Things
Authors: Hossein Jafari, Xiangfang Li, Lijun Qian, Alexander Aved, Timothy Kroecker
Abstract:
Traditionally in sensor networks, and recently in the Internet of Things, numerous heterogeneous sensors are deployed in a distributed manner to monitor a phenomenon that can often be modeled by an underlying stochastic process. The big time-series data collected by the sensors must be analyzed to detect changes in the stochastic process as quickly as possible with a tolerable false-alarm rate. However, sensors may differ in accuracy and sensitivity range, and their performance decays over time. As a result, the big time-series data collected by the sensors will contain uncertainties, and sometimes the data are conflicting. In this study, we present a framework that exploits the ability of evidence theory (a.k.a. Dempster-Shafer and Dezert-Smarandache theories) to represent and manage uncertainty and conflict for fast change detection and to deal effectively with complementary hypotheses. Specifically, the Kullback-Leibler divergence is used as the similarity metric to calculate the distances between the estimated current distribution and the pre- and post-change distributions. Mass functions are then calculated, and the related combination rules are applied to combine the mass values among all sensors. Furthermore, we applied the method to estimate the minimum number of sensors that need to be combined, so that computational efficiency could be improved. A cumulative sum test is then applied to the ratio of pignistic probabilities to detect and declare the change for decision-making purposes. Simulation results using both synthetic data and real data from an experimental setup demonstrate the effectiveness of the presented schemes.
Keywords: CUSUM, evidence theory, KL divergence, quickest change detection, time series data
Procedia PDF Downloads 334
17274 Optimization of Wire EDM Parameters for Fabrication of Micro Channels
Authors: Gurinder Singh Brar, Sarbjeet Singh, Harry Garg
Abstract:
Wire Electric Discharge Machining (WEDM) is a thermal machining process capable of machining electrically conductive materials irrespective of their hardness. WEDM is widely used to machine micro-scale parts with high dimensional accuracy and surface finish. The objective of this paper is to optimize the process parameters of wire EDM for fabricating microchannels and to calculate the surface finish and material removal rate of microchannels fabricated using wire EDM. The material used is aluminum 6061 alloy. The experiments were performed on a CNC wire-cut electric discharge machine. The effects of various WEDM parameters, namely pulse-on time (TON) with levels (100, 150, 200), pulse-off time (TOFF) with levels (25, 35, 45), and current (IP) with levels (105, 110, 115), on the output parameters, i.e., surface roughness and material removal rate (MRR), were investigated. Each experiment was conducted under different conditions of pulse-on time, pulse-off time, and peak current. For material removal rate, TON and IP were the most significant process parameters. MRR increases with increasing TON and IP and decreases with increasing TOFF. For surface roughness, TON and IP have the maximum effect, and TOFF was found to be less effective.
Keywords: microchannels, Wire Electric Discharge Machining (WEDM), Metal Removal Rate (MRR), surface finish
Procedia PDF Downloads 499
17273 Quantitative Analysis of Camera Setup for Optical Motion Capture Systems
Authors: J. T. Pitale, S. Ghassab, H. Ay, N. Berme
Abstract:
Biomechanics researchers commonly use marker-based optical motion capture (MoCap) systems to extract human body kinematic data. These systems use cameras to detect passive or active markers placed on the subject. The cameras use triangulation methods to form images of the markers, which typically requires each marker to be visible to at least two cameras simultaneously. Cameras in a conventional optical MoCap system are mounted at a distance from the subject, typically on walls, the ceiling, or fixed or adjustable frame structures. To accommodate space constraints, and as portable force measurement systems gain popularity, there is a need for ever smaller capture volumes. When the efficacy of a MoCap system is investigated, it is important to consider the tradeoff among camera distance from the subject, pixel density, and field of view (FOV). If cameras are mounted relatively close to a subject, the area corresponding to each pixel is reduced, thus increasing the image resolution. However, the cross-section of the capture volume also decreases, reducing the visible area; because of this reduction, additional cameras may be required in such applications. On the other hand, mounting cameras relatively far from the subject increases the visible area but reduces the image quality. The goal of this study was to develop a quantitative methodology to investigate marker occlusions and optimize camera placement for a given capture volume and subject postures using three-dimensional computer-aided design (CAD) tools. We modeled a 4.9 m x 3.7 m x 2.4 m (L x W x H) MoCap volume and designed a mounting structure for the cameras using SOLIDWORKS (Dassault Systems, MA, USA). The FOV was used to generate the capture volume for each camera placed on the structure. A human body model with configurable posture was placed at the center of the capture volume in the CAD environment. We studied three postures: initial contact, mid-stance, and early swing. The human body CAD model was adjusted for each posture based on the range of joint angles. Markers were attached to the model to enable a full-body capture. The cameras were placed around the capture volume at a maximum distance of 2.7 m from the subject. We used the Camera View feature in SOLIDWORKS to generate images of the subject as seen by each camera, and the number of markers visible to each camera was tabulated. The approach presented in this study provides a quantitative method to investigate the efficacy and efficiency of a MoCap camera setup. It enables optimization of a camera setup by adjusting the position and orientation of cameras in the CAD environment and quantifying marker visibility, and it makes it possible to compare different camera placement options to each other on the same quantitative basis. The flexibility of the CAD environment enables accurate representation of the capture volume, including any objects that may cause obstructions between the subject and the cameras.
Keywords: motion capture, cameras, biomechanics, gait analysis
Procedia PDF Downloads 310
17272 Aging and Mechanical Behavior of Be-treated 7075 Aluminum Alloys
Authors: Mahmoud M. Tash, S. Alkahtani
Abstract:
The present study was undertaken to investigate the effect of pre-aging and aging parameters (time and temperature) on the mechanical properties of Al-Mg-Zn (7075) alloys. Ultimate tensile strength, 0.5% offset yield strength, and % elongation measurements were carried out on specimens prepared from cast and heat-treated 7075 alloys. Aging treatments were carried out on the solution heat-treated (SHT) specimens after quenching in warm water. The specimens were aged under different conditions: natural aging was carried out at room temperature for different periods of time, while double aging was performed on SHT specimens (pre-aged at different times and temperatures, followed by high-temperature aging). Ultimate tensile strength, yield strength, and % elongation as functions of the different pre-aging and aging parameters are analysed to acquire an understanding of the effects of these variables and their interactions on the mechanical properties of Be-treated 7075 alloys.
Keywords: duplex aging treatment, mechanical properties, Al-Mg-Zn (7075) alloys, manufacturing
Procedia PDF Downloads 240
17271 Application of Laser Spectroscopy for Detection of Actinides and Lanthanides in Solutions
Authors: Igor Izosimov
Abstract:
This work is devoted to applications of time-resolved laser-induced luminescence (TRLIF) spectroscopy and time-resolved laser-induced chemiluminescence spectroscopy for the detection of lanthanides and actinides. Results of experiments on Eu, Sm, U, and Pu detection in solutions are presented. The limit of detection (LOD) of uranyl in urine in our TRLIF experiments was up to 5 pg/ml. In blood plasma, the LOD was 0.1 ng/ml, and after mineralization it was up to 8-10 pg/ml. In pure solution, the limit of detection of europium was 0.005 ng/ml and that of samarium 0.07 ng/ml. After addition of urine, the limit of detection of europium was 0.015 ng/ml and that of samarium 0.2 ng/ml. Pu, Np, and some U compounds do not produce direct luminescence in solutions, but when excited by laser radiation, they can induce chemiluminescence of a chemiluminogen (luminol in our experiments). It is shown that the multi-photon scheme of chemiluminescence excitation makes chemiluminescence not only a highly sensitive but also a highly selective tool for the detection of lanthanides/actinides in solutions.
Keywords: actinides/lanthanides detection, laser spectroscopy with time resolution, luminescence/chemiluminescence, solutions
Procedia PDF Downloads 333
17270 Assessing Traffic Calming Measures for Safe and Accessible Emergency Routes in Norrkoping City in Sweden
Authors: Ghazwan Al-Haji
Abstract:
Most accidents occur in urban areas, and the casualties most often involved are vulnerable road users (pedestrians and cyclists). Traffic calming measures (TCMs) are widely used and considered successful in reducing speed and traffic volume. However, TCMs create unwanted effects, including noise, emissions, energy consumption, vehicle delays, and increased emergency response time (ERT). Different vertical and horizontal TCMs have already been applied nationally (in Sweden) and internationally, with different impacts. It is a big challenge for traffic engineers, planners, and policy-makers to choose and prioritize the best TCMs to implement. This study will assess the existing guidelines for TCMs in relation to safety and ERT, with a focus on data from Norrkoping city in Sweden. The expected results will save lives, time, and money, particularly on Swedish roads. The study will also review new technologies and how they can improve safety and reduce ERT.
Keywords: traffic calming measures, traffic safety, delay time, vulnerable road users
Procedia PDF Downloads 140
17269 Process Driven Architecture for the 'Lessons Learnt' Knowledge Sharing Framework: The Case of a 'Lessons Learnt' Framework for KOC
Authors: Rima Al-Awadhi, Abdul Jaleel Tharayil
Abstract:
On a regular basis, KOC engages in various types of projects. However, due to the very nature and complexity involved, each project experience generates many 'learnings' that need to be factored in while drafting a new contract, so that the same mistakes are not repeated. But many a time these learnings are localized and remain tacit, leading to scope re-work, longer cycle times, schedule overruns, adjustment orders, and claims. These experiences are also not readily available to new employees, leading to a steep learning curve and a longer time to competency. This paper shares our experience in designing and implementing a process-driven architecture for the 'lessons learnt' knowledge-sharing framework in KOC. It highlights the 'lessons learnt' sharing process adopted, integration with the organizational processes, the governance framework, the challenges faced, and learning from our experience in implementing a 'lessons learnt' framework.
Keywords: lessons learnt, knowledge transfer, knowledge sharing, successful practices, Lessons Learnt Workshop, governance framework
Procedia PDF Downloads 578
17268 Income-Consumption Relationships in Pakistan (1980-2011): A Cointegration Approach
Authors: Himayatullah Khan, Alena Fedorova
Abstract:
The present paper analyses the income-consumption relationship in Pakistan using annual time-series data from 1980-81 to 2010-11. The paper uses the Augmented Dickey-Fuller test to check for unit roots and stationarity in the two time series. The paper finds that the two time series are nonstationary but stationary at their first differences. The Augmented Engle-Granger test and the Cointegrating Regression Durbin-Watson test imply that the two time series of consumption and income are cointegrated, and that the long-run marginal propensity to consume is 0.88, as given by the estimated (static) equilibrium relation. The paper also uses an error correction mechanism (ECM) to model the dynamic relationship; the purpose of the ECM is to indicate the speed of adjustment from short-run disequilibrium to the long-run equilibrium state. The results show that the short-run MPC is 0.93 and highly significant. The coefficient of the Engle-Granger residuals is negative but insignificant; statistically, the equilibrium error term is zero, which suggests that consumption adjusts to changes in GDP within the same period. Short-run changes in GDP have a positive impact on short-run changes in consumption, so we may interpret 0.93 as the short-run MPC. The pairwise Granger causality test shows that GDP and consumption Granger-cause each other.
Keywords: cointegrating regression, Augmented Dickey-Fuller test, Augmented Engle-Granger test, Granger causality, error correction mechanism
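The two-step Engle-Granger/ECM procedure can be illustrated on synthetic cointegrated series (not Pakistan's actual data). A complete application would also run an ADF test on the residuals against the appropriate critical values, which is omitted in this sketch:

```python
import numpy as np

def ols(X, y):
    """Ordinary least squares coefficients."""
    return np.linalg.lstsq(X, y, rcond=None)[0]

rng = np.random.default_rng(1)
n = 200
y_inc = np.cumsum(rng.normal(0.5, 1.0, n))   # random-walk "income" (synthetic)
cons = 0.9 * y_inc + rng.normal(0, 0.5, n)   # cointegrated "consumption"

# Step 1: static cointegrating regression  cons = a + b*income + u
X1 = np.column_stack([np.ones(n), y_inc])
a, b = ols(X1, cons)                 # b plays the role of the long-run MPC
u = cons - (a + b * y_inc)           # equilibrium error (residuals)

# Step 2: error-correction model  d(cons) = c0 + g*u[t-1] + m*d(income)
dc, dy, ul = np.diff(cons), np.diff(y_inc), u[:-1]
X2 = np.column_stack([np.ones(n - 1), ul, dy])
c0, g, m = ols(X2, dc)               # g < 0 signals adjustment back to equilibrium
```

Here m plays the role of the short-run MPC, and the negative error-correction coefficient g measures the speed of adjustment toward the long-run relation.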
Procedia PDF Downloads 414
17267 Wind Speed Data Analysis in Colombia in 2013 and 2015
Authors: Harold P. Villota, Alejandro Osorio B.
Abstract:
Energy meteorology is an area for studying energy complementarity and the use of renewable sources in interconnected systems. To diversify the energy matrix in Colombia with wind sources, it is necessary to know the underlying databases. However, the time series given by 260 automatic weather stations contain empty and invalid records, so the purpose is to fill the time series by selecting two years to characterize and impute, and to use them as a base to complete the data between 2005 and 2020.
Keywords: complementarity, wind speed, renewables, Colombia, characterization, imputation
Procedia PDF Downloads 164
17266 Tracking the Effect of Ibutilide on Amplitude and Frequency of Fibrillatory Intracardiac Electrograms Using the Regression Analysis
Authors: H. Hajimolahoseini, J. Hashemi, D. Redfearn
Abstract:
Background: Catheter ablation is an effective therapy for symptomatic atrial fibrillation (AF). The intracardiac electrogram (IEGM) collected during this procedure contains precious information that has not been explored to its full capacity. Novel processing techniques allow looking at these recordings from different perspectives, which can lead to improved therapeutic approaches. In our previous study, we showed that variation in amplitude, measured through Shannon entropy, could be used as an AF recurrence risk stratification factor in patients who received Ibutilide before the electrograms were recorded. The aim of this study is to further investigate the effect of Ibutilide on the characteristics of signals recorded from the left atrium (LA) of patients with persistent AF before and after administration of the drug. Methods: The IEGMs collected from different intra-atrial sites of 12 patients were studied and compared before and after Ibutilide administration. First, before- and after-Ibutilide IEGMs recorded within a Euclidean distance of 3 mm in the LA were selected as pairs for comparison. For every selected pair of IEGMs, the probability distribution function (PDF) of the amplitude in the time domain and of the magnitude in the frequency domain was estimated using regression analysis. The PDF represents the relative likelihood of a variable falling within a specific range of values. Results: Our observations showed that in the time domain, the PDF of amplitudes fitted a Gaussian distribution, while in the frequency domain, it fitted a Rayleigh distribution. Our observations also revealed that after Ibutilide administration, the IEGMs had significantly narrower, short-tailed PDFs in both the time and frequency domains. Conclusion: This study shows that the PDFs of the IEGMs before and after administration of Ibutilide have significantly different properties in both the time and frequency domains. Hence, by fitting the PDF of IEGMs in the time domain to a Gaussian distribution, or in the frequency domain to a Rayleigh distribution, the effect of Ibutilide can easily be tracked using the statistics of the PDF (e.g., the standard deviation), whereas this is difficult from the waveform of the IEGMs itself.
Keywords: atrial fibrillation, catheter ablation, probability distribution function, time-frequency characteristics
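The distribution fits described above have simple closed-form estimators. In the sketch below, the synthetic "IEGM" samples are placeholders for the real recordings; narrower post-drug PDFs would show up as smaller estimated sd and sigma:

```python
import numpy as np

def fit_gaussian(x):
    """Moment fit of a Gaussian PDF: mean and standard deviation."""
    return np.mean(x), np.std(x, ddof=1)

def fit_rayleigh(mag):
    """Maximum-likelihood Rayleigh scale: sigma^2 = mean(m^2) / 2."""
    return np.sqrt(np.mean(np.square(mag)) / 2.0)

rng = np.random.default_rng(2)
amplitude = rng.normal(0.0, 0.4, 5000)  # stand-in time-domain amplitudes
magnitude = rng.rayleigh(0.4, 5000)     # stand-in frequency-domain magnitudes

mu, sd = fit_gaussian(amplitude)        # Gaussian fit in the time domain
sigma = fit_rayleigh(magnitude)         # Rayleigh fit in the frequency domain
```

Comparing sd (or sigma) before and after drug administration is then a one-number summary of the PDF narrowing.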
Procedia PDF Downloads 159
17265 Study on an Integrated Real-Time Sensor in Droplet-Based Microfluidics
Authors: Tien-Li Chang, Huang-Chi Huang, Zhao-Chi Chen, Wun-Yi Chen
Abstract:
Droplet-based microfluidics is used to implement micro-reactors for chemical and biological assays. Hence, the precise addition of reagents into the droplets is essential to this function in the scope of lab-on-a-chip applications. To obtain the characteristics of the droplets (size, velocity, pressure, and frequency of production), this study describes an integrated on-chip method for real-time signal detection. By controlling and manipulating the fluids, the flow behavior can be obtained in droplet-based microfluidics. The detection method uses a type of infrared sensor. From the varieties of droplets in the microfluidic devices, the real-time velocity and pressure conditions are obtained from the sensors. Here, the microfluidic devices are fabricated from polydimethylsiloxane (PDMS). To measure the droplets, signal acquisition from the sensor and LabVIEW program control were established in the microchannel devices. The devices can generate droplets of different sizes, where the flow rate of the oil phase is fixed at 30 μl/hr and the flow rate of the water phase ranges from 20 μl/hr to 80 μl/hr. The experimental results demonstrate that the sensors are able to measure the time difference of droplets at different velocities at voltages from 0 V to 2 V. Consequently, a maximum droplet speed of 1.6 mm/s and the related flow behaviors were measured, which can help develop and integrate practical microfluidic applications.
Keywords: microfluidic, droplets, sensors, single detection
Procedia PDF Downloads 493
17264 Data Quality Enhancement with String Length Distribution
Authors: Qi Xiu, Hiromu Hota, Yohsuke Ishii, Takuya Oda
Abstract:
Recently, collectable manufacturing data have been increasing rapidly. On the other hand, mega recalls are becoming a serious social problem. Under such circumstances, there are increasing needs to prevent mega recalls through defect analysis, such as root cause analysis and abnormality detection, utilizing manufacturing data. However, the time required to classify strings in manufacturing data by traditional methods is too long to meet the requirement of quick defect analysis. Therefore, we present the String Length Distribution Classification (SLDC) method to correctly classify strings in a short time. This method learns character features, especially the string length distribution, from product IDs and machine IDs in BOMs and asset lists. By applying the proposed method to strings in actual manufacturing data, we verified that the classification time can be reduced by 80%. As a result, it can be estimated that the requirement of quick defect analysis can be fulfilled.
Keywords: string classification, data quality, feature selection, probability distribution, string length
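A toy version of length-distribution classification might look like the following; the ID formats are invented for illustration and are not the paper's actual data or full SLDC feature set:

```python
from collections import Counter

def learn_length_dist(strings):
    """Relative frequency of each string length in a training class."""
    counts = Counter(len(s) for s in strings)
    total = sum(counts.values())
    return {length: c / total for length, c in counts.items()}

def classify(s, class_dists, floor=1e-6):
    """Pick the class whose length distribution best explains s."""
    return max(class_dists, key=lambda c: class_dists[c].get(len(s), floor))

# Hypothetical training strings for two ID classes
dists = {
    "product_id": learn_length_dist(["P-10023", "P-10456", "P-20981"]),
    "machine_id": learn_length_dist(["M01", "M17", "M09"]),
}
label = classify("P-33210", dists)  # length 7 matches the product_id class
```

The real method combines length with other character features; the point of the sketch is that a length histogram is cheap to learn and cheap to evaluate, which is where the reported speedup comes from.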
Procedia PDF Downloads 318
17263 Comparative Study of Outcomes of Nonfixation of Mesh versus Fixation in Laparoscopic Total Extra Peritoneal (TEP) Repair of Inguinal Hernia: A Prospective Randomized Controlled Trial
Authors: Raman Sharma, S. K. Jain
Abstract:
Aims and Objectives: Fixation of the mesh during laparoscopic total extraperitoneal (TEP) repair of inguinal hernia is thought to be necessary to prevent recurrence. However, mesh fixation may increase surgical complications and postoperative pain. Our objective was to compare the outcomes of nonfixation with fixation of polypropylene mesh by metal tacks during TEP repair of inguinal hernia. Methods: Forty patients aged 18 to 72 years with inguinal hernia were included, who underwent laparoscopic TEP repair of inguinal hernia with (n=20) or without (n=20) fixation of the mesh. The outcomes were operative duration, postoperative pain score, cost, in-hospital stay, time to return to normal activity, and complications. Results: Patients in whom the mesh was not fixed had a shorter mean operating time (p < 0.05). We found no difference between the groups in postoperative pain score, incidence of recurrence, in-hospital stay, time to return to normal activity, or complications (p > 0.05). Moreover, a net cost saving was realized for each hernia repair performed without stapled mesh. Conclusions: TEP repair without mesh fixation resulted in a shorter operating time and lower operative cost, with no difference between the groups in postoperative pain score, incidence of recurrence, in-hospital stay, time to return to normal activity, or complications. All this contributes to making TEP repair without mesh fixation a better choice for the repair of uncomplicated inguinal hernia, especially in developing nations with scarce resources.
Keywords: postoperative pain score, inguinal hernia, nonfixation of mesh, total extra peritoneal (TEP)
Procedia PDF Downloads 343
17262 Quality Control of 99mTc-Labeled Radiopharmaceuticals Using the Chromatography Strips
Authors: Yasuyuki Takahashi, Akemi Yoshida, Hirotaka Shimada
Abstract:
99mTc-2-methoxy-isobutyl-isonitrile (MIBI) and 99mTc-mercaptoacetyl-glycylglycyl-glycine (MAG3) are heated to 368-372 K and labeled with 99mTc-pertechnetate. Quality control (QC) of 99mTc-labeled radiopharmaceuticals is performed at hospitals using liquid chromatography, which is difficult to carry out in general hospitals. We used chromatography strips to simplify QC and investigated the effects of the test procedures on quality control. The agent examined in this study is 99mTc-MAG3. The solvent was chloroform + acetone + tetrahydrofuran, and the gamma counter was an ARC-380CL. The varied conditions were as follows: heating temperature (293, 313, 333, 353, and 372 K); resting time after labeling (15 min at 293 K and 372 K, and 1 hour at 293 K); and expiration year for use (2011, 2012, 2013, 2014, and 2015). The measurement time on the gamma counter was one minute. A nuclear medicine clinician judged the quality of the preparation in deciding the usability of the retested agent. Two people conducted the test procedure twice in order to compare reproducibility. The percentage radiochemical purity (%RCP) was approximately 50% under insufficient heat treatment and improved as the temperature and heating time increased. Moreover, the %RCP improved with time even at low temperatures. Furthermore, there was no deterioration with time after the expiration date. The objective of these tests was to determine soluble 99mTc impurities, including 99mTc-pertechnetate and hydrolyzed-reduced 99mTc. Therefore, we attributed low purity to insufficient heating and to operational errors in the labeling. It is concluded that quality control is a necessary procedure in nuclear medicine to ensure safe scanning, and it is suggested that the labeled product be checked against its specifications.
Keywords: quality control, Tc-99m labeled radiopharmaceutical, chromatography strip, nuclear medicine
Procedia PDF Downloads 322
17261 Adaptive Optimal Controller for Uncertain Inverted Pendulum System: A Dynamic Programming Approach for Continuous Time System
Authors: Dao Phuong Nam, Tran Van Tuyen, Do Trong Tan, Bui Minh Dinh, Nguyen Van Huong
Abstract:
In this paper, we investigate the adaptive optimal control law for continuous-time systems with input disturbances and unknown parameters. This paper extends previous works to obtain the robust control law of uncertain systems. Through theoretical analysis, an adaptive dynamic programming (ADP) based optimal control is proposed to stabilize the closed-loop system, and the convergence properties of the proposed iterative algorithm are ensured. Moreover, global asymptotic stability (GAS) of the closed-loop system is also analyzed. The theoretical analysis for continuous-time systems and simulation results demonstrate the performance of the proposed algorithm on an inverted pendulum system.
Keywords: approximate/adaptive dynamic programming, ADP, adaptive optimal control law, input state stability, ISS, inverted pendulum
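The paper's ADP scheme targets continuous-time uncertain systems. As a much simpler, hedged illustration of the dynamic-programming core only, the sketch below runs value iteration on the discrete-time Riccati recursion for a linearized inverted pendulum; all model parameters are assumptions, and this is LQR value iteration, not the authors' algorithm:

```python
import numpy as np

# Linearized inverted pendulum (assumed parameters), Euler-discretized at dt
dt, g, l = 0.01, 9.81, 1.0
A = np.array([[1.0, dt], [g / l * dt, 1.0]])  # state: [angle, angular rate]
B = np.array([[0.0], [dt]])
Q, R = np.eye(2), np.array([[0.1]])

# Value iteration on the Riccati recursion (dynamic-programming core of LQR)
P = np.eye(2)
for _ in range(2000):
    K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)  # greedy feedback gain
    P = Q + A.T @ P @ A - A.T @ P @ B @ K              # Bellman backup

# Closed loop x+ = (A - B K) x should be stable: spectral radius < 1
rho = max(abs(np.linalg.eigvals(A - B @ K)))
```

ADP methods replace this exact model-based backup with approximations learned online when A and B are unknown or uncertain.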
Procedia PDF Downloads 195
17260 Unsteady 3D Post-Stall Aerodynamics Accounting for Effective Loss in Camber Due to Flow Separation
Authors: Aritras Roy, Rinku Mukherjee
Abstract:
The current study couples a quasi-steady vortex lattice method with a camber-correcting technique, 'decambering', for unsteady post-stall flow prediction. The wake is force-free and discrete, such that the wake lattices move with the free-stream once shed from the wing. It is observed that for some post-stall angles of attack, the time-averaged unsteady coefficient of lift sees a relative drop in comparison to its steady counterpart. Multiple solutions occur post-stall, and three different algorithms for choosing solutions in these regimes show both unsteadiness and non-convergence of the iterations. The spanwise distribution of the coefficient of lift also shows a sawtooth pattern. The distribution of vorticity changes both along the span and in the direction of the free-stream as the wake develops over time, with a distinct roll-up that increases with time.
Keywords: post-stall, unsteady, wing, aerodynamics
Procedia PDF Downloads 370
17259 Study on Compressive Strength and Setting Time of Fly Ash Concrete after Slump Recovery Using Superplasticizer
Authors: Chaiyakrit Raoupatham, Ram Hari Dhakal, Chalermchai Wanichlamlert
Abstract:
Fresh concrete that is bound to be rejected because of belated use, whether due to a delayed construction process or to traffic delays in concrete delivery, can recover its slump and be used once again by introducing a second dose of superplasticizer (naphthalene-based, type F) into the system. However, adding superplasticizer as a solution for recovering otherwise unusable slump-loss concrete may affect other concrete properties. Therefore, this paper observed the setting time and compressive strength of concrete after re-dosing with chemical admixture type F (superplasticizer, naphthalene based) for slump recovery. The concrete used in this study was fly ash concrete with fly ash replacement levels of 0%, 30%, and 50%, respectively. The concrete mix designed for the test specimens was prepared with a paste content (ratio of volume of cement to volume of voids in the aggregate) of 1.2 and 1.3, a water-to-binder ratio (w/b) ranging from 0.3 to 0.58, and an initial dose of superplasticizer (SP) ranging from 0.5 to 1.6%. The setting times of the concrete were tested both before and after re-dosing with different amounts of the second dose and at different times of dosing. The research concluded that the addition of a second dose of superplasticizer increases both initial and final setting times in accordance with the dosage, and for fly ash concrete the prolongation effect was higher as the fly ash replacement increased, reaching a maximum of about 4 hours. In the case of compressive strength, the re-dosed concrete showed strength fluctuations within the acceptable range of ±10%.
Keywords: compressive strength, fly ash concrete, second dose of superplasticizer, setting times
Procedia PDF Downloads 281
17258 Noise Source Identification on Urban Construction Sites Using Signal Time Delay Analysis
Authors: Balgaisha G. Mukanova, Yelbek B. Utepov, Aida G. Nazarova, Alisher Z. Imanov
Abstract:
The problem of identifying local noise sources on a construction site using a sensor system is considered. The signals detected at the sensors were modeled mathematically, taking into account signal decay and the signal delay time between source and detector. Recordings of noises produced by construction tools were used as the time dependence of the noise. Synthetic sensor data were constructed from these recordings by applying a model of acoustic wave propagation from a point source in three-dimensional space. All sensors and sources are assumed to lie in the same plane. A source localization method is tested that uses the signal time delay between two adjacent detectors to plot the direction toward the source; the source position is then determined from the intersection of two such direction lines. Cases of one dominant source, and of two sources in the presence of several other sources of lower intensity, are considered. The number of detectors varies from three to eight. The intensity of the noise field in the assessed area is plotted. A signal of two seconds' duration is considered; the source is localized for successive parts of the signal with a duration above 0.04 s, and the final result is obtained by computing the average value.
Keywords: acoustic model, direction of arrival, inverse source problem, sound localization, urban noises
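The time-delay estimation between two adjacent detectors can be sketched with a cross-correlation peak search; the broadband "tool noise", sampling rate, and delay below are synthetic assumptions, not the paper's recordings:

```python
import numpy as np

def delay_samples(sig_a, sig_b):
    """Delay (in samples) of sig_b relative to sig_a via cross-correlation."""
    corr = np.correlate(sig_b, sig_a, mode="full")
    return np.argmax(corr) - (len(sig_a) - 1)

fs = 1000.0                        # Hz, assumed sampling rate
rng = np.random.default_rng(3)
source = rng.normal(0, 1, 500)     # synthetic broadband tool noise
true_delay = 25                    # samples between the two sensors
sensor1 = source
sensor2 = np.concatenate([np.zeros(true_delay), source])[:len(source)]

d = delay_samples(sensor1, sensor2)
tdoa = d / fs                      # time difference of arrival, seconds
```

Each TDOA constrains the source to a hyperbola (a direction line in the far field); intersecting the lines from two detector pairs gives the position estimate described in the abstract.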
Procedia PDF Downloads 62
17257 Inter-Annual Variations of Sea Surface Temperature in the Arabian Sea
Authors: K. S. Sreejith, C. Shaji
Abstract:
Though both the Arabian Sea and its counterpart, the Bay of Bengal, are forced primarily by the semi-annually reversing monsoons, the spatio-temporal variation of surface waters is very strong in the Arabian Sea compared to the Bay of Bengal. This study focuses on the inter-annual variability of sea surface temperature (SST) in the Arabian Sea by analysing the ERSST dataset, which covers 152 years of SST (January 1854 to December 2002) based on ICOADS in situ observations. To capture the dominant SST oscillations and to understand the inter-annual SST variations in various local regions of the Arabian Sea, wavelet analysis was performed on this long time-series SST dataset. This tool has an advantage over other signal-analysis tools, such as Fourier analysis, in that it unfolds a time series (signal) in both the frequency and time domains. This technique makes it easier to determine the dominant modes of variability and to explain how those modes vary in time. The analysis revealed that pentadal SST oscillations predominate in most of the analysed local regions of the Arabian Sea. From the time information of the wavelet analysis, it was interpreted that cold and warm events of large amplitude occurred during the periods 1870-1890, 1890-1910, 1930-1950, 1980-1990, and 1990-2005. SST oscillations with peaks having a period of ~2-4 years were found to be significant in the central and eastern regions of the Arabian Sea. This indicates that the inter-annual SST variation in the Indian Ocean is affected by El Niño-Southern Oscillation (ENSO) and Indian Ocean Dipole (IOD) events.
Keywords: Arabian Sea, ICOADS, inter-annual variation, pentadal oscillation, SST, wavelet analysis
Procedia PDF Downloads 276
17256 Impacts of Climate Elements on the Annual Periodic Behavior of the Shallow Groundwater Level: Case Study from Central-Eastern Europe
Authors: Tamas Garamhegyi, Jozsef Kovacs, Rita Pongracz, Peter Tanos, Balazs Trasy, Norbert Magyar, Istvan G. Hatvani
Abstract:
Like most environmental processes, shallow groundwater fluctuation under natural circumstances also behaves periodically. With the statistical tools at hand, it can easily be determined whether a period exists in the data or not. Thus, the question may be raised: does the estimated average period characterize the whole time interval, or not? This is especially important for such a complex phenomenon as shallow groundwater fluctuation, which is driven by numerous factors. Because of the continuous changes in the oscillating components of shallow groundwater time series, the most appropriate method for investigating their periodicity is wavelet spectrum analysis. The aims of the research were to investigate the periodic behavior of the shallow groundwater time series of an agriculturally important and drought-sensitive region in Central-Eastern Europe and its relationship to the European pressure action centers. During the research, ~216 shallow groundwater observation wells located in the eastern part of the Great Hungarian Plain, with a temporal coverage of 50 years, were scanned for periodicity. By taking the full time interval as 100%, the presence of any period could be expressed as a percentage. With the complex hydrogeological/meteorological model developed in this study, non-periodic time intervals were found in the shallow groundwater levels. On the local scale, this phenomenon was linked to drought conditions, and on a regional scale to the maxima of the regional air pressure over the Gulf of Genoa. The study documented an important link between shallow groundwater levels and climate variables/indices, facilitating the necessary adaptation strategies on national and/or regional scales, which have to take into account the predictions of drought-related climatic conditions. Keywords: climate change, drought, groundwater periodicity, wavelet spectrum and coherence analyses
Procedia PDF Downloads 385
17255 Adaption Model for Building Agile Pronunciation Dictionaries Using Phonemic Distance Measurements
Authors: Akella Amarendra Babu, Rama Devi Yellasiri, Natukula Sainath
Abstract:
While human beings can easily learn and adopt pronunciation variations, machines need training before being put into use. Humans also keep a minimal vocabulary, with its pronunciation variations stored at the front of memory for ready reference, whereas machines keep the entire pronunciation dictionary for ready reference. Supervised methods are used for the preparation of pronunciation dictionaries; these take large amounts of manual effort, cost, and time, and are not suitable for real-time use. This paper presents an unsupervised adaptation model for building agile and dynamic pronunciation dictionaries online. These methods mimic the human approach of learning new pronunciations in real time. A new algorithm for measuring sound distances, called Dynamic Phone Warping, is presented and tested. Performance of the system is measured using an adaptation model, and precision is found to be better than 86 percent. Keywords: pronunciation variations, dynamic programming, machine learning, natural language processing
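The dynamic-programming core of such a distance can be illustrated with a generic alignment over phone sequences. The substitution cost and gap penalty below are placeholders; the paper's actual Dynamic Phone Warping costs are derived from phonemic distance measurements not given here.

```python
def phone_warp_distance(seq_a, seq_b, sub_cost, gap=1.0):
    """Dynamic-programming alignment distance between two phone sequences.
    sub_cost(a, b) gives the cost of matching phone a with phone b; gap is
    the insertion/deletion penalty. A simplified stand-in for the paper's
    Dynamic Phone Warping cost structure."""
    m, n = len(seq_a), len(seq_b)
    D = [[0.0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):          # aligning against an empty sequence
        D[i][0] = i * gap
    for j in range(1, n + 1):
        D[0][j] = j * gap
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            D[i][j] = min(D[i - 1][j - 1] + sub_cost(seq_a[i - 1], seq_b[j - 1]),
                          D[i - 1][j] + gap,      # delete from seq_a
                          D[i][j - 1] + gap)      # insert into seq_a
    return D[m][n]
```

A pronunciation variant whose distance to a stored entry falls below a threshold can then be merged into the dictionary online rather than added as a new entry.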
Procedia PDF Downloads 176
17254 An ALM Matrix Completion Algorithm for Recovering Weather Monitoring Data
Authors: Yuqing Chen, Ying Xu, Renfa Li
Abstract:
The development of matrix completion theory provides new approaches for data gathering in Wireless Sensor Networks (WSN). Existing matrix completion algorithms for WSN mainly consider how to reduce the number of samples, without considering real-time performance when recovering the data matrix. In order to guarantee recovery accuracy and reduce recovery time simultaneously, we propose a new ALM algorithm to recover weather monitoring data. Extensive experiments were carried out to investigate the performance of the proposed ALM algorithm using different parameter settings, sampling rates and sampling models. In addition, we compare the proposed ALM algorithm with existing algorithms in the literature. Experimental results show that the ALM algorithm obtains better overall recovery accuracy with less computing time, demonstrating that it is an effective and efficient approach for recovering real-world weather monitoring data in WSN. Keywords: wireless sensor network, matrix completion, singular value thresholding, augmented Lagrange multiplier
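The recovery idea can be sketched with a simplified iterative singular-value-thresholding loop. This is a hard-threshold variant that assumes the target rank is known, not the authors' ALM algorithm: missing entries are repeatedly imputed from a low-rank SVD estimate while the sampled entries are held fixed.

```python
import numpy as np

def complete_matrix(M_obs, mask, rank, iters=300):
    """Low-rank matrix completion by iterative truncated-SVD imputation
    (a simplified stand-in for the paper's ALM scheme).
    M_obs: matrix holding the sampled values where mask is True;
    mask:  boolean array, True at sampled entries."""
    X = np.where(mask, M_obs, 0.0)
    for _ in range(iters):
        U, s, Vt = np.linalg.svd(X, full_matrices=False)
        Z = (U[:, :rank] * s[:rank]) @ Vt[:rank]   # best rank-r approximation
        X = np.where(mask, M_obs, Z)               # reimpose sampled entries
    return X
```

In the WSN setting, rows could be sensors and columns sampling times; the trade-off the paper studies is between the sampling rate (how much of `mask` is True) and the accuracy and run time of the recovery.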
Procedia PDF Downloads 384
17253 Biosorption Kinetics, Isotherms, and Thermodynamic Studies of Copper (II) on Spirogyra sp.
Authors: Diwan Singh
Abstract:
The ability of non-living Spirogyra sp. biomass to biosorb copper(II) ions from aqueous solutions was explored. The effects of contact time, pH, initial copper ion concentration, biosorbent dosage and temperature were investigated in batch experiments. Both the Freundlich and Langmuir isotherms were found to fit the experimental data (R² > 0.98). Qmax obtained from the Langmuir isotherm was 28.7 mg/g of biomass. The values of Gibbs free energy (ΔG°) and enthalpy change (ΔH°) suggest that the sorption is spontaneous and endothermic at 20 °C-40 °C. Keywords: biosorption, Spirogyra sp., contact time, pH, dose
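A Langmuir fit of this kind can be reproduced on any (Ce, qe) batch data with a short linearised least-squares fit. The sketch below is generic; Qmax and KL are the usual Langmuir constants, and the data in the test are synthetic, not the paper's measurements.

```python
import numpy as np

def fit_langmuir(Ce, qe):
    """Fit the Langmuir isotherm  q_e = Qmax * K_L * C_e / (1 + K_L * C_e)
    via its linearised form  C_e/q_e = C_e/Qmax + 1/(K_L * Qmax),
    so a straight-line fit of C_e/q_e against C_e yields both constants."""
    slope, intercept = np.polyfit(Ce, Ce / qe, 1)
    Qmax = 1.0 / slope            # slope = 1/Qmax
    KL = slope / intercept        # intercept = 1/(K_L * Qmax)
    return Qmax, KL
```

The fitted Qmax corresponds to the monolayer capacity reported in the abstract (28.7 mg/g for this biomass), while KL feeds into the thermodynamic quantities such as ΔG°.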
Procedia PDF Downloads 427
17252 On Four Models of a Three Server Queue with Optional Server Vacations
Authors: Kailash C. Madan
Abstract:
We study four models of a three-server queueing system with Bernoulli-schedule optional server vacations. Customers arriving at the system one by one in a Poisson process are provided identical exponential service by three parallel servers according to a first-come, first-served queue discipline. In model A, all three servers may be allowed a vacation at one time; in model B, at most two of the three servers may be allowed a vacation at one time; in model C, at most one server is allowed a vacation; and in model D, no server is allowed a vacation. We study the steady-state behavior of the four models and obtain steady-state probability generating functions for the queue size at a random point of time for all states of the system. In model D, a known result for a three-server queueing system without server vacations is derived. Keywords: a three server queue, Bernoulli schedule server vacations, queue size distribution at a random epoch, steady state
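For model D (no vacations), the known three-server result is the classical M/M/c steady state, which can be computed directly. The sketch below is that textbook computation only; the vacation models A-C require the paper's generating-function analysis and are not reproduced here.

```python
from math import factorial

def mmc_state_probs(lam, mu, c, n_max=50):
    """Steady-state probabilities p_0..p_n_max of an M/M/c queue
    (model D corresponds to c = 3): Poisson arrivals at rate lam,
    exponential service at rate mu per server."""
    a = lam / mu                  # offered load
    rho = a / c                   # per-server utilisation, must be < 1
    assert rho < 1, "queue is unstable"
    p0 = 1.0 / (sum(a ** n / factorial(n) for n in range(c))
                + a ** c / (factorial(c) * (1 - rho)))
    probs = []
    for n in range(n_max + 1):
        if n < c:                 # some servers idle
            probs.append(p0 * a ** n / factorial(n))
        else:                     # all c servers busy, queue may form
            probs.append(p0 * a ** n / (factorial(c) * c ** (n - c)))
    return probs
```

The generating function of these probabilities is the object the paper generalises to the vacation models.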
Procedia PDF Downloads 296
17251 Time Series Regression with Meta-Clusters
Authors: Monika Chuchro
Abstract:
This paper presents a preliminary attempt to apply classification of time series using meta-clusters in order to improve the quality of regression models. Clustering was performed to obtain subgroups of normally distributed time series from data on inflow into a wastewater treatment plant, which is composed of several groups differing in mean value. Two simple algorithms, k-means and EM, were chosen as clustering methods, and the Rand index was used to measure similarity. After simple meta-clustering, a regression model was fitted for each subgroup; the final model was the sum of the subgroup models. The quality of the obtained model was compared with that of a regression model built using the same explanatory variables but with no clustering of the data. Results were compared by the coefficient of determination (R²), the mean absolute percentage error (MAPE) as a measure of prediction accuracy, and comparison on a linear chart. Preliminary results suggest the potential of the presented technique. Keywords: clustering, data analysis, data mining, predictive models
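The pipeline, cluster the series by mean value and then fit one regression per subgroup, can be sketched as follows. This is a toy NumPy version: a deterministic 1-D k-means and a linear trend per cluster stand in for the paper's k-means/EM meta-clustering and its regression models.

```python
import numpy as np

def kmeans_1d(values, k, iters=100):
    """Plain k-means on scalar features (here: the mean of each series),
    with deterministic evenly spaced initial centres."""
    centers = np.linspace(values.min(), values.max(), k)
    for _ in range(iters):
        labels = np.argmin(np.abs(values[:, None] - centers[None, :]), axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = values[labels == j].mean()
    return labels

def cluster_regression(series, k=2):
    """Cluster the rows of `series` by mean value, then fit a linear trend
    to each subgroup's average, mimicking the meta-cluster idea of separate
    models for subgroups that differ in mean value."""
    labels = kmeans_1d(series.mean(axis=1), k)
    t = np.arange(series.shape[1])
    models = {j: np.polyfit(t, series[labels == j].mean(axis=0), 1)
              for j in range(k)}
    return labels, models
```

Predictions for a new series would use the model of whichever subgroup the series is assigned to, and the combined model is what gets compared against the unclustered regression by R² and MAPE.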
Procedia PDF Downloads 466
17250 E4D-MP: Time-Lapse Multiphysics Simulation and Joint Inversion Toolset for Large-Scale Subsurface Imaging
Authors: Zhuanfang Fred Zhang, Tim C. Johnson, Yilin Fang, Chris E. Strickland
Abstract:
A variety of geophysical techniques are available to image the opaque subsurface with little or no contact with the soil. It is common to conduct time-lapse surveys of different types at a given site for improved subsurface imaging. Regardless of the chosen survey methods, it is often a challenge to process the massive amount of survey data. Currently available software applications are generally based on one-dimensional assumptions and designed for a desktop personal computer; hence, they are usually incapable of imaging three-dimensional (3D) processes/variables in the subsurface at reasonable spatial scales, and the maximum amount of data that can be inverted simultaneously is often very small due to the limitations of personal computers. High-performance, integrative software that enables real-time integration of multi-process geophysical methods is therefore needed. E4D-MP enables the integration and inversion of time-lapse, large-scale survey data from geophysical methods. Using supercomputing capability and parallel computation algorithms, E4D-MP is capable of processing data across vast spatiotemporal scales in near real time. The main code and the modules of E4D-MP for inverting individual or combined data sets of time-lapse 3D electrical resistivity, spectral induced polarization, and gravity surveys have been developed and demonstrated for subsurface imaging. E4D-MP provides the capability of imaging the processes (e.g., liquid or gas flow, solute transport, cavity development) and subsurface properties (e.g., rock/soil density, conductivity) critical for successful control of environmental engineering efforts such as environmental remediation, carbon sequestration, geothermal exploration, and mine land reclamation, among others. Keywords: gravity survey, high-performance computing, sub-surface monitoring, electrical resistivity tomography
Procedia PDF Downloads 157
17249 Impact of Varying Malting and Fermentation Durations on Specific Chemical, Functional Properties, and Microstructural Behaviour of Pearl Millet and Sorghum Flour Using Response Surface Methodology
Authors: G. Olamiti; TK. Takalani; D. Beswa, AIO Jideani
Abstract:
The study investigated the effects of malting and fermentation times on selected chemical and functional properties and the microstructural behaviour of the Agrigreen and Babala pearl millet cultivars and of sorghum flours, using response surface methodology (RSM). A Central Composite Rotatable Design (CCRD) was performed on two independent variables, malting and fermentation times (h), at levels of 24, 48, and 72 h. The dependent parameters, pH, titratable acidity (TTA), water absorption capacity (WAC), oil absorption capacity (OAC), bulk density (BD), dispersibility, and the microstructural behaviour of the flours, differed significantly (p < 0.05) with malting and fermentation times. Babala flour exhibited the highest pH value, 4.78, at 48 h malting and 81.9 h fermentation. Agrigreen flour showed the highest TTA value, 0.159%, at 81.94 h malting and 48 h fermentation. WAC was also highest in malted and fermented Babala flour, at 2.37 ml g-1 for 81.94 h malting and 48 h fermentation. Sorghum flour exhibited the lowest OAC, 1.67 ml g-1, at 14 h malting and 48 h fermentation. Agrigreen flour recorded the lowest bulk density, 0.53 g ml-1, for 72 h malting and 24 h fermentation. Sorghum flour exhibited the highest dispersibility, 56.34%, after 24 h malting and 72 h fermentation. The response surface plots showed that increased malting and fermentation times influenced the dependent parameters. The microstructures of the malted and fermented pearl millet and sorghum flours showed isolated, oval, spherical, or polygonal shapes with smooth surfaces. The optimal malting and fermentation times were 32.24 h and 63.32 h for Agrigreen, 35.18 h and 34.58 h for Babala, and 36.75 h and 47.88 h for sorghum, with a high desirability of 1.00. Validation at the optimum malting and fermentation times agreed well with the predicted values of the dependent parameters. Food processing companies can use the study's findings to improve food processing and quality. Keywords: pearl millet, malting, fermentation, microstructural behaviour
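The RSM fitting-and-optimisation step can be sketched as an ordinary least-squares fit of the second-order model followed by locating its stationary point. This is a generic illustration: the design levels 24, 48 and 72 h come from the abstract, but the response values in the test are synthetic.

```python
import numpy as np

def fit_quadratic_surface(x1, x2, y):
    """Least-squares fit of the second-order RSM model
    y = b0 + b1*x1 + b2*x2 + b11*x1^2 + b22*x2^2 + b12*x1*x2."""
    X = np.column_stack([np.ones_like(x1), x1, x2, x1 ** 2, x2 ** 2, x1 * x2])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta

def stationary_point(beta):
    """Point where the fitted surface's gradient vanishes: the candidate
    optimum examined when maximising desirability over the design region."""
    b0, b1, b2, b11, b22, b12 = beta
    H = np.array([[2 * b11, b12],
                  [b12, 2 * b22]])
    return np.linalg.solve(H, np.array([-b1, -b2]))
```

In a full RSM workflow, one such surface is fitted per response (pH, TTA, WAC, ...), and the desirability function combines them into the single optimum (e.g. malting/fermentation pair) reported for each flour.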
Procedia PDF Downloads 73
17248 Improving the Run Times of Existing and Historical Demand Models Using Simple Python Scripting
Authors: Abhijeet Ostawal, Parmjit Lall
Abstract:
The run times for a large strategic model that we were managing had become too long, leading to delays in project delivery, increased costs, and lost productivity. Software developers continuously work towards more efficient tools by changing their algorithms and processes. The issue faced by our team was how to apply the latest technologies to validated existing models that are based on much older software versions lacking the latest capabilities. The multi-model transport model that we had could only run its assignments in sequential order. Recent upgrades to the software now allow the assignment to be run in parallel, a concept called parallelization, implemented as a Python script that works only within the latest version of the software. A full model transfer to the latest version was not possible due to time, budget, and the potential changes in trip assignment. This article shows how to adapt and update the Python script so that it can be used with older software versions: the latest version is called for the parallel assignment, and the old version is then recalled for the rest of the assignment model, without affecting the results. Through a process of trial and error, run-time savings of up to 30-40% have been achieved. Assignment results were maintained within the older version, and through this learning process we have applied the methodology to other, even older, versions of the software, resulting in large time savings and greater productivity and efficiency for both client and consultant. Keywords: model run time, demand model, parallelisation, python scripting
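The wrapping idea can be sketched with the standard library: independent assignment segments are dispatched concurrently, where each worker would in practice shell out to the appropriate software version. All names below are illustrative stand-ins, not the real modelling-software API.

```python
from concurrent.futures import ThreadPoolExecutor
import time

def run_assignment(segment):
    """Stand-in for one assignment run. In a real setup this would launch
    the newer software version for the parallel assignment (e.g. via its
    command line) and hand results back to the older version afterwards."""
    time.sleep(0.05)                       # pretend to do work
    return segment, f"assigned:{segment}"

def run_all(segments, workers=4):
    """Run independent assignment segments concurrently instead of one after
    another; subprocess- or I/O-bound runs benefit even with Python threads."""
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return dict(pool.map(run_assignment, segments))
```

Because each segment's results are written back through the older version unchanged, the validated assignment outputs are preserved while the wall-clock time drops roughly in proportion to the number of concurrent runs.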
Procedia PDF Downloads 118