Search results for: retardation time
4541 Economical and Technical Analysis of Urban Transit System Selection Using TOPSIS Method According to Constructional and Operational Aspects
Authors: Ali Abdi Kordani, Meysam Rooyintan, Sid Mohammad Boroomandrad
Abstract:
Nowadays, one of the most important problems in megacities is public transportation and satisfying citizens with this system, in order to reduce traffic congestion and air pollution. Accordingly, to increase transit ridership and improve travel safety, new transportation systems such as Bus Rapid Transit (BRT), tram, and monorail have expanded, each with different merits and demerits. That is why comparing different systems for a systematic selection of public transportation is essential in a big city like Tehran, which has numerous problems in terms of traffic and pollution. This paper investigates the advantages and feasibility of using monorail, tram, and BRT systems, which are widely used in megacities all over the world. For Tehran, using SPSS statistical analysis software and the TOPSIS method, these three modes are compared to each other and the results assessed. Experts experienced in the transportation field answered the prepared matrix questionnaire used to rank each public transportation mode (tram, monorail, and BRT). According to the experts' judgments, monorail has the first priority, tram the second, and BRT the third with respect to the considered indices, such as execution costs, wasted time, depreciation, pollution, operation costs, travel time, passenger satisfaction, benefit-to-cost ratio, and traffic congestion.
Keywords: Bus Rapid Transit, Costs, Monorail, Pollution, Tram.
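TOPSIS itself is a standard multi-criteria ranking procedure, so a compact sketch may help make the abstract's method concrete. The decision matrix, weights, and criterion directions below are illustrative placeholders, not the values elicited from the paper's expert questionnaires.

```python
import numpy as np

# Minimal TOPSIS sketch: rank three transit modes against weighted criteria.
X = np.array([  # rows: monorail, tram, BRT; columns: illustrative criterion scores
    [7.0, 8.0, 6.5, 7.5],
    [6.5, 7.0, 7.0, 7.0],
    [6.0, 6.5, 7.5, 6.0],
])
weights = np.array([0.3, 0.3, 0.2, 0.2])
benefit = np.array([True, True, False, False])  # True: higher is better

R = X / np.linalg.norm(X, axis=0)           # vector-normalize each criterion
V = R * weights                             # weighted normalized matrix
ideal = np.where(benefit, V.max(axis=0), V.min(axis=0))
anti = np.where(benefit, V.min(axis=0), V.max(axis=0))
d_best = np.linalg.norm(V - ideal, axis=1)  # distance to ideal solution
d_worst = np.linalg.norm(V - anti, axis=1)  # distance to anti-ideal solution
closeness = d_worst / (d_best + d_worst)    # higher closeness = better rank

for mode, c in zip(["monorail", "tram", "BRT"], closeness):
    print(f"{mode}: {c:.3f}")
```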
4540 Effect of Progressive Type-I Right Censoring on Bayesian Statistical Inference of Simple Step–Stress Acceleration Life Testing Plan under Weibull Life Distribution
Authors: Saleem Z. Ramadan
Abstract:
This paper discusses the effects of using progressive Type-I right censoring on the design of the Simple Step-Stress Accelerated Life Testing plan, using a Bayesian approach for Weibull life products under the assumption of the cumulative exposure model. The optimization criterion used in this paper is to minimize the expected pre-posterior variance of the Pth percentile time to failure. The model variables are the stress changing time and the stress value for the first step. A comparison between conventional and progressive Type-I right censoring is provided. The results show that progressive Type-I right censoring reduces the cost of testing at the expense of test precision when the sample size is small. Moreover, using strong priors or a large sample size reduces the sensitivity of the test precision to the censoring proportion. Hence, progressive Type-I right censoring is recommended in these cases, as it reduces the cost of the test with little effect on its precision. The results also show that the choice of direct or indirect priors affects the precision of the test.
Keywords: Reliability, Accelerated life testing, Cumulative exposure model, Bayesian estimation, Progressive Type-I censoring, Weibull distribution.
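The design criterion above revolves around the Pth percentile of a Weibull failure-time distribution. As a minimal worked formula (with illustrative shape and scale values, not the paper's estimates):

```python
import numpy as np

# Pth percentile (quantile) of a Weibull life distribution with shape beta
# and scale eta: t_p = eta * (-ln(1 - p))**(1/beta).
def weibull_percentile(p, beta, eta):
    return eta * (-np.log(1.0 - p)) ** (1.0 / beta)

# Example: B10 life (10th percentile) for beta=1.5, eta=1000 hours.
print(weibull_percentile(0.10, beta=1.5, eta=1000.0))
```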
4539 Effects of Thermal Radiation and Magnetic Field on Unsteady Stretching Permeable Sheet in Presence of Free Stream Velocity
Authors: Phool Singh, Ashok Jangid, N. S. Tomer, Deepa Sinha
Abstract:
The aim of this paper is to investigate the two-dimensional unsteady flow of a viscous incompressible fluid about a stagnation point on a permeable stretching sheet in the presence of a time-dependent free stream velocity. The fluid is subjected to a transverse magnetic field in the presence of radiation effects; the Rosseland approximation is used to model the radiative heat transfer. Using a time-dependent stream function, the partial differential equations corresponding to the momentum and energy equations are converted into non-linear ordinary differential equations. Numerical solutions of these equations are obtained using the Runge-Kutta-Fehlberg method together with a Newton-Raphson shooting technique. The effects of the unsteadiness parameter, magnetic field parameter, radiation parameter, stretching parameter and Prandtl number on the flow and heat transfer characteristics are discussed. The skin-friction coefficient and Nusselt number at the sheet are computed and discussed. The results reported in the paper are in good agreement with published work in the literature.
Keywords: Magnetohydrodynamics, stretching sheet, thermal radiation, unsteady flow.
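The solution strategy named in the abstract, a Runge-Kutta integrator driven by a shooting iteration, can be sketched on a simpler relative of such boundary-layer problems. The Blasius equation below is only a stand-in for the paper's equations, and scipy's 'RK45' (Dormand-Prince) substitutes for Runge-Kutta-Fehlberg:

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import brentq

# Shooting-method sketch on the Blasius boundary-layer equation
# f''' + 0.5*f*f'' = 0, with f(0)=0, f'(0)=0, f'(inf)=1.
def rhs(eta, y):                      # y = [f, f', f'']
    return [y[1], y[2], -0.5 * y[0] * y[2]]

def residual(s, eta_max=10.0):
    # Integrate with guessed initial curvature s = f''(0).
    sol = solve_ivp(rhs, [0, eta_max], [0.0, 0.0, s], method="RK45",
                    rtol=1e-8, atol=1e-8)
    return sol.y[1, -1] - 1.0         # enforce f'(eta_max) ~ 1

s_star = brentq(residual, 0.1, 1.0)   # root-find on the missing condition
print(f"f''(0) = {s_star:.5f}")       # classical value is about 0.33206
```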
4538 Solar Radiation Time Series Prediction
Authors: Cameron Hamilton, Walter Potter, Gerrit Hoogenboom, Ronald McClendon, Will Hobbs
Abstract:
A model was constructed to predict the amount of solar radiation that will reach the surface of the earth at a given location one hour into the future. This project was supported by the Southern Company to determine at what specific times during a given day of the year solar panels can be relied upon to produce energy in sufficient quantities. Owing to their ability as universal function approximators, artificial neural networks were used to estimate the nonlinear pattern of solar radiation, taking measurements of weather conditions collected at the Griffin, Georgia weather station as inputs. A number of network configurations and training strategies were tried, though a multilayer perceptron with a variety of hidden nodes trained with the resilient propagation algorithm consistently yielded the most accurate predictions. In addition, a modeled direct normal irradiance field and adjacent weather station data were used to bolster prediction accuracy. In later trials, the solar radiation field was preprocessed with a discrete wavelet transform with the aim of removing noise from the measurements. The current model provides predictions of solar radiation with a mean square error of 0.0042, and ongoing efforts are being made to further improve the model's accuracy.
Keywords: Artificial Neural Networks, Resilient Propagation, Solar Radiation, Time Series Forecasting.
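A minimal sketch of the abstract's idea, an MLP mapping weather features to next-hour radiation, is shown below. The data are synthetic stand-ins for the Griffin, Georgia measurements, and sklearn's default Adam optimizer substitutes for resilient propagation (Rprop), which sklearn does not implement:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split

# Fake weather features: temperature, humidity, wind, cloud cover, hour.
rng = np.random.default_rng(0)
X = rng.random((2000, 5))
y = X @ np.array([0.5, -0.3, 0.1, -0.6, 0.4]) + 0.05 * rng.standard_normal(2000)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = MLPRegressor(hidden_layer_sizes=(20,), max_iter=2000, random_state=0)
model.fit(X_tr, y_tr)
mse = np.mean((model.predict(X_te) - y_te) ** 2)
print(f"test MSE: {mse:.4f}")
```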
4537 Optimization of Ethanol Fermentation from Pineapple Peel Extract Using Response Surface Methodology (RSM)
Authors: Nadya Hajar, Zainal, S., Atikah, O., Tengku Elida, T. Z. M.
Abstract:
Ethanol has been known for a long time, being perhaps the oldest product obtained through traditional biotechnological fermentation. Agricultural waste as a fermentation substrate is widely discussed as an alternative to edible feedstocks and a way to utilize organic material. Pineapple peel, a highly promising substrate, is a by-product of the pineapple processing industry. Bio-ethanol was produced from pineapple (Ananas comosus) peel extract by controlled fermentation without any pretreatment. Saccharomyces ellipsoideus was used as the inoculum in this fermentation process, as it is naturally found on pineapple skin. In this study, the capability of Response Surface Methodology (RSM) for optimizing ethanol production from pineapple peel extract using Saccharomyces ellipsoideus in a batch fermentation process was investigated. The effects of five test variables on ethanol production were evaluated over defined ranges: inoculum concentration 6-14% (v/v), pH 4.0-6.0, sugar concentration 14-22°Brix, temperature 24-32°C, and incubation time 30-54 hrs. Data obtained from the experiments were analyzed with the RSM module of MINITAB software (Version 15), whereby an optimum ethanol concentration of 8.637% (v/v) was determined at the optimum conditions of 14% (v/v) inoculum concentration, pH 6, 22°Brix, 26°C, and 30 hours of incubation. A regression model significant at the 5% level, with a correlation value of 99.96%, was also obtained.
Keywords: Bio-ethanol, pineapple peel extract, Response Surface Methodology (RSM), Saccharomyces ellipsoideus.
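A small RSM-style sketch, fitting a second-order response surface over two of the five factors and reading off the fitted optimum on a grid; the data points below are fabricated for illustration only:

```python
import numpy as np
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import LinearRegression

# Invented (pH, temperature) trials with a quadratic "ethanol yield" response.
rng = np.random.default_rng(1)
X = rng.uniform([4.0, 24.0], [6.0, 32.0], size=(30, 2))   # pH, temp (deg C)
y = (8.6 - 0.8 * (X[:, 0] - 5.5) ** 2 - 0.05 * (X[:, 1] - 27.0) ** 2
     + 0.1 * rng.standard_normal(30))                     # % (v/v) ethanol

poly = PolynomialFeatures(degree=2)            # second-order response surface
model = LinearRegression().fit(poly.fit_transform(X), y)

grid = np.array([[p, t] for p in np.linspace(4, 6, 41)
                 for t in np.linspace(24, 32, 41)])
pred = model.predict(poly.transform(grid))
print("fitted optimum (pH, temp):", grid[np.argmax(pred)], "->", pred.max())
```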
4536 Computational Modeling in Strategic Marketing
Authors: Petr Cernohorsky, Jan Voracek
Abstract:
Well-developed strategic marketing planning is the essential prerequisite for establishing the right and unique competitive advantage. A typical market, however, is a heterogeneous and decentralized structure with natural involvement of individual or group subjectivity and irrationality. These features cannot be fully expressed with one-shot rigorous formal models based on, e.g., mathematics, statistics or empirical formulas. We present an innovative solution, extending the domain of agent-based computational economics towards the concept of hybrid modeling in service provider and consumer markets such as telecommunications. The behavior of the market is described by two classes of agents - consumer and service provider agents - whose internal dynamics are fundamentally different. Customers are rather free multi-state structures, adjusting behavior and preferences quickly in accordance with time and a changing environment. Producers, on the contrary, are traditionally structured companies with comparable internal processes and specific managerial policies; their business momentum is higher and their immediate reaction possibilities are limited. This limitation underlines the importance of proper strategic planning as the main process advising managers in time whether to continue with more or less the same business or to consider the need for future structural changes that would ensure retention of existing customers or acquisition of new ones.
Keywords: Agent-based computational economics, hybrid modeling, strategic marketing, system dynamics.
4535 Research Regarding Resistance Characteristics of Biscuits Assortment Using Cone Penetrometer
Authors: G.–A. Constantin, G. Voicu, E.–M. Stefan, P. Tudor, G. Paraschiv, M.–G. Munteanu
Abstract:
In the handling and transport of food products, the products may be subjected to mechanical stresses that can lead to deterioration by deformation, breaking, or crushing. This is the case for biscuits, regardless of their type (gluten-free or sugary), the added ingredients, or the flour from which they are made. However, gluten-free biscuits have a higher mechanical resistance to breakage or crushing compared to easily shattered sugar biscuits (especially those for children). The paper presents the results of an experimental evaluation of the texture of four varieties of commercial biscuits, using a penetrometer equipped with a needle cone under five different additional weights on the cone rod. The biscuit assortments tested in the laboratory were Petit Beurre, Picnic, and Maia (all three manufactured by RoStar, Romania) and Sultani diet biscuits, manufactured by Eti Burcak Sultani (Turkey, in packs of 138 g). For the four varieties of biscuits and the five additional weights (50, 77, 100, 150 and 177 g), the experimental data obtained were subjected to regression analysis in MS Office Excel, using Velon's relationship (h = a∙ln(t) + b). The regression curves were analysed comparatively in order to identify possible differences and to highlight the variation of the penetration depth h in relation to the time t. Based on the penetration depth between successive time intervals (every 5 seconds), curves of the penetration speed in relation to time were then drawn. It was found that Velon's law fits the experimental data for all biscuit assortments and all five additional weights. The correlation coefficient R² was above 0.850 in most of the analysed cases. For Petit Beurre biscuits, the recorded penetration depths generally fell within 45-55 p.u. (penetrometric units) at an additional mass of 50 g and between 155-168 p.u. at an additional mass of 177 g. For Sultani diet biscuits, the penetration depths were within 32-35 p.u. at an additional weight of 50 g and between 80-114 p.u. at an additional weight of 177 g. The data presented in the paper can be used both by operators on the manufacturing line and by traders of these food products, in order to establish the most efficient parameters of the working regimes (when packaging and handling).
Keywords: Biscuits resistance/texture, penetration depth, penetration velocity, sharp pin penetrometer.
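Fitting Velon's relationship is a linear regression in ln(t), and the penetration speed then follows as dh/dt = a/t. A minimal sketch with illustrative (t, h) pairs in the paper's reported range, not the measured laboratory data:

```python
import numpy as np

# Illustrative penetration-depth readings every 5 seconds (p.u. = penetrometric units).
t = np.array([5.0, 10.0, 15.0, 20.0, 25.0, 30.0])   # s
h = np.array([45.0, 49.0, 51.5, 53.0, 54.2, 55.0])  # p.u.

a, b = np.polyfit(np.log(t), h, 1)        # Velon's law: h = a*ln(t) + b
h_fit = a * np.log(t) + b
ss_res = np.sum((h - h_fit) ** 2)
ss_tot = np.sum((h - h.mean()) ** 2)
print(f"a={a:.3f}, b={b:.3f}, R^2={1 - ss_res / ss_tot:.3f}")
print("penetration speed dh/dt = a/t:", np.round(a / t, 3))  # p.u. per second
```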
4534 Addressing Scalability Issues of Named Entity Recognition Using Multi-Class Support Vector Machines
Authors: Mona Soliman Habib
Abstract:
This paper explores the scalability issues associated with solving the Named Entity Recognition (NER) problem using Support Vector Machines (SVM) and high-dimensional features. The performance results of a set of experiments conducted using binary and multi-class SVM with increasing training data sizes are examined. The NER domain chosen for these experiments is biomedical publications, selected for its importance and inherent challenges. A simple machine learning approach is used that eliminates prior language knowledge such as part-of-speech or noun phrase tagging, thereby allowing for applicability across languages; no domain-specific knowledge is included. The accuracy measures achieved are comparable to those obtained using more complex approaches, which motivates investigating ways to improve the scalability of multi-class SVM in order to make the solution more practical and usable. Improving the training time of multi-class SVM would make support vector machines a more viable machine learning solution for real-world problems with large datasets. An initial prototype achieves a great improvement in training time at the expense of memory requirements.
Keywords: Named entity recognition, support vector machines, language independence, bioinformatics.
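A minimal sketch of the abstract's setup, a multi-class linear SVM over high-dimensional, language-independent token features (character n-grams, no part-of-speech or noun-phrase tags); the tokens and labels are toy examples:

```python
from sklearn.feature_extraction.text import HashingVectorizer
from sklearn.svm import LinearSVC

# Toy token stream with per-token entity labels ("O" = not an entity).
tokens = ["interleukin-2", "binds", "the", "IL-2R", "receptor", "in", "cells"]
labels = ["PROTEIN", "O", "O", "PROTEIN", "O", "O", "CELL"]

# High-dimensional character n-gram features, no linguistic preprocessing.
vec = HashingVectorizer(analyzer="char_wb", ngram_range=(2, 4), n_features=2**18)
X = vec.transform(tokens)

clf = LinearSVC()                 # one-vs-rest multi-class linear SVM
clf.fit(X, labels)
print(clf.predict(vec.transform(["IL-2"])))
```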
4533 Purity Monitor Studies in Medium Liquid Argon TPC
Authors: I. Badhrees
Abstract:
This paper describes results found over the course of a study in the field of particle physics. The study consists of two parts: one concerns the measurement of the cross section of the decay of the Z particle into two electrons, and the other deals with the measurement of the cross section of the multi-photon absorption process using a laser beam in a Liquid Argon Time Projection Chamber.
The first part of the paper concerns results based on the analysis of a data sample containing 8120 ee candidates, used to reconstruct the mass of the Z particle for each event, where each event has an ee pair with pT(e) > 20 GeV and η(e) < 2.5. Monte Carlo templates of the reconstructed Z particle were produced as a function of the Z mass scale. The distribution of the reconstructed Z mass in the data was compared to the Monte Carlo templates, and the total cross section was calculated to be 1432 pb.
The second part concerns the Liquid Argon Time Projection Chamber (LAr TPC) and the results of the interaction of a UV laser (Nd:YAG, λ = 266 nm) with LAr, through the study of the multi-photon ionization process, as part of the R&D at Bern University. The main result of this study was the cross section of the multi-photon ionization process in LAr, σe = (1.24 ± 0.10 (stat) ± 0.30 (sys)) × 10^-56 cm^4.
Keywords: ATLAS, CERN, KACST, LArTPC, Particle Physics.
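For the dielectron analysis, the invariant mass of two effectively massless electrons follows from their transverse momenta, pseudorapidities, and azimuthal angles. A small sketch with invented kinematics passing the quoted pT > 20 GeV cut:

```python
import numpy as np

# Dielectron invariant mass for massless daughters:
# m^2 = 2*pT1*pT2*(cosh(eta1 - eta2) - cos(phi1 - phi2)).
def m_ee(pt1, eta1, phi1, pt2, eta2, phi2):
    return np.sqrt(2 * pt1 * pt2 * (np.cosh(eta1 - eta2) - np.cos(phi1 - phi2)))

# Invented kinematics (GeV, radians); result lands near the Z peak (~91 GeV).
print(f"{m_ee(45.0, 0.3, 0.1, 42.0, -0.5, 3.0):.1f} GeV")
```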
4532 Efficient Program Slicing Algorithms for Measuring Functional Cohesion and Parallelism
Authors: Jehad Al Dallal
Abstract:
Program slicing is the task of finding all statements in a program that directly or indirectly influence the value of a variable occurrence. The set of statements that can affect the value of a variable at some point in a program is called a program slice. In several software engineering applications, such as program debugging and measuring program cohesion and parallelism, several slices are computed at different program points. In this paper, algorithms are introduced to compute all backward and forward static slices of a computer program by traversing the program representation graph once. The program representation graph used in this paper is called the Program Dependence Graph (PDG). We conducted an experimental comparison study using 25 software modules to show the effectiveness of the introduced algorithm for computing all backward static slices over single-point slicing approaches in computing the parallelism and functional cohesion of program modules. The effectiveness of the algorithm is measured in terms of execution time and the number of traversed PDG edges. The comparison results indicate that the introduced algorithm considerably reduces the slicing time and effort required to measure module parallelism and functional cohesion.
Keywords: Backward slicing, cohesion measure, forward slicing, parallelism measure, program dependence graph, program slicing, static slicing.
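A backward slice is a reachability computation over dependence edges. A minimal sketch on a hypothetical PDG, shown per-criterion for clarity (the paper's contribution is computing all slices in a single traversal of the graph):

```python
from collections import defaultdict, deque

# Tiny hypothetical PDG: edge u -> v means "statement v depends on u".
deps = {
    1: [2, 3],      # 1: read n
    2: [4],         # 2: i = 0
    3: [4],         # 3: s = 0
    4: [3, 5],      # 4: loop body; 5: print s
}
reverse = defaultdict(list)
for u, vs in deps.items():
    for v in vs:
        reverse[v].append(u)    # reversed edges for backward traversal

def backward_slice(criterion):
    # BFS over reversed dependence edges from the slicing criterion.
    seen, queue = {criterion}, deque([criterion])
    while queue:
        for u in reverse[queue.popleft()]:
            if u not in seen:
                seen.add(u)
                queue.append(u)
    return sorted(seen)

print(backward_slice(5))   # statements affecting the value printed at node 5
```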
4531 An FPGA Implementation of Intelligent Visual Based Fall Detection
Authors: Peng Shen Ong, Yoong Choon Chang, Chee Pun Ooi, Ettikan K. Karuppiah, Shahirina Mohd Tahir
Abstract:
Falling has been one of the major concerns and threats to the independence of the elderly in their daily lives. With the significant worldwide growth of the aging population, it is essential to have a fall detection solution that operates at high accuracy in real time and supports large scale implementation using multiple cameras. The Field Programmable Gate Array (FPGA) is a highly promising hardware accelerator for many emerging embedded vision systems. The main objective of this paper is therefore to present an FPGA-based visual fall detection solution that meets stringent real-time requirements with high accuracy. A hardware architecture for visual fall detection that exploits pixel locality to reduce memory accesses is proposed. By exploiting the parallel and pipelined architecture of the FPGA, our hardware implementation achieves a performance of 60 fps for a series of video analytic functions at VGA resolution (640x480). The results of this work show that FPGAs have great potential for enabling large scale vision systems in the future healthcare industry, owing to their flexibility and scalability.
Keywords: Fall detection, FPGA, hardware implementation.
4530 Nonlinear Transformation of Laser Generated Ultrasonic Pulses in Geomaterials
Authors: Elena B. Cherepetskaya, Alexander A. Karabutov, Natalia B. Podymova, Ivan Sas
Abstract:
Nonlinear evolution of broadband ultrasonic pulses passed through rock specimens is studied using the apparatus "GEOSCAN-02M". The ultrasonic pulses are excited by pulses of a Q-switched Nd:YAG laser with a duration of 10 ns and an energy of 260 mJ; this energy can be reduced to 20 mJ by optical filters. The laser beam radius does not exceed 5 mm. As a result of the absorption of the laser pulse in a special material, the optoacoustic generator, pulses of longitudinal ultrasonic waves are excited with a duration of 100 ns and a maximum pressure amplitude of 10 MPa. The immersion technique is used to measure the parameters of these ultrasonic pulses after passing through a specimen; the immersion liquid is distilled water. The reference pulse passed through the cell with water has compression and rarefaction phases, with the amplitude of the rarefaction phase five times lower than that of the compression phase. The spectral range of the reference pulse reaches 10 MHz. Cube-shaped specimens of Karelian gabbro with an edge length of 3 cm are studied; their ultimate strength under uniaxial compression is (300±10) MPa. As the reference pulse passes through a region of the specimen without cracks, the compression phase decreases and the rarefaction phase increases due to diffraction and scattering of ultrasound, so the ratio of these phases becomes 2.3:1. After preloading, some horizontal cracks appear in the specimens. Their location is found by one-sided scanning of the specimen using backward-mode detection of the ultrasonic pulses reflected from structural defects; computer processing of these signals yields images of the cross-sections of the specimens with cracks. As the reference pulse amplitude increases from 0.1 MPa to 5 MPa, the nonlinear transformation of the ultrasonic pulse passed through the specimen with horizontal cracks decreases the amplitude of the rarefaction phase by a factor of 2.5 and increases its duration by a factor of 2.1. As the amplitude increases from 5 MPa to 10 MPa, time splitting of the phases is observed for the bipolar pulse passed through the specimen: the compression and rarefaction phases propagate with different velocities. These features of powerful broadband ultrasonic pulses passed through rock specimens can be described by the Preisach-Mayergoyz hysteresis model and can be used for locating cracks in optically opaque materials.
Keywords: Cracks, geological materials, nonlinear evolution of ultrasonic pulses, rock.
4529 An Experimental Study on the Optimum Installation of Fire Detector for Early Stage Fire Detecting in Rack-Type Warehouses
Authors: Ki Ok Choi, Sung Ho Hong, Dong Suck Kim, Don Mook Choi
Abstract:
Rack type warehouses differ from general buildings in the kinds, amount, and arrangement of stored goods, so their fire risk differs from that of those buildings. Their fire patterns also differ, depending on the combustion characteristics and storage conditions of the stored goods. The initial burning rate depends on the surface condition of the materials, while the spread of fire is closely related to the kinds of stored materials and storage conditions. The stored goods of a warehouse consist of diverse combustibles, combustible liquids, and so on. Fire detection may be delayed because there are fewer occupants than in office and commercial buildings. If the fire detectors installed in rack type warehouses are unsuitable, a warehouse fire may grow into a major fire because of delayed detection. In this paper, we studied which kinds of fire detectors are best suited for early detection of rack type warehouse fires through real-scale fire tests. The fire detectors used in the tests were rate of rise type, fixed type, photoelectric type, and aspirating type detectors. Based on the response characteristics and a comparative analysis of these detectors, we suggest an optimum fire detection method for rack type warehouses.
Keywords: Fire detector, rack, response characteristic, warehouse.
4528 A Neural Network Control for Voltage Balancing in Three-Phase Electric Power System
Authors: Dana M. Ragab, Jasim A. Ghaeb
Abstract:
The three-phase power system suffers from several challenging problems, e.g. voltage unbalance conditions at the load side. Voltage unbalance usually degrades the power quality of the electric power system. Several techniques can be considered for load balancing, including load reconfiguration, the static synchronous compensator and the static reactive power compensator. In this work an efficient neural network is designed to control the unbalanced condition in the Aqaba-Qatrana-South Amman (AQSA) electric power system, targeting a highly enhanced response time of the reactive compensator for voltage balancing. The neural network determines the appropriate set of firing angles required for the thyristor-controlled reactor to balance the three load voltages accurately and quickly. The parameters of the AQSA power system are considered in the laboratory model, and several test cases were conducted to validate the proposed technique's capabilities. The results show a high performance of the proposed Neural Network Control (NNC) technique in correcting voltage unbalance conditions at the three-phase load, in terms of both accuracy and response time.
Keywords: Three-phase power system, reactive power control, voltage unbalance factor, neural network, power quality.
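One standard quantity behind such work is the voltage unbalance factor, the ratio of negative- to positive-sequence voltage magnitudes from the Fortescue transform. A minimal sketch with illustrative phasors, not measurements from the AQSA model:

```python
import numpy as np

# VUF = |V_negative| / |V_positive| * 100%, via symmetrical components.
a = np.exp(2j * np.pi / 3)                        # 120-degree rotation operator
Va = 230.0 * np.exp(1j * 0.0)                     # illustrative phase voltages
Vb = 224.0 * np.exp(-1j * 2 * np.pi / 3)
Vc = 236.0 * np.exp(1j * 2 * np.pi / 3)

V_pos = (Va + a * Vb + a**2 * Vc) / 3.0           # positive-sequence component
V_neg = (Va + a**2 * Vb + a * Vc) / 3.0           # negative-sequence component
print(f"VUF = {100 * abs(V_neg) / abs(V_pos):.2f} %")
```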
4527 Data Recording for Remote Monitoring of Autonomous Vehicles
Authors: Rong-Terng Juang
Abstract:
Autonomous vehicles offer the possibility of significant benefits to social welfare. However, fully automated cars may not appear in the near future. To speed the adoption of self-driving technologies, many governments worldwide are passing laws requiring data recorders for the testing of autonomous vehicles. Currently, a self-driving vehicle (e.g., a shuttle bus) has to be monitored from a remote control center. When an autonomous vehicle encounters an unexpected driving environment, such as road construction or an obstruction, it should request assistance from a remote operator. Nevertheless, large amounts of data, including images, radar and lidar data, etc., have to be transmitted from the vehicle to the remote center. Therefore, this paper proposes a data compression method for in-vehicle networks used in remote monitoring of autonomous vehicles. Firstly, the time-series data are rearranged into a multi-dimensional signal space. For controller area networks (CAN), newly arrived data are mapped onto a time-data two-dimensional space associated with the specific CAN identity. Secondly, the data are sampled based on differential sampling. Finally, the whole data set is encoded using existing algorithms such as Huffman, arithmetic and codebook encoding methods. To evaluate system performance, the proposed method was deployed on an in-house built autonomous vehicle. The testing results show that the amount of data can be reduced to as little as 1/7 of the raw data.
Keywords: Autonomous vehicle, data recording, remote monitoring, controller area network.
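A minimal sketch of the abstract's pipeline for one CAN identity: differential sampling (byte deltas between successive frames) followed by Huffman coding; the frame payloads are made-up examples, not recorded vehicle data:

```python
import heapq
from collections import Counter

# Made-up consecutive CAN payloads for a single CAN identity.
frames = [b"\x10\x20\x30", b"\x11\x20\x30", b"\x11\x21\x2f", b"\x12\x21\x2f"]
deltas = [frames[0]] + [
    bytes((b - a) % 256 for a, b in zip(p, c))    # byte-wise differences
    for p, c in zip(frames, frames[1:])
]
stream = b"".join(deltas)                          # deltas cluster near zero

def huffman_lengths(data):
    # Build a Huffman tree and return the code length of each byte value.
    heap = [[w, [sym, 0]] for sym, w in Counter(data).items()]
    heapq.heapify(heap)
    while len(heap) > 1:
        lo, hi = heapq.heappop(heap), heapq.heappop(heap)
        for pair in lo[1:] + hi[1:]:
            pair[1] += 1                           # one level deeper in the tree
        heapq.heappush(heap, [lo[0] + hi[0]] + lo[1:] + hi[1:])
    return {sym: max(bits, 1) for sym, bits in heap[0][1:]}

lengths = huffman_lengths(stream)
bits = sum(lengths[b] for b in stream)
print(f"{bits} bits coded vs {8 * len(stream)} bits raw")
```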
4526 Effects of Various Wavelet Transforms in Dynamic Analysis of Structures
Authors: Seyed Sadegh Naseralavi, Sadegh Balaghi, Ehsan Khojastehfar
Abstract:
Time history dynamic analysis of structures is considered an exact method but is computationally intensive. Filtering earthquake strong ground motions with the wavelet transform is an approach to reducing computational effort, particularly in the optimization of structures against seismic effects. Wavelet transforms are categorized into continuous and discrete transforms. Since an earthquake strong ground motion record is a discrete function, the discrete wavelet transform is applied in the present paper. The wavelet transform reduces analysis time by filtering out non-effective frequencies of the strong ground motion. The filtration process may be repeated several times, although each repetition introduces additional approximation error. In this paper, the strong ground motion is filtered once with each wavelet. The strong ground motion of the Northridge earthquake is filtered using various wavelets, and dynamic analysis of sample shear and moment frames is carried out. The error associated with each wavelet is computed by comparing the dynamic responses of the sample structures with the exact responses, which are obtained by dynamic analysis of the structures under the non-filtered strong ground motion.
Keywords: Wavelet transform, computational error, computational duration, strong ground motion data.
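A minimal sketch of one filtration pass with the discrete wavelet transform: decompose a record, zero the finest detail levels, and reconstruct. The synthetic signal stands in for the Northridge accelerogram, and PyWavelets is one common implementation:

```python
import numpy as np
import pywt

# Synthetic "ground motion": a low-frequency component plus noise.
t = np.linspace(0, 20, 2000)
rng = np.random.default_rng(0)
record = np.sin(2 * np.pi * 1.5 * t) + 0.3 * rng.standard_normal(t.size)

coeffs = pywt.wavedec(record, "db4", level=4)   # multilevel DWT decomposition
coeffs[-1] = np.zeros_like(coeffs[-1])          # drop finest-detail coefficients
coeffs[-2] = np.zeros_like(coeffs[-2])          # and the next level down
filtered = pywt.waverec(coeffs, "db4")          # reconstruct filtered record
print(record.shape, filtered.shape)
```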
4525 Multi-Factor Optimization Method through Machine Learning in Building Envelope Design: Focusing on Perforated Metal Façade
Authors: Jinwooung Kim, Jae-Hwan Jung, Seong-Jun Kim, Sung-Ah Kim
Abstract:
Because the building envelope has a significant impact on the operation and maintenance stage of a building, designing the façade with performance in mind can improve the building's performance and lower its maintenance cost. In general, however, optimizing two or more performance factors confronts the limits of time and computational tools. The optimization phase typically repeats the processes that generate alternatives and analyze them until the desired performance is achieved. In particular, as geometric complexity or precision increases, the computational resources and time needed to reach the required performance become prohibitive, so an optimization methodology is needed to deal with this. Instead of directly analyzing all alternatives in the optimization process, applying heuristics learned through experimentation and experience can reduce wasted resources. This study proposes and verifies a method to optimize the double envelope of a building composed of perforated panels by applying machine learning to the design geometry and quantitative performance. The proposed method achieves the required performance with fewer resources by supplementing the existing method, which cannot calculate the complex shape of the perforated panel.
Keywords: Building envelope, machine learning, perforated metal, multi-factor optimization, façade.
4524 Suppression of Narrowband Interference in Impulse Radio Based High Data Rate UWB WPAN Communication System Using NLOS Channel Model
Authors: Bikramaditya Das, Susmita Das
Abstract:
The suppression of narrowband interference by time domain equalizers is studied for a high data rate impulse radio (IR) ultra wideband communication system. Narrowband systems may interfere with UWB devices, which operate with very low transmission power over a large bandwidth. An SRake receiver improves system performance by equalizing signals from different paths, which makes SRake techniques attractive for IR-UWB systems; however, a Rake receiver alone fails to suppress narrowband interference (NBI). A hybrid SRake-MMSE time domain equalizer is proposed to overcome this, taking into account the effects of both the number of rake fingers and the number of equalizer taps; it also combats intersymbol interference. A semi-analytical approach and Monte Carlo simulation are used to investigate the BER performance of the SRake-MMSE receiver on IEEE 802.15.3a UWB channel models. The study of non-line-of-sight indoor channel models (both CM3 and CM4) illustrates that the bit error rate performance of the SRake-MMSE receiver with NBI is better than that of the Rake receiver without NBI. We show that for an MMSE equalizer operating at high SNRs, the number of equalizer taps plays the more significant role in suppressing interference.
Keywords: IR-UWB, UWB, IEEE 802.15.3a, NBI, data rate, bit error rate.
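The MMSE tap computation behind such an equalizer is a linear solve, w = R⁻¹p, with R the received-signal autocorrelation matrix and p the cross-correlation with the desired symbol. A sketch with a toy three-tap channel, not an IEEE 802.15.3a CM3/CM4 realization:

```python
import numpy as np

h = np.array([1.0, 0.5, 0.2])                   # toy multipath channel taps
sigma2 = 0.05                                   # noise variance
L_taps, delay = 7, 3                            # equalizer length, decision delay

# Channel convolution matrix: row i holds h shifted to position i.
H = np.zeros((L_taps, L_taps + len(h) - 1))
for i in range(L_taps):
    H[i, i:i + len(h)] = h

R = H @ H.T + sigma2 * np.eye(L_taps)           # received autocorrelation
p = H[:, delay]                                 # cross-correlation with desired symbol
w = np.linalg.solve(R, p)                       # MMSE equalizer taps
print(np.round(w, 3))
```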
4523 Two Scenarios for Ultra-Light Overhead Conveyor System in Logistics Applications
Authors: Batin Latif Aylak, Bernd Noche
Abstract:
Overhead conveyor systems are in use in many installations around the world, meeting the widest possible range of applications. They are particularly preferred in the automotive industry but are also used at post offices. Overhead conveyor systems must always be integrated into a logistical process by finding the best way to achieve cheaper material flow, in order to guarantee precise and fast workflows. With their help, transport can take place without wasting floor space or company capacity, without lost or damaged products, erroneous deliveries or endless travel, and without wasting time. Ultra-light overhead conveyor systems are rope-based conveying systems with individually driven vehicles; the vehicles can move automatically along the rope, which supplies them with energy and signals, and crossings are realized by switches. Ultra-light overhead conveyor systems provide optimal material flow, which produces profit and saves time. This article introduces two new ultra-light overhead conveyor designs for logistics and explains their components. Based on the components' technical characteristics, scenarios are created and visualized with CAD software. Assumptions are then made for the application area, and the scenarios are visualized under these assumptions. These scenarios help logistics companies achieve lower development costs as well as quicker market maturity.
Keywords: Logistics, material flow, overhead conveyor.
4522 Jeffrey's Prior for Unknown Sinusoidal Noise Model via Cramer-Rao Lower Bound
Authors: Samuel A. Phillips, Emmanuel A. Ayanlowo, Rasaki O. Olanrewaju, Olayode Fatoki
Abstract:
This paper employs the Jeffrey's prior technique for estimating the periodograms and frequency of a sinusoidal model for unknown noisy time-varying or oscillating events (data) in a Bayesian setting. The non-informative Jeffrey's prior was adopted for the posterior trigonometric function of the sinusoidal model, such that Cramer-Rao Lower Bound (CRLB) inference was used to carve out the minimum variance needed to curb the invariance-structure effect for unknown noisy repeated observational and circular patterns. An average monthly oscillating temperature series measured in degrees Celsius (°C) from 1901 to 2014 was subjected to the posterior solution of the unknown noisy events of the sinusoidal model via Markov Chain Monte Carlo (MCMC). It was deduced not only that a two-minute period is required to complete a cycle of temperature change from one particular degree Celsius to another, but also that the sinusoidal model via the CRLB-Jeffrey's prior for unknown noisy events produced a much smaller posterior Maximum A Posteriori (MAP) estimate compared to known noisy events.
Keywords: Cramer-Rao Lower Bound (CRLB), Jeffrey's prior, Sinusoidal, Maximum A Posteriori (MAP), Markov Chain Monte Carlo (MCMC), Periodograms.
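A classical non-Bayesian counterpart of the frequency estimate is the periodogram maximizer. A minimal sketch on a synthetic monthly temperature series (the Jeffrey's-prior and MCMC machinery of the paper is not reproduced here):

```python
import numpy as np

# Synthetic stand-in for the 1901-2014 monthly temperature record.
n = 12 * 114                                       # monthly samples, 114 years
t = np.arange(n)
rng = np.random.default_rng(4)
temps = 20 + 5 * np.sin(2 * np.pi * t / 12) + rng.standard_normal(n)

freqs = np.fft.rfftfreq(n, d=1.0)                  # cycles per month
power = np.abs(np.fft.rfft(temps - temps.mean())) ** 2 / n
f_hat = freqs[np.argmax(power)]                    # periodogram maximizer
print(f"estimated period: {1 / f_hat:.2f} months") # expect ~12
```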
4521 Malware Beaconing Detection by Mining Large-scale DNS Logs for Targeted Attack Identification
Authors: Andrii Shalaginov, Katrin Franke, Xiongwei Huang
Abstract:
One of the leading problems in cyber security today is the emergence of targeted attacks conducted by adversaries with access to sophisticated tools. These attacks usually steal senior-level employees' system privileges in order to gain unauthorized access to confidential knowledge and valuable intellectual property. Malware used for the initial compromise of the systems is sophisticated and may target zero-day vulnerabilities. In this work we utilize a common behaviour of malware called "beaconing", which implies that infected hosts communicate with Command and Control servers at regular intervals with relatively small time variations. By analysing such beacon activity through passive network monitoring, it is possible to detect potential malware infections. We therefore focus on time gaps as indicators of possible C2 activity in targeted enterprise networks. We represent DNS log files as a graph whose vertices are destination domains and whose edges are timestamps. Then, by applying four periodicity detection algorithms to each pair of internal-external communications, we check the timestamp sequences to identify beaconing activity. Finally, based on the graph structure, we infer the existence of other infected hosts and malicious domains enrolled in the attack activities.
Keywords: Malware detection, network security, targeted attack.
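One simple periodicity test in the spirit of the abstract: for a given internal-external pair, beacon-like traffic has inter-arrival gaps with small relative variation. A minimal sketch with fabricated timestamps; real input would come from parsed DNS logs:

```python
import numpy as np

def looks_like_beacon(timestamps, max_rel_std=0.1, min_queries=5):
    # Beacon heuristic: many queries whose gaps have low relative std dev.
    ts = np.sort(np.asarray(timestamps, dtype=float))
    if ts.size < min_queries:
        return False
    gaps = np.diff(ts)
    return gaps.std() / gaps.mean() < max_rel_std

periodic = [0, 300, 602, 899, 1201, 1498]   # ~5-minute beacon with jitter (s)
human = [0, 40, 340, 360, 980, 1500]        # irregular, human-driven lookups
print(looks_like_beacon(periodic), looks_like_beacon(human))
```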
4520 Automation of the Maritime UAV Command, Control, Navigation Operations, Simulated in Real-Time Using Kinect Sensor: A Feasibility Study
Authors: Regius Asiimwe, Amir Anvar
Abstract:
This paper describes the process used to automate Maritime UAV commands using the Kinect sensor. The AR Drone is a quadrocopter manufactured by Parrot [1], designed to be controlled using Apple operating system devices such as iPhones and iPads. However, this project uses the Microsoft Kinect SDK and Microsoft Visual Studio C# (C sharp), which are compatible with the Windows operating system, to automate the navigation and control of the AR drone. The navigation and control software for the quadrocopter runs on a Windows 7 computer. The project is divided into two sections: the quadrocopter control system and the Kinect sensor control system. The Kinect sensor is connected to the computer using a USB cable, through which commands can be sent to and from the Kinect sensor. The AR drone has Wi-Fi capability, through which it can be connected to the computer to enable the transfer of commands to and from the quadrocopter. The project was implemented in C#, a programming language commonly used in automation systems, chosen because more libraries are already established in C# for both the AR drone and the Kinect sensor. The study will contribute to research on automation of systems using the quadrocopter and the Kinect sensor for navigation with a human operator in the loop. The prototype created has numerous applications, including the inspection of vessels such as ships and airplanes, and of areas not accessible to human operators.
Keywords: UAV, AR drone, Kinect Sensors, Automation, Real time, C sharp, Microsoft Kinect SDK.
4519 Comparison of Finite Difference Schemes for Water Flow in Unsaturated Soils
Authors: H. Taheri Shahraiyni, B. Ataie Ashtiani
Abstract:
Flow movement in unsaturated soil can be expressed by a partial differential equation named the Richards equation. The objective of this study is to find an appropriate implicit numerical solution for the head-based Richards equation. Some well-known finite difference schemes (fully implicit, Crank-Nicolson and Runge-Kutta) are utilized in this study. In addition, the effects of different approximations of the moisture capacity function, convergence criteria and time stepping methods are evaluated. Two different infiltration problems, involving vertical water flow in a wet soil and in a very dry soil, are solved to investigate the performance of the schemes. The numerical solutions of the two problems are compared using four evaluation criteria, and the comparisons show that the fully implicit scheme performs better than the other schemes. In addition, using the standard chord slope method to approximate the moisture capacity function, an automatic time stepping method, and the difference between two successive iterations as the convergence criterion in the fully implicit scheme leads to better and more reliable results for simulating fluid movement in different unsaturated soils.
Keywords: Finite difference methods, Richards equation, fully implicit, Crank-Nicolson, Runge-Kutta.
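As a minimal illustration of the fully implicit idea, the sketch below time-steps a linearized, head-based Richards-type equation C·dh/dt = d/dz(K·dh/dz), with the capacity C and conductivity K frozen at constants (one Picard step standing in for the chord-slope update); grid, properties, and boundary heads are illustrative:

```python
import numpy as np

nz, dz, dt = 51, 0.02, 10.0                 # nodes, m, s
K, C = 1e-6, 1e-3                           # conductivity (m/s), capacity (1/m)
h = np.full(nz, -10.0)                      # initially dry profile (m of head)
h[0] = 0.0                                  # ponded surface boundary

# Assemble the fully implicit (backward Euler) system (I + 2r, -r, -r).
r = K * dt / (C * dz**2)
A = np.zeros((nz, nz))
np.fill_diagonal(A, 1 + 2 * r)
A[np.arange(1, nz), np.arange(nz - 1)] = -r      # sub-diagonal
A[np.arange(nz - 1), np.arange(1, nz)] = -r      # super-diagonal
A[0, :], A[-1, :] = 0, 0
A[0, 0] = A[-1, -1] = 1                     # Dirichlet boundary rows

for _ in range(100):                        # march 100 implicit time steps
    b = h.copy()
    b[0], b[-1] = 0.0, -10.0                # re-impose boundary heads
    h = np.linalg.solve(A, b)
print(h[:5].round(3))                       # wetting front advancing downward
```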
4518 Overview of Multi-Chip Alternatives for 2.5D and 3D Integrated Circuit Packagings
Authors: Ching-Feng Chen, Ching-Chih Tsai
Abstract:
With transistor size gradually approaching the physical limit, the persistence of Moore's Law is challenged by issues such as the short channel effect and the development of high numerical aperture (NA) lithography equipment. In the context of the ever-increasing technical requirements of portable devices and high-performance computing (HPC), relying on the law's continuation to increase chip density will no longer support the prospects of the electronics industry. Weighing a chip's power consumption-performance-area-cost-cycle time to market (PPACC) is an updated benchmark driving the evolution of advanced process nodes (nm). The advent of two-and-a-half- and three-dimensional (2.5D and 3D) Very-Large-Scale Integration (VLSI) packaging based on Through Silicon Via (TSV) technology has updated traditional die assembly methods and provided a solution. This overview investigates up-to-date and cutting-edge packaging technologies for 2.5D and 3D integrated circuits (IC) based on updated transistor structures and technology nodes. We conclude that multi-chip solutions for 2.5D and 3D IC packaging can prolong Moore's Law.
Keywords: Moore's Law, High Numerical Aperture, Power Consumption-Performance-Area-Cost-Cycle Time to Market (PPACC), 2.5D and 3D Very-Large-Scale Integration Packaging, Through Silicon Via.
4517 Comparison of Number of Waves Surfed and Duration Using Global Positioning System and Inertial Sensors
Authors: J. Madureira, R. Lagido, I. Sousa
Abstract:
Surfing is an increasingly popular sport, and its performance evaluation is often qualitative. This work aims to use a smartphone to collect and analyze GPS and inertial sensor data in order to obtain quantitative metrics of surfing performance. Two approaches are compared for detecting wave rides and computing the number of waves ridden in a surfing session, the starting time of each wave, and its duration. The first approach computes the velocity from the Global Positioning System (GPS) signal and finds the velocity thresholds that identify the start and end of each wave ride. The second approach adds information from the smartphone's Inertial Measurement Unit (IMU) to the velocity thresholds obtained from the GPS unit to determine the start and end of each wave ride. The two methods were evaluated using GPS and IMU data from two surfing sessions and validated against similar metrics extracted from video recorded from the beach. The second method, combining GPS and IMU data, was found to be more accurate in determining the number of waves, their start times, and durations. This paper shows that it is feasible to use smartphones to quantify performance metrics during surfing; in particular, the waves ridden and their durations can be accurately determined using the smartphone's GPS and IMU.
Keywords: Inertial Measurement Unit (IMU), Global Positioning System (GPS), smartphone, surfing performance.
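A minimal sketch of the first approach: derive speed from GPS fixes and mark a wave ride wherever the speed crosses on/off thresholds for long enough. The thresholds and the sample track are illustrative, not the values tuned in the study:

```python
import numpy as np

def detect_rides(speed_ms, t_s, v_on=2.5, v_off=1.5, min_dur=3.0):
    # Hysteresis thresholding: ride starts above v_on, ends below v_off.
    rides, start = [], None
    for v, t in zip(speed_ms, t_s):
        if start is None and v >= v_on:
            start = t                              # ride begins
        elif start is not None and v < v_off:
            if t - start >= min_dur:
                rides.append((start, t - start))   # (start time, duration)
            start = None
    return rides

t = np.arange(0, 60, 1.0)                          # 1 Hz GPS-derived speed
v = np.where((t > 10) & (t < 22), 4.0, 0.8)        # one ~11-second ride
print(detect_rides(v, t))
```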
4516 Material Analysis for Temple Painting Conservation in Taiwan
Authors: Chen-Fu Wang, Lin-Ya Kung
Abstract:
For traditional paintings, artisans combined pigments with different binders to create colors. Over time, painting materials evolved from natural to chemical ones. The vast variety of ingredients used in chemical materials has complicated restoration work and makes conservation more difficult. Conservation also becomes harder when the materials cannot be easily identified; therefore, it is essential to take a more scientific approach to assist conservation work. Painting materials are high molecular weight polymers, so their analysis is very complicated, and contamination such as smoke and dirt can also interfere with the analysis. Current methods for compositional analysis of painting materials include Fourier transform infrared spectroscopy (FT-IR), mass spectrometry, Raman spectroscopy, and X-ray diffraction spectroscopy (XRD), each of which has its own limitations. In this study, FT-IR was used to analyze the components of the paint coating. We took the most commonly seen materials as samples and artificially aged them; the aging information was then used to build a database for examining temple painting materials. By observing the FT-IR changes over time, we found that all of the painting materials deteriorate under UV light, differing only in the speed of degradation. In the deterioration experiment, acrylic resin resisted better than the others. After collecting the aging information on FT-IR, we performed tests on the paintings in the temples. It was found that most artisans used tung oil as the painting medium, while some other paintings used chemical materials. This method now works successfully for identifying painting materials; however, it is destructive and costly. In the future, we will work on identifying painting materials more efficiently.
Keywords: Temple painting, painting material, conservation, FT-IR.
4515 Can Exams Be Shortened? Using a New Empirical Approach to Test in Finance Courses
Authors: Eric S. Lee, Connie Bygrave, Jordan Mahar, Naina Garg, Suzanne Cottreau
Abstract:
Marking exams is universally detested by lecturers. Final exams in many higher education courses often last 3.0 hours. Do exams really need to be so long? Can we justifiably reduce the number of questions on them? Surprisingly few researchers have examined these questions, arguably because of the complexity and difficulty of using traditional methods. To answer these questions empirically, we used a new approach based on three key elements: an unusual variation of a true experimental design, equivalence hypothesis testing, and an expanded set of six psychometric criteria to be met by any shortened exam if it is to replace a current 3.0-hr exam (reliability, validity, justifiability, number of exam questions, correspondence, and equivalence). We compared student performance on each official 3.0-hr exam with performance on five shortened exams having proportionately fewer questions (2.5, 2.0, 1.5, 1.0, and 0.5 hours) in a series of four experiments conducted in two classes in each of two finance courses (224 students in total). We found strong evidence that, in these courses, shortening final exams to 2.0 hours was warranted on all six psychometric criteria. Shortening these exams by one hour should produce a substantial one-third reduction in lecturer marking time and effort, lower student stress, and more time for students to prepare for other exams. Our approach provides a relatively simple, easy-to-use methodology that lecturers can use to examine the effect of shortening their own exams.
Keywords: Exam length, psychometric criteria, synthetic experimental designs, test length.
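Equivalence hypothesis testing is commonly operationalized as two one-sided t-tests (TOST). A minimal paired-sample sketch with made-up scores and margin, not the study's data or its full six-criteria procedure:

```python
import numpy as np
from scipy import stats

def tost(x, y, delta):
    # Paired TOST: "equivalent" if the mean difference lies within +/- delta.
    diff = np.asarray(x) - np.asarray(y)
    n, m, s = diff.size, diff.mean(), diff.std(ddof=1)
    se = s / np.sqrt(n)
    t_lower = (m + delta) / se                 # H0: mean diff <= -delta
    t_upper = (m - delta) / se                 # H0: mean diff >= +delta
    p_lower = 1 - stats.t.cdf(t_lower, df=n - 1)
    p_upper = stats.t.cdf(t_upper, df=n - 1)
    return max(p_lower, p_upper)               # reject both to claim equivalence

rng = np.random.default_rng(2)
full = rng.normal(70, 10, 60)                  # fabricated 3.0-hr exam scores
short = full + rng.normal(0.5, 3, 60)          # nearly identical performance
print(f"TOST p-value: {tost(short, full, delta=5.0):.4f}")
```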
4514 Defining a Semantic Web-based Framework for Enabling Automatic Reasoning on CIM-based Management Platforms
Authors: Fernando Alonso, Rafael Fernandez, Sonia Frutos, Javier Soriano
Abstract:
CIM is the standard formalism for modeling management information, developed by the Distributed Management Task Force (DMTF) in the context of its WBEM proposal and designed to provide a conceptual view of the managed environment. In this paper, we propose the inclusion of formal knowledge representation techniques, based on Description Logics (DLs) and the Web Ontology Language (OWL), in CIM-based conceptual modeling, and we examine the benefits of such a decision. The proposal is specified as a mapping at the CIM metamodel level to a highly expressive subset of DLs capable of capturing all the semantics of the models. The paper shows how the proposed mapping gives CIM diagrams precise semantics and can be used for automatic reasoning about management information models, as a design aid, by means of new-generation CASE tools, thanks to the use of state-of-the-art automatic reasoning systems that support the proposed logic and use algorithms that are sound and complete with respect to the semantics. Such a CASE tool framework has been developed by the authors, and its architecture is also introduced. The proposed formalization is useful not only at design time but also at run time, through the use of rational autonomous agents, in response to a need recently recognized by the DMTF.
Keywords: CIM, Knowledge-based Information Models, Ontology Languages, OWL, Description Logics, Integrated Network Management, Intelligent Agents, Automatic Reasoning Techniques.
4513 Prediction on Housing Price Based on Deep Learning
Authors: Li Yu, Chenlu Jiao, Hongrun Xin, Yan Wang, Kaiyang Wang
Abstract:
In order to study the impact of various factors on housing prices, we propose building different prediction models based on deep learning, using existing real estate data to more accurately predict housing prices or their future trends. Considering that the factors which affect housing prices vary widely, the proposed prediction models fall into two categories. The first is based on multiple characteristic factors of the real estate. We built a Convolutional Neural Network (CNN) prediction model and a Long Short-Term Memory (LSTM) neural network prediction model based on deep learning, and a logistic regression model was implemented for comparison among the three. The other category is time series models. Based on deep learning, we proposed an LSTM-1 model built purely on the time series, then implemented and compared it with the Auto-Regressive and Moving Average (ARMA) model. In this paper, a comprehensive study of second-hand housing prices in Beijing was conducted from three aspects: data crawling and analysis, housing price prediction, and result comparison. Ultimately, the best model was produced, which is of great significance to the evaluation and prediction of housing prices in the real estate industry.
Keywords: Deep learning, convolutional neural network, LSTM, housing prediction.
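A minimal sketch in the spirit of the abstract's LSTM-1 time-series model: an LSTM forecasting the next month's average price from the previous 12 months; the price series is synthetic, not the Beijing second-hand data:

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
series = torch.cumsum(torch.randn(200), dim=0) + 100.0   # fake monthly prices
X = torch.stack([series[i:i + 12] for i in range(188)]).unsqueeze(-1)
y = series[12:].unsqueeze(-1)                            # next-month targets

class PriceLSTM(nn.Module):
    def __init__(self):
        super().__init__()
        self.lstm = nn.LSTM(input_size=1, hidden_size=32, batch_first=True)
        self.head = nn.Linear(32, 1)
    def forward(self, x):
        out, _ = self.lstm(x)
        return self.head(out[:, -1])          # predict from the last time step

model = PriceLSTM()
opt = torch.optim.Adam(model.parameters(), lr=1e-2)
for _ in range(200):
    opt.zero_grad()
    loss = nn.functional.mse_loss(model(X), y)
    loss.backward()
    opt.step()
print(f"final training MSE: {loss.item():.3f}")
```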
4512 A Review on Factors Influencing Implementation of Secure Software Development Practices
Authors: Sri Lakshmi Kanniah, Mohd Naz’ri Mahrin
Abstract:
More and more businesses and services depend on software to run their daily operations and business services. At the same time, cyber-attacks are becoming more covert and sophisticated, posing threats to software. Vulnerabilities exist in software due to the lack of security practices during the phases of software development. Implementation of secure software development practices can improve resistance to attacks. Many methods, models and standards for secure software development have been developed. However, despite these efforts, they still encounter difficulties in deployment, and the processes are not institutionalized. A set of factors influences the successful deployment of secure software development processes. In this study, the methodology and results of a systematic literature review of factors influencing the implementation of secure software development practices are described. A total of 44 primary studies were analysed as a result of the systematic review, and a list of twenty factors was identified. Some of the factors that affect the implementation of secure software development practices are: involvement of a security expert, integration between security and development teams, developers' skill and expertise, development time, and communication between stakeholders. The factors were further classified into four categories: institutional context, people and action, project content, and system development process. The results show that it is important to take organizational, technical and people issues into account in order to implement secure software development initiatives.
Keywords: Secure software development, software development, software security, systematic literature review.