Search results for: accurate shape of cardiac action potential.
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 1776


246 Substantial Fatigue Similarity of a New Small-Scale Test Rig to Actual Wheel-Rail System

Authors: Meysam Naeimi, Zili Li, Roumen Petrov, Rolf Dollevoet, Jilt Sietsma, Jun Wu

Abstract:

The substantial similarity of the fatigue mechanism in a new test rig for rolling contact fatigue (RCF) to that of the actual wheel-rail system has been investigated. A new reduced-scale test rig is designed to perform controlled RCF tests on wheel-rail materials. The fatigue mechanism of the rig is evaluated in this study using a combined finite element-fatigue prediction approach. The influences of loading conditions on fatigue crack initiation have been studied. Furthermore, the effects of some artificial defects (squat-shaped) on fatigue lives are examined. To simulate the vehicle-track interaction by means of the test rig, a three-dimensional finite element (FE) model is built up. The nonlinear material behaviour of the rail steel is modelled in the contact interface. The results of FE simulations are combined with the critical plane concept to determine the material points with the greatest possibility of fatigue failure. Based on the stress-strain responses, by employing previously postulated criteria for fatigue crack initiation (plastic shakedown and ratchetting), fatigue life analysis is carried out. The results are reported for various loading conditions and different defect sizes. Afterwards, the cyclic mechanism of the test rig is evaluated from the operational viewpoint. The results of fatigue life predictions are compared with the expected number of cycles of the test rig given its cyclic nature. Finally, the estimated duration of the experiments until fatigue crack initiation is roughly determined.

Keywords: Fatigue, test rig, crack initiation, life, rail, squats.

245 An Immersive Serious Game for Firefighting and Evacuation Training in Healthcare Facilities

Authors: Anass Rahouti, Guillaume Salze, Ruggiero Lovreglio, Sélim Datoussaïd

Abstract:

In healthcare facilities, training the staff for firefighting and evacuation in real buildings is very challenging due to the presence of a vulnerable population in such an environment. In a standard environment, traditional approaches, such as fire drills, are often used to train the occupants and provide them with information about fire safety procedures. However, those traditional approaches may be inappropriate for a vulnerable population and can be inefficient from an educational viewpoint as it is impossible to expose the occupants to scenarios similar to a real emergency. Immersive serious games could be used as an alternative to traditional approaches to overcome their limitations. Serious games are already being used in different safety domains such as fires, earthquakes and terror attacks for several building types (e.g., office buildings, train stations, tunnels, etc.). In this study, we developed an immersive serious game to improve the fire safety skills of staff in healthcare facilities. An accurate representation of the healthcare environment was built in Unity3D by including visual and audio stimuli inspired by those employed in commercial action games. The serious game is organised in three levels. In each of them, the trainee is presented with a specific fire emergency and s/he can perform protective actions (e.g., firefighting, helping non-ambulant occupants, etc.) or s/he can ignore the opportunity for action and continue the evacuation. In this paper, we describe all the steps required to develop such a prototype, as well as the key questions that need to be answered when developing a serious game for firefighting and evacuation in healthcare facilities.

Keywords: Fire Safety, healthcare, serious game, training.

244 A Metric-Set and Model Suggestion for Better Software Project Cost Estimation

Authors: Murat Ayyıldız, Oya Kalıpsız, Sırma Yavuz

Abstract:

Software project effort estimation is frequently seen as complex and expensive for individual software engineers. Software production is in a crisis: it suffers from excessive costs and is often out of control. It has been suggested that software production is out of control because we do not measure; you cannot control what you cannot measure. During the last decade, a number of studies on cost estimation have been conducted. The metric-set selection has a vital role in software cost estimation studies; its importance has been ignored, especially in neural network based studies. In this study we have explored the reasons for those disappointing results and implemented different neural network models using augmented new metrics. The results obtained are compared with previous studies using traditional metrics. To be able to make comparisons, two types of data have been used. The first part of the data is taken from the Constructive Cost Model (COCOMO'81), which is commonly used in previous studies, and the second part is collected according to new metrics in a leading international company in Turkey. The accuracy of the selected metrics and the data samples is verified using statistical techniques. The model presented here is based on the Multi-Layer Perceptron (MLP). Another difficulty associated with cost estimation studies is the fact that data collection requires time and care. To make a more thorough use of the samples collected, the k-fold cross-validation method is also implemented. It is concluded that, as long as an accurate and quantifiable set of metrics is defined and measured correctly, neural networks can be applied in software cost estimation studies with success.
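
As an illustration of the workflow described above, the following minimal sketch trains an MLP effort estimator and scores it with k-fold cross-validation; scikit-learn's MLPRegressor stands in for the paper's network, and the metric values are synthetic placeholders rather than the COCOMO'81 or company data.

```python
# Minimal sketch: MLP effort estimation with k-fold cross-validation (not the authors' code).
import numpy as np
from sklearn.model_selection import KFold
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
X = rng.random((60, 5))                          # placeholder project metrics (size, drivers, ...)
y = 5 + 10 * X[:, 0] + rng.normal(0, 0.5, 60)    # placeholder actual effort (person-months)

model = make_pipeline(StandardScaler(),
                      MLPRegressor(hidden_layer_sizes=(8,), max_iter=5000, random_state=0))

errors = []
for train_idx, test_idx in KFold(n_splits=5, shuffle=True, random_state=0).split(X):
    model.fit(X[train_idx], y[train_idx])
    pred = model.predict(X[test_idx])
    # Mean magnitude of relative error (MMRE), a common cost-estimation accuracy measure.
    errors.append(np.mean(np.abs(pred - y[test_idx]) / np.abs(y[test_idx])))

print(f"MMRE over 5 folds: {np.mean(errors):.3f}")
```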

Keywords: Software Metrics, Software Cost Estimation, Neural Network.

243 Tool for Analysing the Sensitivity and Tolerance of Mechatronic Systems in Matlab GUI

Authors: Bohuslava Juhasova, Martin Juhas, Renata Masarova, Zuzana Sutova

Abstract:

The article deals with a Matlab GUI tool designed to analyse the sensitivity and tolerance of a mechatronic system. In the analysed mechatronic system, a torque is transferred from the drive to the load through a coupling containing flexible elements. Different methods of control system design are used. The classic form of feedback control is proposed using the Naslin method, the modulus optimum criterion and the inverse dynamics method. The cascade form of control is proposed based on a combination of the modulus optimum criterion and the symmetric optimum criterion. The sensitivity is analysed on the basis of the absolute and relative sensitivity of a system function to the change of a chosen parameter value of the mechatronic system, as well as of the control subsystem. The tolerance is analysed by determining the range of allowed relative changes of selected system parameters within the region of system stability. The tool allows analysis of the influence of torsion stiffness, torsion damping, the inertia moments of the motor and the load, and the controller(s) parameters. The sensitivity and tolerance are monitored in terms of the impact of a parameter change on the response, in the form of the system step response and the system frequency-response logarithmic characteristics. The Symbolic Math Toolbox was used to express the final form of the analysed system functions. The sensitivity and tolerance are graphically represented as a 2D graph of the sensitivity or tolerance of the system function and 3D/2D static/interactive graphs of the step/frequency response.
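
The relative-sensitivity idea can be reduced to a small numerical sketch: perturb one parameter of a simple second-order stand-in model and compare step responses. This is not the authors' Matlab GUI tool; the transfer function and parameter values below are assumptions for illustration.

```python
# Sketch: relative sensitivity of a step response to a torsional-stiffness-like parameter,
# via finite differences on a second-order stand-in model (not the authors' Matlab GUI tool).
import numpy as np
from scipy import signal

def step_response(k, c=2.0, J=1.0, t=np.linspace(0, 10, 1000)):
    # G(s) = k / (J*s^2 + c*s + k): load response to a reference step (illustrative model)
    _, y = signal.step(signal.TransferFunction([k], [J, c, k]), T=t)
    return t, y

k0, dk = 10.0, 0.1
t, y0 = step_response(k0)
_, y1 = step_response(k0 + dk)

# Relative sensitivity S = (dy / y) / (dk / k), evaluated along the response
S = ((y1 - y0) / np.where(np.abs(y0) > 1e-6, y0, np.nan)) / (dk / k0)
print(f"peak |relative sensitivity| of the step response: {np.nanmax(np.abs(S)):.2f}")
```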

Keywords: Mechatronic systems, Matlab GUI, sensitivity, tolerance.

242 Orbit Determination Modeling with Graphical Demonstration

Authors: Assem M. F. Sallam, Ah. El-S. Makled

Abstract:

This paper presents the implementation, verification, and graphical demonstration of a software application that can be used swiftly across different preliminary orbit determination methods. A passive orbit determination method is used in this study to determine the location of a satellite or a flying body. It is named passive orbit determination because it depends on observation without the use of any aids (radio or laser) installed on the satellite. The built models help in understanding the different inputs used with each method, how these methods work, and how accurate their output is when compared with available verification data. Output from the different orbit determination methods (Gibbs, Lambert, and Gauss) is compared across methods and verified with data obtained from the Satellite Tool Kit (STK) application. A modified model including all of the orbit determination methods using the same input is introduced to investigate the different models' output (orbital parameters) for the same input (azimuth, elevation, and time). The simulation software is implemented using MATLAB. A Graphical User Interface (GUI) application named OrDet is produced using the GUI of MATLAB. It includes all the available inputs and outputs the current Classical Orbital Elements (COE) of the satellite under observation. The produced COE are then propagated for a complete revolution and plotted in a 3-D view. The modified model uses an adapter to pass the same input parameters to the preliminary orbit determination methods under study. Results from all orbit determination methods yield exactly the same COE output, which shows the equality of concept in the determination of a satellite's location, but with different numerical methods.
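
As a small illustration of one of the methods named above, the following sketch implements the classical Gibbs step: given three co-planar geocentric position vectors, it returns the velocity at the middle observation, from which the classical orbital elements can be derived. The gravitational parameter and sample vectors are illustrative; this is not the OrDet application itself.

```python
# Sketch of the Gibbs method: velocity at the middle of three position vectors (not OrDet itself).
import numpy as np

MU_EARTH = 398600.4418  # km^3/s^2

def gibbs(r1, r2, r3, mu=MU_EARTH):
    r1, r2, r3 = map(np.asarray, (r1, r2, r3))
    n1, n2, n3 = np.linalg.norm(r1), np.linalg.norm(r2), np.linalg.norm(r3)
    D = np.cross(r1, r2) + np.cross(r2, r3) + np.cross(r3, r1)
    N = n1 * np.cross(r2, r3) + n2 * np.cross(r3, r1) + n3 * np.cross(r1, r2)
    S = r1 * (n2 - n3) + r2 * (n3 - n1) + r3 * (n1 - n2)
    return np.sqrt(mu / np.dot(N, D)) * (np.cross(D, r2) / n2 + S)  # velocity at r2 (km/s)

# Illustrative geocentric position vectors (km) taken along one orbit:
r1 = [-294.32, 4265.10, 5986.70]
r2 = [-1365.50, 3637.60, 6346.80]
r3 = [-2940.30, 2473.70, 6555.80]
print("v2 (km/s):", gibbs(r1, r2, r3))
```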

Keywords: Orbit determination, STK, MATLAB-GUI, satellite tracking.

241 Allometric Models for Biomass Estimation in Savanna Woodland Area, Niger State, Nigeria

Authors: Abdullahi Jibrin, Aishetu Abdulkadir

Abstract:

The development of allometric models is crucial to accurate forest biomass/carbon stock assessment. The aim of this study was to develop a set of biomass prediction models that enable the determination of total tree aboveground biomass for the savannah woodland area in Niger State, Nigeria. Based on data collected through biometric measurements of 1816 trees and destructive sampling of 36 trees, five species-specific and one site-specific model were developed. The sample size was distributed equally between the five most dominant species in the study site (Vitellaria paradoxa, Irvingia gabonensis, Parkia biglobosa, Anogeissus leiocarpus, Pterocarpus erinaceous). Firstly, the equations were developed for the five individual species. Secondly, these five species were pooled and used to develop an allometric equation for mixed species. Overall, there was a strong positive relationship between total tree biomass and stem diameter at breast height (dbh). Coefficients of determination (R² values) ranging from 0.93 to 0.99 (p < 0.001) were realised for the models, with considerably low standard errors of the estimate (SEE), which confirms that total tree aboveground biomass has a significant relationship with dbh. F-test values for the biomass prediction models were also significant at p < 0.001, which indicates that the biomass prediction models are valid. This study recommends that, for improved biomass estimates in the study site, the site-specific biomass models should preferably be used instead of generic models.
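
A minimal sketch of the kind of allometric fit described above, assuming the usual log-log form ln(B) = a + b·ln(dbh); the diameter and biomass values are illustrative placeholders, not the field measurements from Niger State.

```python
# Sketch: fitting an allometric biomass model B = exp(a) * dbh^b by log-log regression.
import numpy as np

dbh = np.array([8.5, 12.0, 15.3, 20.1, 25.7, 31.4, 38.2])           # illustrative diameters (cm)
biomass = np.array([14.0, 35.0, 66.0, 140.0, 270.0, 460.0, 780.0])  # illustrative AGB (kg)

b, a = np.polyfit(np.log(dbh), np.log(biomass), 1)                  # slope b, intercept a
pred = a + b * np.log(dbh)
ss_res = np.sum((np.log(biomass) - pred) ** 2)
ss_tot = np.sum((np.log(biomass) - np.log(biomass).mean()) ** 2)
print(f"B = {np.exp(a):.3f} * dbh^{b:.3f},  R^2 = {1 - ss_res / ss_tot:.3f}")
```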

Keywords: Allometry, biomass, carbon stock, model, regression equation, woodland, inventory.

240 Automated Monitoring System to Support Investigation of Contributing Factors of Work-Related Disorders and Accidents

Authors: Erika R. Chambriard, Sandro C. Izidoro, Davidson P. Mendes, Douglas E. V. Pires

Abstract:

Work-related illnesses and disorders have been a constant aspect of work. Although their nature has changed over time, from musculoskeletal disorders to illnesses related to psychosocial aspects of work, their impact on the lives of workers remains significant. Despite significant efforts worldwide to protect workers, the disparity between changes in work legislation and actual benefit for workers’ health has been creating a significant economic burden for social security and health systems around the world. In this context, this study aims to propose, test and validate a modular prototype that allows work environmental aspects to be assessed, monitored and better controlled. The main focus is also to provide a historical record of working conditions and the means for workers to obtain comprehensible and useful information regarding their work environment and the legal limits of occupational exposure to different types of environmental variables, as a means to improve prevention of work-related accidents and disorders. We show that the developed prototype provides useful and accurate information regarding work environmental conditions, validating it against standard occupational hygiene equipment. We believe the proposed prototype is a cost-effective and adequate approach to work environment monitoring that could help elucidate the links between work and occupational illnesses, and that different industry sectors, as well as developing countries, could benefit from its capabilities.

Keywords: Arduino prototyping, occupational health and hygiene, work environment, work-related disorders prevention.

238 The Effects of Sodium Chloride in the Formation of Size and Shape of Gold (Au) Nanoparticles by Microwave-Polyol Method for Mercury Adsorption

Authors: Mawarni F. Mohamad, Khairul S.N. Kamarudin, Nik N.F.N.M. Fathilah, Mohamad M. Salleh

Abstract:

Mercury is a naturally occurring element and is present in various concentrations in the environment. Due to its toxic effects, it is desirable to research mercury-sensitive materials to adsorb mercury. This paper describes the preparation of Au nanoparticles for mercury adsorption by using a microwave (MW)-polyol method in the presence of three different sodium chloride (NaCl) concentrations (10, 20 and 30 mM). Mixtures of spherical, triangular, octahedral and decahedral particles and 1-D product were obtained using this rapid method. Sizes and shapes were found to depend strongly on the NaCl concentration. Without NaCl, spherical, triangular plate, octahedral and decahedral nanoparticles and 1-D product were produced. At the lower NaCl concentration (10 mM), spherical, octahedral and decahedral nanoparticles were present, while spherical and decahedral nanoparticles were preferentially formed at a NaCl concentration of 20 mM. Spherical, triangular plate, octahedral and decahedral nanoparticles were obtained at the highest NaCl concentration (30 mM). The amount of mercury adsorbed from a 20 ppm mercury solution is the highest (67.5%) for the NaCl concentration of 30 mM. A high yield of polygonal particles increases the mercury adsorption. In addition, the adsorption of mercury is also due to the sizes of the particles. The particles become smaller with increasing NaCl concentration (size range 5-16 nm) than those synthesized without the addition of NaCl (size range 11-32 nm). It is concluded that the NaCl concentration affects the sizes and shapes of the Au nanoparticles and thus affects the mercury adsorption.

Keywords: Adsorption, Au Nanoparticles, Mercury, Sodium Chloride.

238 The Role of Ideophones: Phonological and Morphological Characteristics in Literature

Authors: Cristina Bahón Arnaiz

Abstract:

Many Asian languages, such as Korean and Japanese, are well known for their wide use of sound symbolic words, or ideophones. This is a very particular characteristic which greatly enriches their lexicons. Ideophones are a class of sound symbolic words that utilize sound symbolism to express aspects, states, emotions, or conditions that can be experienced through the senses, such as shape, color, smell, action or movement. Ideophones have very particular characteristics in terms of sound symbolism and morphology, which distinguish them from other words. The phonological characteristics of ideophones are vowel ablaut or vowel gradation and consonant mutation. In the case of Korean, there are light vowels and dark vowels. Depending on the type of vowel that is used, the meaning will slightly change. Consonant mutation, also known as consonant ablaut, contributes to the level of intensity, emphasis, and volume of an expression. In addition to these phonological characteristics, there is one main morphological singularity, which is reduplication, and it carries the meaning of continuity, repetition, intensity, emphasis, and plurality. All these characteristics play an important role in both linguistics and literature as they enhance the meaning of what is being expressed with incredible semantic detail, expressiveness, and rhythm. The following study will analyze the ideophones used in a single paragraph of a Korean novel, which add incredible yet subtle detail to the meaning of the words and advance the expressiveness and rhythm of the text. The results from analyzing one paragraph from a novel, after presenting the phonological and morphological characteristics of Korean ideophones, will evidence the important role that ideophones play in literature.

Keywords: Ideophones, mimetic words, phonomimes, phenomimes, psychomimes, sound symbolism.

237 Bridge Health Monitoring: A Review

Authors: Mohammad Bakhshandeh

Abstract:

Structural Health Monitoring (SHM) is a crucial and necessary practice that plays a vital role in ensuring the safety and integrity of critical structures, and in particular, bridges. The continuous monitoring of bridges for signs of damage or degradation through Bridge Health Monitoring (BHM) enables early detection of potential problems, allowing for prompt corrective action to be taken before significant damage occurs. Although all monitoring techniques aim to provide accurate and decisive information regarding the remaining useful life, safety, integrity, and serviceability of bridges, understanding the development and propagation of damage is vital for maintaining uninterrupted bridge operation. Over the years, extensive research has been conducted on BHM methods, and experts in the field have increasingly adopted new methodologies. In this article, we provide a comprehensive exploration of the various BHM approaches, including sensor-based, non-destructive testing (NDT), model-based, and artificial intelligence (AI)-based methods. We also discuss the challenges associated with BHM, including sensor placement and data acquisition, data analysis and interpretation, cost and complexity, and environmental effects, through an extensive review of relevant literature and research studies. Additionally, we examine potential solutions to these challenges and propose future research ideas to address critical gaps in BHM.

Keywords: Structural health monitoring, bridge health monitoring, sensor-based methods, machine-learning algorithms, model-based techniques, sensor placement, data acquisition, data analysis.

236 Distributed Automation System Based Remote Monitoring of Power Quality Disturbance on LV Network

Authors: Emmanuel D. Buedi, K. O. Boateng, Griffith S. Klogo

Abstract:

Electrical distribution networks are prone to power quality disturbances originating from the complexity of the distribution network, the mode of distribution (overhead or underground) and the types of loads used by customers. Data on the types of disturbances present and their frequency of occurrence are needed for economic evaluation and hence for finding a solution to the problem. Utility companies have resorted to using secondary power quality devices such as smart meters to help gather the required data. Even though this approach is easier to adopt, data gathered from these devices may not serve the required purpose, since the installation of these devices in the electrical network usually does not conform to available power quality monitor (PQM) placement methods. This paper presents a design of a PQM that is capable of integrating into an existing distributed automation system (DAS) infrastructure to take advantage of available placement methodologies. The monitoring component of the design is implemented and installed to monitor an existing LV network. Data from the monitor are analyzed and presented. A portion of the LV network of the Electricity Company of Ghana is modeled in MATLAB-Simulink and analyzed under various earth fault conditions. The results presented show the ability of the PQM to detect and analyze PQ disturbances such as voltage sag and overvoltage. By adopting a placement methodology and installing these nodes, utilities are assured of accurate and reliable information with respect to the quality of power delivered to consumers.
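
A simplified sketch of how a monitoring node can flag a voltage sag or overvoltage from sampled waveform data, using a sliding one-cycle RMS against the commonly used 0.9 pu and 1.1 pu thresholds; the sample rate, nominal voltage and synthetic waveform are assumptions, and this is not the authors' firmware.

```python
# Sketch: sliding one-cycle RMS to flag voltage sag (<0.9 pu) and overvoltage (>1.1 pu).
import numpy as np

f_nom, fs, v_nom = 50.0, 3200.0, 230.0           # nominal frequency, sample rate, nominal RMS (assumed)
n_cycle = int(fs / f_nom)                        # samples per cycle

t = np.arange(0, 0.4, 1 / fs)
v = np.sqrt(2) * v_nom * np.sin(2 * np.pi * f_nom * t)
v[(t > 0.15) & (t < 0.25)] *= 0.7                # synthetic 70% sag lasting 100 ms

v2 = np.convolve(v ** 2, np.ones(n_cycle) / n_cycle, mode="valid")  # one-cycle mean square
rms_pu = np.sqrt(v2) / v_nom

sag, over = rms_pu < 0.9, rms_pu > 1.1
print(f"sag: {sag.any()}, duration ~ {sag.sum() / fs * 1000:.0f} ms, overvoltage: {over.any()}")
```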

Keywords: Power quality, remote monitoring, distributed automation system, economic evaluation, LV network.

235 Power Transformers Insulation Material Investigations: Partial Discharge

Authors: Jalal M. Abdallah

Abstract:

There is a great problem in testing and investigating the reliability of different types of transformer insulation materials. It lies in how to create and simulate the real conditions of a working transformer and test its insulation materials for partial discharge (PD) as in the working mode. Many tests may give misleading results, as the physical behavior of the insulation material under test differs from that in its working condition. In this work, the real working conditions were simulated, and a large number of specimens have been tested. The first stage of the investigation begins with choosing samples of different types of insulation materials (papers, pressboards, etc.). In the second stage, the samples were dried in ovens at 105 °C and 0.01 bar for 48 hours, and then impregnated with dried and degassed oil (water content less than 6 ppm) at 105 °C and 0.01 bar for 48 hours, after which the specimens were cooled at room pressure and temperature for 24 hours. The third stage is investigating PD for the samples using an ICM PD measuring device. After that, a continuous test on oil-impregnated insulation materials (paper, pressboards) was developed, and the phase-resolved partial discharge pattern of the PD signals was measured. The importance of this work lies in providing the industrial sector with trusted, highly accurate measurement results based on realistically simulated working conditions. All the PD patterns (results) are associated with discharges produced in well-controlled laboratory conditions, and they are compared with previous results and results from other laboratories. In addition, the influence of different temperature conditions on the partial discharge activities was studied.

Keywords: Transformers, insulation materials, voids, partial discharge (PD).

234 Development of Precise Ephemeris Generation Module for Thaichote Satellite Operations

Authors: Manop Aorpimai, Ponthep Navakitkanok

Abstract:

In this paper, the development of the ephemeris generation module used for the Thaichote satellite operations is presented. It is a vital part of the flight dynamics system, which comprises the orbit determination, orbit propagation, event prediction and station-keeping manoeuvre modules. In the generation of the spacecraft ephemeris data, the estimated orbital state vector from the orbit determination module is used as an initial condition. The equations of motion are then integrated forward in time to predict the satellite states. The higher geopotential harmonics, as well as other disturbing forces, are taken into account to resemble the environment in low-earth orbit. Using a highly accurate numerical integrator based on the Bulirsch-Stoer algorithm, the ephemeris data can be generated for long-term predictions with a relatively small computational burden and short calculation time. Events occurring during the prediction course that are related to the mission operations, such as the satellite’s rise/set viewed from the ground station, Earth and Moon eclipses, the drift in ground track, as well as the drift in the local solar time of the orbital plane, are all detected and reported. When combined with other modules to form a flight dynamics system, this application is intended to be applied to the Thaichote satellite and Thailand’s successive Earth-observation missions.
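
A compact sketch of the propagation step described above: the state vector is integrated forward with a high-order integrator under two-body gravity plus the dominant J2 geopotential term. SciPy offers no Bulirsch-Stoer integrator, so DOP853 is used here as a stand-in, and the initial state is illustrative.

```python
# Sketch: propagating a state vector with a high-order integrator (DOP853 as a stand-in for
# the Bulirsch-Stoer scheme used in the paper). Two-body gravity plus J2 only.
import numpy as np
from scipy.integrate import solve_ivp

MU, RE, J2 = 398600.4418, 6378.137, 1.08263e-3   # km^3/s^2, km, dimensionless

def accel(t, y):
    r = y[:3]
    rn = np.linalg.norm(r)
    a = -MU * r / rn**3
    # J2 zonal-harmonic perturbation (dominant geopotential term for LEO)
    z2 = (r[2] / rn) ** 2
    k = 1.5 * J2 * MU * RE**2 / rn**5
    a = a + k * r * np.array([5 * z2 - 1, 5 * z2 - 1, 5 * z2 - 3])
    return np.concatenate((y[3:], a))

y0 = np.array([-2384.46, 5729.01, 3050.46, -7.36138, -2.98997, 1.64354])  # illustrative km, km/s
sol = solve_ivp(accel, (0.0, 86400.0), y0, method="DOP853", rtol=1e-10, atol=1e-12)
print("position after one day (km):", sol.y[:3, -1])
```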

Keywords: Flight Dynamics System, Orbit Propagation, Satellite Ephemeris, Thailand’s Earth Observation Satellite.

233 Feature Based Unsupervised Intrusion Detection

Authors: Deeman Yousif Mahmood, Mohammed Abdullah Hussein

Abstract:

The goal of a network-based intrusion detection system is to classify the activities of network traffic into two major categories: normal and attack (intrusive) activities. Nowadays, data mining and machine learning play an important role in many sciences, including intrusion detection systems (IDS) using both supervised and unsupervised techniques. One of the essential steps of data mining is feature selection, which helps in improving the efficiency, performance and prediction rate of the proposed approach. This paper applies the unsupervised K-means clustering algorithm with information gain (IG) for feature selection and reduction to build a network intrusion detection system. For our experimental analysis, we have used the new NSL-KDD dataset, which is a modified version of the KDDCup 1999 intrusion detection benchmark dataset. With a split of 60.0% for the training set and the remainder for the testing set, a two-class classification has been implemented (Normal, Attack). The Weka framework, a Java-based open-source collection of machine learning algorithms for data mining tasks, has been used in the testing process. The experimental results show that the proposed approach is very accurate, with a low false positive rate and a high true positive rate, and it takes less learning time in comparison with using the full feature set of the dataset with the same algorithm.
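
The same idea, ranking features by an information-gain-style score and then clustering into two groups, can be sketched outside Weka as follows; scikit-learn's mutual information estimator and K-means stand in for the Weka components, and synthetic data stands in for NSL-KDD.

```python
# Sketch: information-gain-style feature ranking followed by 2-cluster K-means
# (scikit-learn stands in for the Weka workflow; synthetic data stands in for NSL-KDD).
import numpy as np
from sklearn.cluster import KMeans
from sklearn.datasets import make_classification
from sklearn.feature_selection import mutual_info_classif
from sklearn.metrics import confusion_matrix

X, y = make_classification(n_samples=2000, n_features=20, n_informative=6, random_state=0)

ig = mutual_info_classif(X, y, random_state=0)      # mutual information ~ information gain
top = np.argsort(ig)[::-1][:8]                      # keep the 8 highest-ranked features
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X[:, top])

# Map each cluster to the majority class (Normal / Attack) for a rough agreement check.
cm = confusion_matrix(y, labels)
acc = max(np.trace(cm), cm[0, 1] + cm[1, 0]) / len(y)
print(f"cluster-to-class agreement: {acc:.3f}")
```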

Keywords: Information Gain (IG), Intrusion Detection System (IDS), K-means Clustering, Weka.

232 Urban Growth Analysis Using Multi-Temporal Satellite Images, Non-stationary Decomposition Methods and Stochastic Modeling

Authors: Ali Ben Abbes, ImedRiadh Farah, Vincent Barra

Abstract:

Remotely sensed data are a significant source for monitoring and updating land use/cover databases. Nowadays, change detection in urban areas has been a subject of intensive research. Timely and accurate data on spatio-temporal changes of urban areas are therefore required. The data extracted from multi-temporal satellite images are usually non-stationary; in fact, the changes evolve in time and space. This paper proposes a methodology for change detection in urban areas by combining a non-stationary decomposition method and stochastic modeling. We consider as input of our methodology a sequence of satellite images I1, I2, … In at different periods (t = 1, 2, ..., n). Firstly, preprocessing of the multi-temporal satellite images (e.g., radiometric, atmospheric and geometric corrections) is applied. The systematic study of global urban expansion in our methodology can be approached in two ways: the first considers the urban area as a single object as opposed to non-urban areas (e.g., vegetation, bare soil and water), the objective being to extract the urban mask; the second aims to obtain more detailed knowledge of the urban area, distinguishing different types of tissue within it. In order to validate our approach, we used a database of Tres Cantos-Madrid in Spain, derived from Landsat over the period from January 2004 to July 2013 by collecting two frames per year at a spatial resolution of 25 meters. The obtained results show the effectiveness of our method.

Keywords: Multi-temporal satellite image, urban growth, Non-stationarity, stochastic modeling.

231 Theoretical Modal Analysis of Freely and Simply Supported RC Slabs

Authors: M. S. Ahmed, F. A. Mohammad

Abstract:

This paper focuses on the dynamic behavior of reinforced concrete (RC) slabs. Therefore, theoretical modal analysis was performed using two different types of boundary conditions. Modal analysis is one of the most important dynamic analyses; the analysis is a modal case when there is no external force on the structure. By using this method, the effects of freely and simply supported boundary conditions on the frequencies and mode shapes of RC square slabs are studied in this paper. ANSYS software was employed to derive the finite element model used to determine the natural frequencies and mode shapes of the slabs. Then, the results obtained through the numerical analysis (finite element analysis) are compared with the exact solution. The main goal of the research study is to predict how the boundary conditions change the behavior of the slab structures prior to performing experimental modal analysis. Based on the results, it is concluded that the simply supported boundary condition clearly increases the natural frequencies and changes the mode shapes compared with the freely supported boundary condition of the slabs. This means that such support conditions have a direct influence on the dynamic behavior of the slabs. Thus, it is suggested to use the free-free boundary condition in experimental modal analysis to precisely reflect the properties of the structure; by using free-free boundary conditions, the influence of poorly defined supports is eliminated.
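
For reference, the exact solution against which the finite element results can be compared is the Navier result for a thin, simply supported rectangular plate, f_mn = (π/2)[(m/a)² + (n/b)²]√(D/(ρh)) with D = Eh³/(12(1−ν²)). The sketch below evaluates it for illustrative RC slab properties, not the slabs analysed in the paper.

```python
# Sketch: Navier (exact) natural frequencies of a thin, simply supported rectangular plate:
# f_mn = (pi/2) * ((m/a)^2 + (n/b)^2) * sqrt(D / (rho*h)),  D = E*h^3 / (12*(1 - nu^2)).
# Geometry and material values below are illustrative, not the slabs tested in the paper.
import math

def plate_frequency(m, n, a, b, h, E, nu, rho):
    D = E * h**3 / (12.0 * (1.0 - nu**2))          # flexural rigidity (N*m)
    return (math.pi / 2.0) * ((m / a) ** 2 + (n / b) ** 2) * math.sqrt(D / (rho * h))

a = b = 3.0            # plan dimensions (m)
h = 0.15               # thickness (m)
E = 30e9               # concrete Young's modulus (Pa)
nu, rho = 0.2, 2500.0  # Poisson's ratio, density (kg/m^3)

for m, n in [(1, 1), (1, 2), (2, 2)]:
    print(f"mode ({m},{n}): {plate_frequency(m, n, a, b, h, E, nu, rho):.1f} Hz")
```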

Keywords: Natural frequencies, Mode shapes, Modal analysis, ANSYS software, RC slabs.

230 Estimation of Attenuation and Phase Delay in Driving Voltage Waveform of a Digital-Noiseless, Ultra-High-Speed Image Sensor

Authors: V. T. S. Dao, T. G. Etoh, C. Vo Le, H. D. Nguyen, K. Takehara, T. Akino, K. Nishi

Abstract:

Since 2004, we have been developing an in-situ storage image sensor (ISIS) that captures more than 100 consecutive images at a frame rate of 10 Mfps with ultra-high sensitivity, as well as the video camera for use with this ISIS. Currently, basic research is continuing in an attempt to increase the frame rate up to 100 Mfps and above. In order to suppress electro-magnetic noise at such high frequencies, a digital-noiseless imaging transfer scheme has been developed utilizing solely sinusoidal driving voltages. This paper presents highly efficient yet accurate expressions to estimate the attenuation as well as the phase delay of driving voltages through the RC networks of an ultra-high-speed image sensor. The Elmore metric for a fundamental RC chain is employed as the first-order approximation. By applying dimensional analysis to SPICE data, we found a simple expression that significantly improves the accuracy of the approximation. Similarly, another simple closed-form model to estimate the phase delay through fundamental RC networks is also obtained. The estimation error of both expressions is much smaller than in previous works, less than 2% for most of the cases. The framework of this analysis can be extended to address similar issues of other VLSI structures.
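
The first-order approximation mentioned above, the Elmore delay of an RC chain, is simply the sum over all nodes of each node capacitance times the resistance between the driver and that node. A minimal sketch follows, with illustrative segment values rather than the sensor's actual network.

```python
# Sketch: first-order Elmore delay of an N-stage RC chain driven at one end.
# tau = sum over nodes k of C_k * (total resistance on the path from the driver to node k).
import numpy as np

R = np.array([120.0, 120.0, 120.0, 120.0])   # ohms per segment (illustrative)
C = np.array([2e-12, 2e-12, 2e-12, 2e-12])   # farads per node (illustrative)

tau = np.sum(C * np.cumsum(R))               # Elmore delay to the far-end node (s)
print(f"Elmore delay to the last node ~ {tau * 1e12:.1f} ps")
```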

Keywords: Dimensional Analysis, ISIS, Digital-noiseless, RC network, Attenuation, Phase Delay, Elmore model

229 Impact of Changes of the Conceptual Framework for Financial Reporting on the Indicators of the Financial Statement

Authors: Nadezhda Kvatashidze

Abstract:

The International Accounting Standards Board updated the conceptual framework for financial reporting. The main reason behind it is to resolve accounting tasks that arise from market development and business transactions with new economic content. Investors also call for higher transparency of information and responsibility for results in order to make more accurate risk assessments and forecasts. All these make it necessary to further develop the conceptual framework for financial reporting so that users get useful information. Market development and certain shortcomings of the conceptual framework revealed in practice require its reconsideration and the finding of new solutions. Some issues and concepts, such as disclosure and supply of information, its qualitative characteristics, assessment, and measurement uncertainty, had to be supplemented and perfected. The criteria for recognition of certain elements (assets and liabilities) of reporting had to be updated too, and all this is set out in the updated edition of the conceptual framework for financial reporting, a comprehensive collection of concepts underlying preparation of the financial statement. The main objective of the conceptual framework revision is to improve financial reporting and develop a clear package of concepts. This will support the International Accounting Standards Board (IASB) in setting a common approach for similar transactions on the basis of mutually accepted concepts. As a result, companies will be able to develop coherent accounting policies for those transactions or events that arise from particular deals to which no standard applies or for which a standard allows a choice of accounting policy.

Keywords: Conceptual framework, measurement basis, measurement uncertainty, neutrality, prudence, stewardship.

228 A Cost Effective Approach to Develop Mid-size Enterprise Software Adopted the Waterfall Model

Authors: M. N. Hasnine, M. K. H. Chayon, M. M. Rahman

Abstract:

Organizational tendencies towards computer-based information processing have been observed noticeably in third-world countries. Many enterprises are taking major initiatives towards a computerized working environment because of the massive benefits of computer-based information processing. However, designing and developing information resource management software for small and mid-size enterprises under budget costs and strict deadlines is always challenging for software engineers. Therefore, we introduce an approach to designing mid-size enterprise software by using the Waterfall model, one of the SDLC (Software Development Life Cycle) models, in a cost-effective way. To fulfill the research objectives, in this study we developed mid-sized enterprise software named “BSK Management System” that assists enterprise software clients with information resource management and performs complex organizational tasks. The Waterfall model phases have been applied to ensure that all functions, user requirements, strategic goals, and objectives are met. In addition, Rich Picture, Structured English, and a Data Dictionary have been implemented and investigated properly in an engineering manner. Furthermore, an assessment survey with 20 participants has been conducted to investigate the usability and performance of the proposed software. The survey results indicated that our system featured simple interfaces, easy operation and maintenance, quick processing, and reliable and accurate transactions.

Keywords: End-user Application Development, Enterprise Software Design, Information Resource Management, Usability.

227 A Supervised Learning Data Mining Approach for Object Recognition and Classification in High Resolution Satellite Data

Authors: Mais Nijim, Rama Devi Chennuboyina, Waseem Al Aqqad

Abstract:

Advances in the spatial and spectral resolution of satellite images have led to tremendous growth in large image databases. The data we acquire through satellites, radars, and sensors consist of important geographical information that can be used for remote sensing applications such as region planning and disaster management. Spatial data classification and object recognition are important tasks for many applications. However, classifying objects and identifying them manually from images is a difficult task. Object recognition is often considered a classification problem, and this task can be performed using machine-learning techniques. Among the many machine-learning algorithms, the classification is done using supervised classifiers such as Support Vector Machines (SVM), as the area of interest is known. We propose a classification method which considers neighboring pixels in a region for feature extraction and evaluates classifications precisely according to neighboring classes for semantic interpretation of the region of interest (ROI). A dataset has been created for training and testing purposes; we generated the attributes by considering pixel intensity values and mean values of reflectance. We demonstrate the benefits of using knowledge discovery and data-mining techniques, which can be applied to image data for accurate information extraction and classification from high spatial resolution remote sensing imagery.
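
A minimal sketch of the neighbourhood-based feature idea: each pixel is described by its own value plus the mean of a small window around it before training an SVM. The synthetic single-band image, window size and class layout are assumptions standing in for the satellite data.

```python
# Sketch: per-pixel features = own band value + mean of a 3x3 neighbourhood, then SVM.
# A synthetic single-band image with two "land cover" classes stands in for satellite data.
import numpy as np
from scipy.ndimage import uniform_filter
from sklearn.svm import SVC

rng = np.random.default_rng(0)
labels = np.zeros((64, 64), dtype=int)
labels[20:45, 10:50] = 1                               # a rectangular "water body"
image = np.where(labels == 1, 0.2, 0.7) + rng.normal(0, 0.05, labels.shape)

neigh_mean = uniform_filter(image, size=3)             # mean reflectance of the 3x3 window
X = np.column_stack([image.ravel(), neigh_mean.ravel()])
y = labels.ravel()

train = rng.random(y.size) < 0.2                       # 20% of pixels for training
clf = SVC(kernel="rbf").fit(X[train], y[train])
print(f"pixel accuracy: {(clf.predict(X) == y).mean():.3f}")
```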

Keywords: Remote sensing, object recognition, classification, data mining, waterbody identification, feature extraction.

226 Experimental and Numerical Study of the Effect of Lateral Wind on the Feeder Airship

Authors: A. Suñol, D. Vucinic, S.Vanlanduit, T. Markova, A. Aksenov, I. Moskalyov

Abstract:

The Feeder is one of the airships of the Multibody Advanced Airship for Transport (MAAT) system, under development within the EU FP7 project. MAAT is based on a modular concept composed of two different parts that have the possibility to join: respectively, they are the so-called Cruiser and Feeder, designed on the lighter-than-air principle. The Feeder, also named ATEN (Airship Transport Elevator Network), is the smaller one, which joins the bigger one, the Cruiser, also named PTAH (Photovoltaic modular Transport Airship for High altitude); the joining is envisaged to happen at 15 km altitude. During the MAAT design phase, the aerodynamic studies of both airships and their interactions are analyzed. The objective of these studies is to understand the aerodynamic behavior of all the preselected configurations, as an important element in the overall MAAT system design. Most of these configurations are only simulated by CFD, while the most feasible one is experimentally analyzed in order to validate and trust the CFD predictions. This paper presents the numerical and experimental investigation of the Feeder “conical-like” shape configuration. The experiments are focused on the aerodynamic force coefficients and the pressure distribution over the Feeder outer surface, while the numerical simulations also cover the analysis of the velocity and pressure distribution. Finally, the wind tunnel experiment is compared with its CFD model in order to validate such specific simulations with the respective experiments and to better understand the difference between the wind tunnel and in-flight circumstances.

Keywords: MAAT project Feeder, CFD simulations, wind tunnel experiments, lateral wind influence.

225 Verification of Sr-90 Determination in Water and Spruce Needles Samples Using IAEA-TEL-2016-04 ALMERA Proficiency Test Samples

Authors: S. Visetpotjanakit, N. Nakkaew

Abstract:

Determination of 90Sr in environmental samples has been widely developed with several radioanalytical methods and radiation measurement techniques, since 90Sr is one of the most hazardous radionuclides produced in nuclear reactors. A liquid extraction technique using di-(2-ethylhexyl) phosphoric acid (HDEHP) to separate and purify 90Y, and Cherenkov counting using a liquid scintillation counter to determine 90Y in secular equilibrium with 90Sr, was developed and performed at our institute, the Office of Atoms for Peace. The approach is inexpensive, non-laborious, and fast for analysing 90Sr in environmental samples. To validate our analytical performance against the accuracy and precision criteria, determination of 90Sr using the IAEA-TEL-2016-04 ALMERA proficiency test samples was performed for statistical evaluation. The experiment used two spiked tap water samples and one naturally contaminated spruce needles sample from Austria, collected shortly after the Chernobyl accident. Results showed that all three analyses successfully passed both the accuracy and precision criteria, obtaining “Accepted” statuses. The two water samples gave measured results of 15.54 Bq/kg and 19.76 Bq/kg, with relative biases of 5.68% and -3.63% against Maximum Acceptable Relative Bias (MARB) values of 15% and 20%, respectively. The spruce needles sample gave a measured result of 21.04 Bq/kg, with a relative bias of 23.78% against a MARB of 30%. These results confirm our analytical performance for 90Sr determination in water and spruce needles samples using the developed method.
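
The Accepted/Not accepted decision follows from the relative bias, (measured − target)/target × 100%, compared against the MARB. In the sketch below the target values are back-calculated from the reported biases and are therefore only illustrative.

```python
# Sketch: ALMERA-style accuracy check, relative bias against the Maximum Acceptable
# Relative Bias (MARB). Target values are back-calculated from the reported biases (illustrative).
samples = [
    # (name, measured Bq/kg, target Bq/kg, MARB %)
    ("water 1", 15.54, 14.70, 15.0),
    ("water 2", 19.76, 20.50, 20.0),
    ("spruce needles", 21.04, 17.00, 30.0),
]

for name, measured, target, marb in samples:
    bias = (measured - target) / target * 100.0
    status = "Accepted" if abs(bias) <= marb else "Not accepted"
    print(f"{name}: relative bias {bias:+.2f}% (MARB {marb:.0f}%) -> {status}")
```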

Keywords: ALMERA proficiency test, Cherenkov counting, determination of 90Sr, environmental samples.

224 Advanced Model for Calculation of the Neutral Axis Shifting and the Wall Thickness Distribution in Rotary Draw Bending Processes

Authors: B. Engel, H. Hassan

Abstract:

Rotary draw bending is a method used in tube forming. In the tube bending process, the neutral axis moves towards the inner arc and the wall thickness distribution changes over the tube’s cross section. Thinning takes place in the outer arc of the tube (extrados) due to the stretching of the material, whereas thickening occurs in the inner arc of the tube (intrados) due to the compression of the material. The calculations of the wall thickness distribution, neutral axis shifting, and strain distribution have so far not been accurate enough. The previous model (the geometrical model) describes the neutral axis shifting and wall thickness distribution. The geometry of the tube, the bending radius and the bending angle are considered in the geometrical model, while the influence of the material properties in tube forming is ignored. The advanced model is a modification of the previous model using material properties that depend on a correction factor. The correction factor is a purely empirically determined factor. The advanced model was compared with finite element simulation (FE simulation) using different bending factors (Bf = bending radius / tube diameter), wall thickness factors (Wf = tube diameter / wall thickness), and material properties (strain hardening exponent). The finite element model of rotary draw bending has been built in the PAM-TUBE program (version: 2012). Results from the advanced model resemble the FE simulation and the experimental test.

Keywords: Rotary draw bending, material properties, neutral axis shifting, wall thickness distribution.

223 Modeling of PZ in Haunch Connections Systems

Authors: Peyman Shadman Heidari, Roohollah Ahmady Jazany, Mahmood Reza Mehran, Pouya Shadman Heidari, Mohammad khorasani

Abstract:

Modeling of panel zone (PZ) seismic behavior, because of its role in the overall ductility and lateral stiffness of steel moment frames, has been considered a challenge for years. There are some studies regarding the effects of different doubler plate thicknesses and geometric properties of the PZ on its seismic behavior. However, there is not much investigation on the effects of the number of provided continuity plates in the presence of one triangular haunch, two triangular haunches or a rectangular haunch (T-shaped haunch) for exterior columns. In this research, detailed finite element models of 12 tested connections of the SAC joint venture were first created and analyzed; then the cyclic behavior backbone curves obtained from these models, along with other FE models for similar tests, were used for neural network training. The seismic behavior of these data is then categorized according to the continuity plate arrangements and the differences in haunch type. PZs with one-sided haunches have little plastic rotation. As the number of continuity plates increases due to the presence of two triangular haunches (four continuity plates), there will be no plastic rotation; in other words, the PZ behaves in its elastic range. In the case of a rectangular haunch, the PZ shows more plastic rotation in comparison with a one-sided triangular haunch and especially double-sided triangular haunches. Moreover, the models presented in this study for one-sided and double-sided triangular haunches and rectangular haunches seem to give a proper estimation of PZ seismic behavior.

Keywords: Continuity plate, FE models, Neural network, Panel zone, Plastic rotation, Rectangular haunch, Seismic behavior

222 Performance Comparison of Different Regression Methods for a Polymerization Process with Adaptive Sampling

Authors: Florin Leon, Silvia Curteanu

Abstract:

Developing complete mechanistic models for polymerization reactors is not easy, because complex reactions occur simultaneously, a large number of kinetic parameters are involved, and sometimes the chemical and physical phenomena for mixtures involving polymers are poorly understood. To overcome these difficulties, empirical models based on sampled data can be used instead, namely regression methods typical of the machine learning field. They have the ability to learn the trends of a process without any knowledge about its particular physical and chemical laws. Therefore, they are useful for modeling complex processes, such as the free radical polymerization of methyl methacrylate achieved in a batch bulk process. The goal is to generate accurate predictions of monomer conversion, numerical average molecular weight and gravimetrical average molecular weight. This process is associated with non-linear gel and glass effects. For this purpose, an adaptive sampling technique is presented, which can select more samples around the regions where the values have a higher variation. Several machine learning methods are used for the modeling and their performance is compared: support vector machines, k-nearest neighbor and random forest, as well as an original algorithm, large margin nearest neighbor regression. The suggested method provides very good results compared to the other well-known regression algorithms.
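
A rough sketch of the comparison workflow: sample more densely where the target varies fastest, then score several off-the-shelf regressors on the same data. The synthetic conversion-like curve and model settings are assumptions, and the paper's own large margin nearest neighbor regression algorithm is not reproduced here.

```python
# Sketch: crude adaptive sampling (denser where the target changes fastest) plus a comparison
# of three off-the-shelf regressors. Synthetic S-shaped "conversion vs. time" data only.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsRegressor
from sklearn.svm import SVR

def conversion(t):                      # synthetic monomer-conversion-like curve
    return 1.0 / (1.0 + np.exp(-8.0 * (t - 0.5)))

# Adaptive sampling: put more sample points where |d(conversion)/dt| is large.
grid = np.linspace(0, 1, 400)
weight = np.abs(np.gradient(conversion(grid), grid)) + 1e-3
rng = np.random.default_rng(0)
t = rng.choice(grid, size=80, replace=False, p=weight / weight.sum())
X, y = t.reshape(-1, 1), conversion(t) + rng.normal(0, 0.01, t.size)

for name, model in [("SVR", SVR(C=10.0)),
                    ("kNN", KNeighborsRegressor(n_neighbors=5)),
                    ("Random forest", RandomForestRegressor(n_estimators=200, random_state=0))]:
    score = cross_val_score(model, X, y, cv=5, scoring="r2").mean()
    print(f"{name}: mean CV R^2 = {score:.3f}")
```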

Keywords: Adaptive sampling, batch bulk methyl methacrylate polymerization, large margin nearest neighbor regression, machine learning.

221 A Development of the Multiple Intelligences Measurement of Elementary Students

Authors: Chaiwat Waree

Abstract:

This research aims at the development of the Multiple Intelligences Measurement of Elementary Students. The structural accuracy test and norm establishment are based on Gardner's Multiple Intelligences Theory. This theory consists of eight aspects, namely linguistics, logic and mathematics, visual-spatial relations, body and movement, music, human relations, self-realization/self-understanding and nature. The sample used in this research consists of elementary school students (aged between 5-11 years). The size of the sample group was determined by the Yamane table; the group has 2,504 students. Multistage sampling was used. Basic statistical analysis and construct validity testing were done using confirmatory factor analysis. The research can be summarized as follows. 1) The Multiple Intelligences Measurement, consisting of 120 items, is content-accurate. Internal consistency reliability of the whole Multiple Intelligences Measurement, according to the Kuder-Richardson method, equals .91. The difficulty of the measurement test is between .39 and .83. Discrimination is between .21 and .85. 2) The Multiple Intelligences Measurement has construct validity in a good range, that is, 8 components and all 120 test items have statistical significance at the .01 level. The chi-square value equals 4357.7 (p = .00) at 244 degrees of freedom, the Goodness of Fit Index equals 1.00, the Adjusted Goodness of Fit Index equals .92, the Comparative Fit Index (CFI) equals .68, the Root Mean Squared Residual (RMR) equals 0.064 and the Root Mean Square Error of Approximation equals 0.82. 3) The norms of the Multiple Intelligences Measurement are categorized into 3 levels: those with high intelligence have percentiles above 78; those with moderate/medium intelligence have percentiles between 24 and 77.9; those with low intelligence have percentiles of 23.9 and below.

Keywords: Multiple Intelligences, Measurement, Elementary Students.

220 The Evaluation of Gravity Anomalies Based on Global Models by Land Gravity Data

Authors: M. Yilmaz, I. Yilmaz, M. Uysal

Abstract:

The Earth system generates different phenomena that are observable at the surface of the Earth, such as mass deformations and displacements leading to plate tectonics, earthquakes, and volcanism. The dynamic processes associated with the interior, surface, and atmosphere of the Earth affect the three pillars of geodesy: the shape of the Earth, its gravity field, and its rotation. Geodesy establishes a characteristic structure in order to define, monitor, and predict the whole Earth system. The traditional and new instruments, observables, and techniques in geodesy are related to the gravity field. Therefore, geodesy monitors the gravity field and its temporal variability in order to transform the geodetic observations made on the physical surface of the Earth into the geometrical surface in which positions are mathematically defined. In this paper, the main components of gravity field modeling, the free-air and Bouguer gravity anomalies, are calculated via recent global models (EGM2008, EIGEN6C4, and GECO) over a selected study area. The model-based gravity anomalies are compared with the corresponding terrestrial gravity data in terms of standard deviation (SD) and root mean square error (RMSE) to determine the best-fitting global model for the study area at a regional scale in Turkey. The smallest SD (13.63 mGal) and RMSE (15.71 mGal) were obtained by EGM2008 for the free-air gravity anomaly residuals. For the Bouguer gravity anomaly residuals, EIGEN6C4 provides the smallest SD (8.05 mGal) and RMSE (8.12 mGal). The results indicated that EIGEN6C4 can be a useful tool for modeling the gravity field of the Earth over the study area.
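
The two comparison statistics used above are computed from the residuals Δg = terrestrial − model: the SD measures the spread about the mean residual, the RMSE the total misfit including any bias. The values in the sketch are placeholders, not the Turkish data set.

```python
# Sketch: SD and RMSE of gravity-anomaly residuals (terrestrial minus model), in mGal.
import numpy as np

terrestrial = np.array([21.4, -5.2, 13.8, 40.1, -12.6, 7.9])   # placeholder anomalies (mGal)
model = np.array([19.0, -2.8, 10.5, 43.7, -9.9, 5.1])          # placeholder model values (mGal)

residuals = terrestrial - model
sd = residuals.std(ddof=1)                       # spread about the mean residual
rmse = np.sqrt(np.mean(residuals ** 2))          # total misfit including any bias
print(f"SD = {sd:.2f} mGal, RMSE = {rmse:.2f} mGal")
```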

Keywords: Free-air gravity anomaly, Bouguer gravity anomaly, global model, land gravity.

219 Curing Time Effect on Behavior of Cement Treated Marine Clay

Authors: H. W. Xiao, F. H. Lee

Abstract:

Cement stabilization has been widely used for improving the strength and stiffness of soft clayey soils. Cement treated soil specimens used to investigate the stress-strain behaviour in laboratory studies are usually cured for 7 days. This paper examines the effects of curing time on the strength and stress-strain behaviour of cement treated marine clay under triaxial loading conditions. Laboratory-prepared cement treated Singapore marine clay with different mix proportions S-C-W (soil solid-cement solid-water) and curing times (7 days to 180 days) was investigated by conducting unconfined compressive strength tests and triaxial tests. The results show that the curing time has a significant effect on the unconfined compressive strength q_u, the isotropic compression behaviour and the stress-strain behaviour. Although the primary yield loci of the cement treated soil specimens with the same mix proportion expand with curing time, they are very narrowly banded and have nearly the same shape after being normalized by the isotropic compression primary yield stress p'_py. The isotropic compression primary yield stress p'_py was shown to be linearly related to the unconfined compressive strength q_u for specimens with different curing times and mix proportions. The effect of curing time on the hardening behaviour diminishes at consolidation stresses higher than the isotropic compression primary yield stress, but the rate at which it diminishes depends on the cement content.

Keywords: Cement treated soil, curing time effect, hardening behaviour, isotropic compression primary yield stress, unconfined compressive strength.

218 Numerical Modeling of Determination of in situ Rock Mass Deformation Modulus Using the Plate Load Test

Authors: A. Khodabakhshi, A. Mortazavi

Abstract:

Accurate determination of the rock mass deformation modulus, as an important design parameter, is one of the most controversial issues in most engineering projects. A 3D numerical model of the standard plate load test (PLT) using the FLAC3D code was carried out to investigate the mechanism governing the test process. Five objectives were the focus of this study. The first goal was to employ 3D modeling in the interpretation of the PLT conducted at the Bazoft dam site, Iran. The second objective was to investigate the effect of the displacement measuring depth below the loading plates on the calculated moduli. The magnitude of the rock mass deformation modulus calculated from the PLT depends on the anchor depth, and in practice this may be a cause of error in the selection of a realistic deformation modulus for the rock mass. The third goal of the study was to investigate the effect of the testing plate diameter on the calculated modulus. Moreover, a comparison of the modulus calculated from the ISRM formula, from numerical modeling, and from the actual PLT carried out at the right abutment of the Bazoft dam site was another objective of the study. Finally, the effect of plastic strains on the calculated moduli in each of the loading-unloading cycles for three loading plates was investigated. The geometry, material properties, and boundary conditions of the constructed 3D model were selected based on the in-situ conditions of the PLT at the Bazoft dam site. A good agreement was achieved between the numerical model results and the field test results.

Keywords: Deformation modulus, numerical model, plate loading test, rock mass.

217 Time Effective Structural Frequency Response Testing with Oblique Impact

Authors: Khoo Shin Yee, Lian Yee Cheng, Ong Zhi Chao, Zubaidah Ismail, Siamak Noroozi

Abstract:

Structural frequency response testing is accurate in identifying the dynamic characteristics of a machinery structure. From a practical perspective, conventional structural frequency response testing, such as experimental modal analysis with the impulse technique (also known as “impulse testing”), has limitations, especially its long acquisition time. The high acquisition time is mainly due to the redundant procedure whereby the engineer has to repeatedly perform the test in three directions, namely along the axial, horizontal and vertical axes, in order to comprehensively define the dynamic behavior of a 3D structure. This is unfavorable for numerous industries where the downtime cost is high. This study proposes to reduce the testing time by using oblique impact. Theoretically, a single oblique impact can induce significant vibration responses and vibration modes in all three directions; hence, the acquisition time with the implementation of the oblique impulse technique can be reduced by a factor of three (i.e., for a 3D dynamic system). This study initiates an experimental investigation of impulse testing with oblique excitation. A motor-driven test rig has been used for the testing purpose. Its dynamic characteristics have been identified using impulse testing with the conventional normal impact and the proposed oblique impact, respectively. The results show that the proposed oblique impulse testing is able to obtain all the desired natural frequencies in all three directions, thus providing a feasible solution for a fast and time-effective way of conducting impulse testing.
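
The reasoning behind the factor-of-three reduction is that an oblique impact force has non-zero projections on all three axes, so one hit excites all three directions at once. A minimal sketch of the decomposition, with an assumed impact direction:

```python
# Sketch: decomposition of one oblique impact force onto the axial, horizontal and vertical
# axes. With all three components non-zero, a single hit excites modes in all three directions.
import numpy as np

F = 100.0                                    # illustrative impact magnitude (N)
direction = np.array([1.0, 1.0, 1.0])        # illustrative oblique direction (equal angles)
unit = direction / np.linalg.norm(direction)

F_axial, F_horizontal, F_vertical = F * unit
print(f"axial {F_axial:.1f} N, horizontal {F_horizontal:.1f} N, vertical {F_vertical:.1f} N")
```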

Keywords: Frequency response function, impact testing, modal analysis, oblique angle, oblique impact.
