Search results for: limitations of UI based test automation
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 35751

31281 The Effect of Exposure to High Noise Level on the Performance and Rate of Error in Manual Activities

Authors: Zahra Zamanian, Alireza Zamanian, Jafar Hasanzadeh

Abstract:

Background: Unwanted sound, one of the most significant physical factors in most production units, imposes numerous problems on industrial workers. Sound is an environmental factor that can cause physical as well as psychological damage and also affects individuals’ performance and productivity. Therefore, the present study aimed to determine the effect of noise exposure on human performance. Methods: The present study assessed the effect of noise on the performance of 50 students of Shiraz University of Medical Sciences (25 males and 25 females) at sound pressure levels of 70, 90, and 110 dB, using the two factors of physical features and different conditions of the sound pressure source, and applying the Two-Arm Coordination Test. Results: The results revealed no significant difference between male and female subjects, or between the different conditions of the sound pressure source, regarding the time needed to complete the task (p > 0.05). In addition, as the sound pressure increased, the time needed to complete the task increased as well. No significant difference was found between the performance at 70 and 90 dB. On the other hand, the performance at 110 dB was significantly different from the performance at 70 and 90 dB (p < 0.05 and p < 0.001, respectively). Conclusion: In general, as the sound pressure increases, performance decreases, which results in a considerable increase in the individuals’ rate of error.

Keywords: physical factors, two-arm coordination test, Shiraz University of Medical Sciences, noise

Procedia PDF Downloads 307
31280 Evaluation of the Efficiency of Intelligent Systems in Traffic Congestion Pricing Schemes in Urban Streets

Authors: Saeed Sayyad Hagh Shomar

Abstract:

Traffic congestion pricing, as one of the demand management strategies, imposes charges on network users and thereby helps reduce traffic congestion and environmental pollution such as air pollution. Despite the development of congestion pricing schemes in our country, the problems of traditional toll collection, drivers’ wasted time, and traffic delay are still widespread. Electronic toll collection, as a part of the intelligent transportation system, makes it possible to collect tolls without stopping cars or disrupting traffic. Despite the satisfactory outcomes of using intelligent systems in congestion pricing schemes, implementation costs and technological problems remain barriers to these schemes. In this research, a variety of electronic toll collection systems and their components are first introduced, and their functional usage is then discussed. Subsequently, by analyzing and comparing their barriers, limitations, and advantages, the selection criteria for intelligent systems are described. The results show that the choice of the best technology depends on various parameters; after examining them, it is concluded that, in the long run and provided the necessary conditions are met, DSRC technology can be employed as the main system in such schemes, with ANPR as its major backup.

Keywords: congestion pricing, electronic toll collection, intelligent systems, technology, traffic

Procedia PDF Downloads 612
31279 Video Based Ambient Smoke Detection by Detecting Directional Contrast Decrease

Authors: Omair Ghori, Anton Stadler, Stefan Wilk, Wolfgang Effelsberg

Abstract:

Fire-related incidents account for extensive loss of life and material damage, so quick and reliable detection of occurring fires has high real-world implications. Whereas a major research focus lies on the detection of outdoor fires, indoor camera-based fire detection is still an open issue. Cameras in combination with computer vision help to detect flames and smoke more quickly than conventional fire detectors. In this work, we present a computer vision-based smoke detection algorithm based on contrast changes and a multi-step classification. This work accelerates computer vision-based fire detection considerably in comparison with classical indoor fire detection.
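
The following is a minimal sketch of the contrast-decrease idea, not the authors' implementation: local contrast is measured per image block as the intensity standard deviation, and a block is flagged when its contrast keeps dropping over several consecutive frames. The block size and the DROP and PERSIST thresholds are illustrative assumptions.

```python
# Sketch only: block-wise contrast decrease as a smoke cue (assumed parameters).
import numpy as np

BLOCK = 16      # block size in pixels (assumption)
DROP = 0.7      # "decreasing" if contrast falls below 70% of the previous value (assumption)
PERSIST = 5     # consecutive decreasing frames required before flagging (assumption)

def block_contrast(gray):
    """Local contrast per block: standard deviation of pixel intensities."""
    h, w = gray.shape
    hb, wb = h // BLOCK, w // BLOCK
    blocks = gray[:hb * BLOCK, :wb * BLOCK].reshape(hb, BLOCK, wb, BLOCK)
    return blocks.std(axis=(1, 3))

def update(counters, prev_frame, curr_frame):
    """Track per-block runs of contrast decrease; counters starts as an int array
    of zeros with shape (hb, wb). Returns updated counters and a smoke mask."""
    c_prev = block_contrast(prev_frame)
    c_curr = block_contrast(curr_frame)
    decreasing = c_curr < DROP * np.maximum(c_prev, 1e-6)
    counters = np.where(decreasing, counters + 1, 0)
    return counters, counters >= PERSIST
```

Blocks flagged this way would then feed the multi-step classification stage described in the abstract.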

Keywords: contrast analysis, early fire detection, video smoke detection, video surveillance

Procedia PDF Downloads 449
31278 Experiences on the Application of Wiki-Based Coursework in a Fourth-Year Engineering Module

Authors: D. Hassell, D. De Focatiis

Abstract:

This paper presents work on the application of wiki-based coursework for a fourth-year engineering module delivered as part of both a MEng and a MSc programme in Chemical Engineering. The module was taught with an equivalent structure simultaneously on two separate campuses, one in the United Kingdom (UK) and one in Malaysia, and the results were compared. Student feedback was sought via questionnaires, with 45 respondents from the UK and 49 from Malaysia. Results include discussion of perceived difficulty; student enjoyment and experiences; differences between MEng and MSc students; and differences between cohorts on the two campuses. The response of students to the use of wiki-based coursework was found to vary with their experiences and background, with UK students being generally more positive about its application than those in Malaysia.

Keywords: engineering education, student differences, student learning, web based coursework

Procedia PDF Downloads 302
31277 Analysis of the Accuracy of Earth Movement with Drone Surveys

Authors: Raúl Pereda García, Julio Manuel de Luis Ruiz, Elena Castillo López, Rubén Pérez Álvarez, Felipe Piña García

Abstract:

New technologies for the capture of point clouds have advanced greatly in recent years. Their use has accordingly spread in geomatics, providing measurement solutions that have been popularized often without a detailed study of their accuracy. This research focuses on the viability of topographic works with drones carrying different sensors sensitive to the visible spectrum. The fundamentals were applied to a road located in Cantabria (Spain), where a platform extension and the repair of a riprap were under construction. A total of six flights were made over two months, all of them using GPS as part of the photogrammetric process, and the results were contrasted with those measured with a total station. The results show that the choice of the camera and the planning of the flight have an important impact on the accuracy. In fact, representations with a level of detail corresponding to a 1/1000 scale are admissible, depending on the existing vegetation, with better results obtained in the area of the riprap. This set of techniques is therefore suitable for the control of earthworks in road works, but with certain limitations, which are discussed in this paper.

Keywords: drone, earth movement control, global positioning system, surveying technology

Procedia PDF Downloads 189
31276 Comparative Study in Dentinal Tubuli Occlusion Using Bioglass and Copper-Bromide Laser

Authors: Sun Woo Lee, Tae Bum Lee, Yoon Hwa Park, Yoo Jeong Kim

Abstract:

Cervical dentinal hypersensitivity (CDH) affects 8-30% of adults and nearly 85% of perio-treated patients. Various treatment schemes have been applied for treating CDH, among them fluoride application, laser irradiation, and, recently, bioglass. The purpose of this study was to investigate the influence of bioglass, copper-bromide (Cu-Br) laser irradiation, and their combination on dentinal tubule occlusion as a potential treatment for CDH. Forty-five human dentin surfaces were organized into three equal groups: group A received Cu-Br laser only; group B received bioglass only; group C received bioglass followed by Cu-Br laser irradiation. Specimens were evaluated with regard to dentinal tubule occlusion under an environmental scanning electron microscope. Treatment modality significantly affected dentinal tubule occlusion (p < 0.001). Groups B and C scored higher dentinal tubule occlusion than group A. Binary logistic regression showed that bioglass application contributed significantly (p < 0.001) to dentinal tubule occlusion, compared with other variables. Under the conditions used herein and within the limitations of this study, bioglass application, alone or combined with Cu-Br laser irradiation, is a superior method for producing dentinal tubule occlusion, and may lead to an effective treatment modality for CDH.

Keywords: bioglass, Cu-Br laser, cervical dentinal hypersensitivity, dentinal tubule occlusion

Procedia PDF Downloads 358
31275 Microsimulation of Potential Crashes as a Road Safety Indicator

Authors: Vittorio Astarita, Giuseppe Guido, Vincenzo Pasquale Giofre, Alessandro Vitale

Abstract:

Traffic microsimulation has been used extensively to evaluate the consequences of different traffic planning and control policies in terms of travel time delays, queues, pollutant emissions, and other commonly measured performance indicators, while traffic safety has not been considered in common traffic microsimulation packages as a measure of performance for different traffic scenarios. Vehicle conflict techniques, introduced at intersections in the early traffic research carried out at the General Motors laboratory in the USA and in the Swedish traffic conflict manual, have been applied to vehicle trajectories simulated in microscopic traffic simulators. The concept is that microsimulation can be used as a basis for calculating the number of conflicts that define the safety level of a traffic scenario. This allows engineers to identify unsafe road traffic maneuvers and helps in finding the right countermeasures to improve safety. Unfortunately, the most commonly used indicators do not consider conflicts between single vehicles and roadside obstacles and barriers, even though a great number of vehicle crashes take place with roadside objects or obstacles; only some recently proposed indicators have tried to address this issue. This paper introduces a new procedure based on the simulation of potential crash events for the evaluation of safety levels in microsimulation traffic scenarios, which also takes into account potential crashes with roadside objects and barriers. The procedure can be used to define new conflict indicators. The proposed procedure generates, by random perturbation of vehicle trajectories, a set of potential crashes which can be evaluated accurately in terms of DeltaV, the energy of the impact, and/or the expected number of injuries or casualties. The procedure can also be applied to real trajectories, giving rise to new surrogate safety performance indicators, which can be considered “simulation-based”. The methodology and a specific safety performance indicator are described and applied to a simulated test traffic scenario. Results indicate that the procedure is able to evaluate safety levels both at the intersection level and in the presence of roadside obstacles, and it produces results expressed in the same unit of measure for both vehicle-to-vehicle and vehicle-to-roadside-object conflicts. The total energy per square meter of all generated crashes can be mapped; for the test network, a threshold is applied to highlight the most dangerous points. Without any detailed calibration of the microsimulation model and without any calibration of the parameters of the procedure (standard values have been used), it is possible to identify dangerous points. A preliminary sensitivity analysis has shown that the results do not depend on the energy thresholds or the other parameters of the procedure. This paper introduces this new procedure and its implementation as a software package that is able to assess road safety, also considering potential conflicts with roadside objects, and discusses some of the principles at the base of this specific model. The procedure can be applied on common microsimulation packages once vehicle trajectories and the positions of roadside barriers and obstacles are known. The procedure has many calibration parameters, and research efforts will have to be devoted to comparisons with real crash data in order to obtain the parameter values that give an accurate evaluation of the risk of any traffic scenario.
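
As an illustration of the perturbation step and the DeltaV/impact-energy scoring described above, here is a hedged sketch; the function and parameter names (masses, conflict radius, perturbation magnitude) are assumptions for illustration, not values from the paper. A roadside obstacle can be represented as a stationary "trajectory" with zero velocity.

```python
# Sketch (assumed names/values): perturb trajectories, detect potential crashes,
# and score each with DeltaV and kinetic impact energy.
import numpy as np

rng = np.random.default_rng(0)

def potential_crashes(traj_a, traj_b, m_a=1200.0, m_b=1500.0,
                      radius=2.0, sigma=0.5, n_draws=1000):
    """traj_*: arrays of shape (T, 4) with columns x, y, vx, vy (same time base).
    Returns a list of (delta_v, impact_energy_joules) over the random draws."""
    events = []
    for _ in range(n_draws):
        pa = traj_a[:, :2] + rng.normal(0.0, sigma, traj_a[:, :2].shape)
        pb = traj_b[:, :2] + rng.normal(0.0, sigma, traj_b[:, :2].shape)
        dist = np.linalg.norm(pa - pb, axis=1)
        close = dist < radius
        if close.any():
            t = int(np.argmax(close))                  # first contact instant
            dv = float(np.linalg.norm(traj_a[t, 2:] - traj_b[t, 2:]))
            mu = m_a * m_b / (m_a + m_b)               # reduced mass of the pair
            events.append((dv, 0.5 * mu * dv**2))      # energy of the impact
    return events
```

Summing the energies of all generated crashes per square meter and thresholding the result, as the abstract describes, yields the map of the most dangerous points.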

Keywords: road safety, traffic, traffic safety, traffic simulation

Procedia PDF Downloads 139
31274 Agent-Based Modeling Investigating Self-Organization in Open, Non-equilibrium Thermodynamic Systems

Authors: Georgi Y. Georgiev, Matthew Brouillet

Abstract:

This research applies the power of agent-based modeling to a pivotal question at the intersection of biology, computer science, physics, and complex systems theory: the self-organization processes in open, complex, non-equilibrium thermodynamic systems. Central to this investigation is the principle of Maximum Entropy Production (MEP). This principle suggests that such systems evolve toward states that optimize entropy production, leading to the formation of structured environments. It is hypothesized that, guided by the least action principle, open thermodynamic systems identify and follow the shortest paths to transmit energy and matter, resulting in maximal entropy production, internal structure formation, and a decrease in internal entropy. Concurrently, it is predicted that there will be an increase in system information, as more information is required to describe the developing structure. To test this, an agent-based model is developed simulating an ant colony's formation of a path between a food source and its nest. Utilizing the NetLogo software for modeling and Python for data analysis and visualization, self-organization is quantified by calculating the decrease in system entropy based on the potential states and the distribution of the ants within the simulated environment. External entropy production is also evaluated for information increase and efficiency improvements in the system's action. Simulations demonstrated that the system begins at maximal entropy, which decreases as the ants form paths over time. A range of system behaviors contingent upon the number of ants is observed. Notably, no path formation occurred with fewer than five ants, whereas clear paths were established by 200 ants, and saturation of path formation and entropy state was reached at populations exceeding 1000 ants. This analytical approach identified the inflection point marking the transition from disorder to order and computed the slope at this point. Combined with extrapolation to the final path entropy, these parameters yield important insights into the eventual entropy state of the system and the timeframe for its establishment, enabling the estimation of the self-organization rate. This study provides a novel perspective on the exploration of self-organization in thermodynamic systems, establishing a correlation between the internal entropy decrease rate and the external entropy production rate. Moreover, it presents a flexible framework for assessing the impact of external factors such as changes in world size, path obstacles, and friction. Overall, this research offers a robust, replicable model for studying self-organization processes in any open thermodynamic system. As such, it provides a foundation for further in-depth exploration of the complex behaviors of these systems and contributes to the development of more efficient self-organizing systems across various scientific fields.
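
To make the entropy computation above concrete, here is a minimal sketch, assuming a square world binned into grid cells; the Shannon entropy of the ants' spatial distribution falls as they converge from a uniform scatter onto a path (function and parameter names are assumptions):

```python
# Sketch: Shannon entropy (bits) of ant positions on a binned grid (assumed setup).
import numpy as np

def spatial_entropy(x, y, world_size=50.0, bins=25):
    """x, y: arrays of ant coordinates. Returns entropy of the occupancy distribution."""
    counts, _, _ = np.histogram2d(x, y, bins=bins,
                                  range=[[0.0, world_size], [0.0, world_size]])
    p = counts.ravel() / counts.sum()
    p = p[p > 0]                      # drop empty cells; 0*log(0) is taken as 0
    return float(-(p * np.log2(p)).sum())
```

Tracking this value per tick gives the internal-entropy curve whose inflection point and slope the study uses to estimate the self-organization rate.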

Keywords: complexity, self-organization, agent-based modeling, efficiency

Procedia PDF Downloads 71
31273 Technologies for Phosphorus Removal from Wastewater: Review

Authors: Thandie Veronicah Sima, Moatlhodi Wiseman Letshwenyo

Abstract:

Discharge of wastewater is one of the major sources of phosphorus entering streams, lakes, and other water bodies, causing undesired environmental problems such as eutrophication. This condition not only puts the ecosystem at risk but also causes severe economic damage. Stringent laws have been developed globally by different bodies to control the level of phosphorus concentrations entering receiving environments. In order to satisfy these constraints, a high degree of tertiary treatment, or at least a significant reduction of the phosphorus concentration, is obligatory. This comprehensive review summarizes phosphorus removal technologies, from the most commonly used conventional technologies, such as chemical precipitation through metal addition, membrane filtration, reverse osmosis, and enhanced biological phosphorus removal using the activated sludge system, to passive systems such as constructed wetlands and filtration systems. Trends, perspectives, and scientific procedures reported by different researchers are presented. The review critically evaluates the advantages and limitations of each technology. Enhancement of passive systems using reactive media, such as industrial wastes, to provide additional uptake through adsorption or precipitation is also discussed.

Keywords: adsorption, chemical precipitation, enhanced biological phosphorus removal, phosphorus removal

Procedia PDF Downloads 330
31272 Quasi-Solid-State Electrochromic Device Based on Poly(Methyl Methacrylate) (PMMA)/Succinonitrile Gel Polymer Electrolyte

Authors: Jen-Yuan Wang, Min-Chuan Wang, Der-Jun Jan

Abstract:

Polymer electrolytes can be classified into four major categories: solid polymer electrolytes (SPEs), gel polymer electrolytes (GPEs), polyelectrolytes, and composite polymer electrolytes. SPEs suffer from low ionic conductivity at room temperature, while the main problems of GPEs are poor thermal stability and mechanical properties. In this study, a GPE containing PMMA and succinonitrile is prepared to solve the problems mentioned above and applied to the assembly of a quasi-solid-state electrochromic device (ECD). In the polymer electrolyte, poly(methyl methacrylate) (PMMA) is the polymer matrix and propylene carbonate (PC) is used as the plasticizer. To enhance the mechanical properties of this GPE, succinonitrile (SN) is introduced as an additive. For the electrochromic materials, tungsten oxide (WO3) is used as the cathodic coloring film, fabricated by pulsed dc magnetron reactive sputtering. For the anodic coloring material, Prussian blue nanoparticles (PBNPs) are synthesized and coated on transparent Sn-doped indium oxide (ITO) glass. The thicknesses of the ITO, WO3, and PB films are 110, 170, and 200 nm, respectively. The size of the ECD is 5×5 cm². The effect of introducing SN into the GPE is discussed by observing the electrochromic behavior of the WO3-PB ECD. In addition, the composition ratio of PC to SN is investigated by measuring the ionic conductivity. The optimized ratio of PC to SN is 4:1, and the ionic conductivity under this condition is 6.34×10⁻⁵ S·cm⁻¹, which is higher than that of PMMA/PC (1.35×10⁻⁶ S·cm⁻¹) and PMMA/EC/PC (4.52×10⁻⁶ S·cm⁻¹). The quasi-solid-state ECD fabricated with the PMMA/SN-based GPE shows an optical contrast of ca. 53% at 690 nm. The optical transmittance of the ECD can be reversibly modulated from 72% (bleached) to 19% (darkened) by applying potentials of 1.5 and -2.2 V, respectively. During the durability test, the optical contrast of this ECD remains at 44.5% after 2400 cycles, which is 83% of the original value.

Keywords: electrochromism, tungsten oxide, Prussian blue, poly(methyl methacrylate), succinonitrile

Procedia PDF Downloads 304
31271 Applicability of Soybean as Bio-Catalyst in Calcite Precipitation Method for Soil Improvement

Authors: Heriansyah Putra, Erizal Erizal, Sutoyo Sutoyo, Hideaki Yasuhara

Abstract:

This paper discusses the possibility of using an organic waste material, i.e., soybean, as the bio-catalyst agent in the calcite precipitation method. Several combinations of soybean powder and jack bean extract are used as the bio-catalyst and mixed with a reagent composed of calcium chloride and urea. Their productivity in promoting calcite crystals is evaluated through a transparent test-tube experiment. The morphological and mineralogical aspects of the precipitated calcite are also investigated using scanning electron microscopy (SEM) and X-ray diffraction (XRD), respectively. The applicability of this material to improve the engineering properties of soil is examined using the direct shear test and the unconfined compressive test. The results of this study show that the utilization of soybean powder has a significant effect on soil strength. In addition, the use of soybean powder as a substitute for the urease enzyme also increases the efficacy of calcite crystals as the binder material: a low calcite content promotes high soil strength, and a strength of 300 kPa is obtained in the presence of 2% calcite content within the soil. The results elucidate that substituting soybean for jack bean extract is a potentially valuable alternative for improving the applicability of the calcite precipitation method as a soil improvement technique.

Keywords: calcite precipitation, jack bean, soil improvement, soybean

Procedia PDF Downloads 131
31270 Determination and QSAR Modelling of Partitioning Coefficients for Some Xenobiotics in Soils and Sediments

Authors: Alaa El-Din Rezk

Abstract:

For organic xenobiotics, sorption to Aldrich humic acid is a key process controlling their mobility, bioavailability, toxicity, and fate in the soil. Hydrophobic organic compounds possessing either acidic or basic groups can be partially ionized (deprotonated or protonated) within the range of natural soil pH. For neutral and ionogenic xenobiotics (neutral compounds, acids, and bases), sorption coefficients normalized to organic carbon content, Koc, have been measured at different pH values. To this end, the batch equilibrium technique has been used, employing SPME combined with GC-MSD as the analytical tool. For most ionogenic compounds, sorption is affected by both pH and pKa and can be explained through the Henderson-Hasselbalch equation. The results demonstrate that, when assessing the environmental fate of ionogenic compounds, their pKa and speciation under natural conditions should be taken into account. A new model has been developed to predict the relationship between log Koc and pH, with full statistical evaluation against other existing predictive models. Neutral solutes displayed a good fit with the classical model using log Kow as the predictor of log Koc, whereas acidic and basic compounds displayed a good fit with the LSER approach and the newly proposed model. Measurement limitations of the batch technique and SPME-GC-MSD were found with ionic compounds.
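
The speciation reasoning invoked above can be written out explicitly. For a monoprotic acid, the Henderson-Hasselbalch equation gives the neutral fraction at a given pH, and a common (illustrative, not necessarily the paper's fitted) pH-dependent form of Koc weights the neutral and ionized species:

```latex
\mathrm{pH} = \mathrm{p}K_a + \log_{10}\frac{[\mathrm{A}^-]}{[\mathrm{HA}]},
\qquad
f_n = \frac{1}{1 + 10^{\,\mathrm{pH} - \mathrm{p}K_a}}
\quad\text{(neutral fraction)},
\qquad
K_{oc}(\mathrm{pH}) = f_n\, K_{oc,n} + (1 - f_n)\, K_{oc,i},
```

where $K_{oc,n}$ and $K_{oc,i}$ denote the sorption coefficients of the neutral and ionized species, respectively.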

Keywords: humic acid, log Koc, pH, pKa, SPME-GCMSD

Procedia PDF Downloads 267
31269 A Two-Pronged Truncated Deferred Sampling Plan for Log-Logistic Distribution

Authors: Braimah Joseph Odunayo, Jiju Gillariose

Abstract:

This paper aims at developing a sampling plan that uses information from preceding and succeeding lots for lot disposition, under the assumption that the lifetime of the product follows a log-logistic distribution. A Two-Pronged Truncated Deferred Sampling Plan (TTDSP) for the log-logistic distribution is proposed for testing truncated at a precise time. The best possible sample sizes are obtained for given values of the Maximum Allowable Percent Defective (MAPD), Test Suspension Ratio (TSR), and acceptance number (c). A formula for calculating the operating characteristics of the proposed plan is also developed. The operating characteristics and mean-ratio values were used to measure the performance of the plan. The findings of the study show that the log-logistic distribution has a decreasing failure rate; as the mean-life ratio increases, the failure rate reduces; and the sample size increases as the acceptance number, test suspension ratio, and maximum allowable percent defective increase. The study concludes that the minimum sample sizes were smaller, which makes the plan more economical to adopt when production cost and time are high and the test is destructive.
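
For orientation, the operating characteristic of a time-truncated single-sampling plan has the standard binomial form below; this sketch uses the log-logistic lifetime CDF with assumed parameter values and omits the deferred-lot conditions that the two-pronged plan adds on neighbouring lots:

```python
# Sketch: OC (lot acceptance probability) for a time-truncated plan under a
# log-logistic lifetime; alpha/beta/t values are illustrative assumptions.
from math import comb

def loglogistic_cdf(t, alpha, beta):
    """F(t) for the log-logistic distribution (scale alpha, shape beta)."""
    return 1.0 / (1.0 + (t / alpha) ** (-beta))

def accept_probability(n, c, t_trunc, alpha, beta):
    """P(accept) = P(at most c of the n items fail before the truncation time)."""
    p = loglogistic_cdf(t_trunc, alpha, beta)     # per-item failure probability
    return sum(comb(n, k) * p**k * (1 - p)**(n - k) for k in range(c + 1))

# Example: 20 items on test truncated at t = 500, accept the lot on <= 2 failures.
print(accept_probability(n=20, c=2, t_trunc=500.0, alpha=1000.0, beta=2.0))
```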

Keywords: consumer's risk, mean life, minimum sample size, operating characteristics, producer's risk

Procedia PDF Downloads 146
31268 Automatic Teller Machine System Security by Using Mobile SMS Code

Authors: Husnain Mushtaq, Mary Anjum, Muhammad Aleem

Abstract:

The main objective of this paper is to develop high security in Automated Teller Machine (ATM) systems. In this system, banks collect customers' mobile numbers and then send a code to the registered number. In most countries, existing ATMs use magnetic card readers: the customer is identified by inserting an ATM card with a magnetic stripe that holds unique information such as the card number and some security limitations. By entering a personal identification number, the customer is first authenticated and can then access the bank account to withdraw cash or use other services provided by the bank. Card fraud is another problem: once a user's bank card is lost and the password is stolen, or a criminal simply steals a customer's card and PIN, all cash can be withdrawn in a very short time, causing great financial loss to the customer; this type of fraud has increased worldwide. To resolve this problem, we provide a solution that combines a mobile SMS code with the ATM PIN code in order to improve the security of customers using the ATM system and their confidence in banking.
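
A minimal sketch of the proposed two-factor flow, assuming a hashed PIN store, a stubbed SMS gateway, and a 2-minute code lifetime (all illustrative choices, not the paper's specification):

```python
# Sketch: ATM PIN + one-time SMS code verification (assumed flow and parameters).
import hashlib
import secrets
import time

OTP_LIFETIME_S = 120  # validity window for the SMS code (assumption)

def hash_pin(pin, salt):
    """Never store raw PINs; derive a slow hash instead."""
    return hashlib.pbkdf2_hmac("sha256", pin.encode(), salt, 100_000)

def issue_sms_code():
    """Generate a 6-digit one-time code; actually sending it via SMS is stubbed."""
    code = f"{secrets.randbelow(10**6):06d}"
    return code, time.time() + OTP_LIFETIME_S

def verify(pin, salt, stored_hash, entered_code, issued_code, expires_at):
    """Both factors must match, and the SMS code must not have expired."""
    pin_ok = secrets.compare_digest(hash_pin(pin, salt), stored_hash)
    code_ok = secrets.compare_digest(entered_code, issued_code)
    return pin_ok and code_ok and time.time() < expires_at
```

Even if a card and PIN are stolen, a withdrawal would additionally require the one-time code sent to the registered mobile number.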

Keywords: PIN, inquiry, biometric, magnetic stripe, iris recognition, face recognition

Procedia PDF Downloads 370
31267 Performance Demonstration of Extendable NSPO Space-Borne GPS Receiver

Authors: Hung-Yuan Chang, Wen-Lung Chiang, Kuo-Liang Wu, Chen-Tsung Lin

Abstract:

The National Space Organization (NSPO) completed in 2014 the development of a space-borne GPS receiver, including design, manufacture, comprehensive functional testing, environmental qualification testing, and so on. The main performance figures of this receiver include 8-meter positioning accuracy, 0.05 m/s velocity accuracy, a cold start time of at most 90 seconds, and operation in high-dynamic scenarios of up to 15 g. The receiver will be integrated into the autonomous FORMOSAT-7 NSPO-built satellite scheduled to be launched in 2019 to execute pre-defined scientific missions. The flight model of this receiver, manufactured in early 2015, will undergo comprehensive functional tests and environmental acceptance tests, which are expected to be completed by the end of 2015. The space-borne GPS receiver is a pure software design in which all GPS baseband signal processing is executed by a digital signal processor (DSP), with currently only 50% of its throughput in use. In response to the booming global navigation satellite systems, NSPO will gradually expand this receiver into a multi-mode, multi-band, high-precision navigation receiver, and even a science payload, such as a reflectometry receiver for a global navigation satellite system. The fundamental purpose of this extension study is to port software algorithms with reusable code and a large computational load, such as signal acquisition and correlation, to an FPGA, while the processor remains responsible for operational control, navigation solution, orbit propagation, and so on. Because FPGA technology develops and evolves rapidly, the new system architecture upgraded via an FPGA should be able to achieve the goal of being a multi-mode, multi-band, high-precision navigation receiver or a scientific receiver. Finally, the test results show that the new system architecture not only retains the original overall performance but also sets aside more resources for future expansion. This paper explains the detailed DSP/FPGA architecture, its development, test results, and the goals of the next development stage of this receiver.

Keywords: space-borne, GPS receiver, DSP, FPGA, multi-mode multi-band

Procedia PDF Downloads 374
31266 Approximate-Based Estimation of Single Event Upset Effect on Static Random-Access Memory-Based Field-Programmable Gate Arrays

Authors: Mahsa Mousavi, Hamid Reza Pourshaghaghi, Mohammad Tahghighi, Henk Corporaal

Abstract:

Recently, Static Random-Access Memory-based (SRAM-based) Field-Programmable Gate Arrays (FPGAs) have been widely used in aeronautics and space systems, where high dependability is demanded and considered a mandatory requirement. Since the design's circuit is stored in configuration memory in SRAM-based FPGAs, they are very sensitive to Single Event Upsets (SEUs). In addition, the adverse effects of SEUs on electronics used in space are much greater than on Earth. Thus, developing fault-tolerance techniques plays a crucial role for the use of SRAM-based FPGAs in space. However, fault-tolerance techniques introduce additional penalties in system parameters, e.g., area, power, performance, and design time. In this paper, an accurate estimation of configuration memory vulnerability to SEUs is proposed for approximate-tolerant applications. This vulnerability estimation is highly required for the compromise between the overhead introduced by fault-tolerance techniques and system robustness. We study applications in which the exact final output value is not necessarily always a concern, meaning that some of the SEU-induced changes in output values are negligible. We therefore define and propose an Approximate-based Configuration Memory Vulnerability Factor (ACMVF) estimation to avoid overestimating configuration memory vulnerability to SEUs. We assess the vulnerability of configuration memory by injecting SEUs into configuration memory bits and comparing the output values of a given circuit in the presence of SEUs with the expected correct output. In contrast to conventional vulnerability factor calculation methods, which count any deviation from the expected value as a failure, in our proposed method a threshold margin is considered, depending on the use-case application. Given the proposed threshold margin in our model, a failure occurs only when the difference between the erroneous output value and the expected output value is more than this margin. The ACMVF is subsequently calculated as the ratio of failures to the total number of SEU injections. A test bench for emulating SEUs and calculating ACMVF is implemented on a Zynq-7000 FPGA platform. This system makes use of the Single Event Mitigation (SEM) IP core to inject SEUs into the configuration memory bits of the target design implemented in the Zynq-7000 FPGA. Experimental results for a 32-bit adder show that, when 1% to 10% deviation from the correct output is considered acceptable, the number of counted failures is reduced by 41% to 59% compared with the number of failures counted by the conventional vulnerability factor calculation. This means that the estimation accuracy of the configuration memory vulnerability to SEUs is improved by up to 58% in the case that 10% deviation is acceptable in the output results. Note that less than 10% deviation in an addition result is reasonably tolerable for many applications in the approximate computing domain, such as Convolutional Neural Networks (CNNs).
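
The ACMVF computation reduces to the ratio described above. The following is a simplified software model for illustration; the actual campaign uses the SEM IP core to flip configuration-memory bits on the Zynq-7000, and run_circuit here merely stands in for executing the design under test:

```python
# Sketch: ACMVF = failures / injections, where a failure is a deviation from the
# golden output larger than the tolerated margin (simplified software model).
import random

def acmvf(run_circuit, config_bits, golden, margin_frac=0.10, n_injections=10_000):
    """run_circuit(bits) -> numeric output; config_bits: sequence of 0/1 values."""
    failures = 0
    for _ in range(n_injections):
        bits = list(config_bits)
        bits[random.randrange(len(bits))] ^= 1      # inject one SEU (single bit flip)
        out = run_circuit(bits)
        if abs(out - golden) > margin_frac * abs(golden):
            failures += 1                           # beyond the tolerated deviation
    return failures / n_injections
```

Setting margin_frac to zero recovers the conventional vulnerability factor, which counts every deviation as a failure.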

Keywords: fault tolerance, FPGA, single event upset, approximate computing

Procedia PDF Downloads 200
31265 Comparing Stability Index MAPping (SINMAP) Landslide Susceptibility Models in the Río La Carbonera, Southeast Flank of Pico de Orizaba Volcano, Mexico

Authors: Gabriel Legorreta Paulin, Marcus I. Bursik, Lilia Arana Salinas, Fernando Aceves Quesada

Abstract:

In volcanic environments, landslides and debris flows occur continually along the stream systems of large stratovolcanoes. This is the case on Pico de Orizaba volcano, the highest mountain in Mexico. The volcano has great potential to impact and damage human settlements and economic activities through landslides. People living along the lower valleys of Pico de Orizaba volcano are under continuous hazard from the coalescence of upstream landslide sediments, which increases the destructive power of debris flows. These debris flows not only produce floods but also cause the loss of lives and property. Despite the importance of assessing such processes, there are few landslide inventory maps and landslide susceptibility assessments; as a result, no assessment of landslide susceptibility models has been conducted in Mexico to evaluate their advantages and disadvantages. In this study, a comprehensive assessment of landslide susceptibility models using GIS technology is carried out on the SE flank of Pico de Orizaba volcano. A detailed multi-temporal landslide inventory map of the watershed is used as the framework for the quantitative comparison of two landslide susceptibility maps. The maps are created with the Stability Index MAPping (SINMAP) model, 1) using default geotechnical parameters and 2) using the geotechnical properties of volcanic soils obtained in the field. SINMAP combines the factor of safety derived from the infinite slope stability model with the theory of a hydrologic model to produce the susceptibility map. It has been claimed that SINMAP analysis is reasonably successful in defining areas that intuitively appear to be susceptible to landsliding in regions with sparse information. The resulting susceptibility maps are validated by comparing them with the inventory map under the LOGISNET system, which provides tools for comparison using a histogram and a contingency table. The results of the experiment establish how the individual models predict landslide locations, along with their advantages and limitations. The results also show that, although the model tends to improve with the use of calibrated field data, the landslide susceptibility map does not perfectly represent existing landslides.
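
For reference, the factor of safety that SINMAP derives from the infinite slope stability model is commonly written as follows (shown for orientation; the parameter values used in the study may differ):

```latex
FS = \frac{C + \cos\theta\,\bigl[1 - w\,r\bigr]\tan\phi}{\sin\theta},
\qquad
w = \min\!\left(\frac{R\,a}{T\,\sin\theta},\, 1\right),
\qquad
r = \frac{\rho_w}{\rho_s},
```

where $\theta$ is the slope angle, $\phi$ the soil friction angle, $C$ the dimensionless combined cohesion, $w$ the relative wetness, $R/T$ the recharge-to-transmissivity ratio, $a$ the specific catchment area, and $\rho_w/\rho_s$ the ratio of water to soil density.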

Keywords: GIS, landslide, modeling, LOGISNET, SINMAP

Procedia PDF Downloads 320
31264 Textile-Based Sensing System for Sleep Apnea Detection

Authors: Mary S. Ruppert-Stroescu, Minh Pham, Bruce Benjamin

Abstract:

Sleep apnea is a condition in which a person repeatedly stops breathing during sleep; it can lead to cardiovascular disease, hypertension, and stroke. In the United States, approximately forty percent of overnight sleep apnea detection tests are cancelled. The purpose of this study was to develop a textile-based sensing system that acquires biometric signals relevant to cardiovascular health, transmits them wirelessly to a computer, and quantitatively assesses the signals for sleep apnea detection. Patient interviews, a literature review, and a market analysis defined the need for a device that integrates ubiquitously into the patient's lifestyle. A multi-disciplinary research team of biomedical scientists, apparel designers, and computer engineers collaborated to design a textile-based sensing system that gathers EKG, SpO2, and respiration signals and wirelessly transmits them to a computer in real time. The electronic components were assembled from existing hardware, the Health Kit, which came pre-set with EKG and SpO2 sensors. The respiration belt was purchased separately, and its electronics were built and integrated into the Health Kit motherboard. Analog ECG signals were amplified and transmitted to the Arduino™ board, where the signal was converted from analog to digital. Using textile electrodes, ECG lead II, which reflects the electrical activity of the heart, was collected. Signals were collected with the subject in a sitting position at a sampling rate of 250 Hz. Because sleep apnea most often occurs in people with obese body types, prototypes were developed in men's sizes medium, XL, and XXL. To test user acceptance and comfort, wear tests were performed on 12 subjects. Results of the wear tests indicate that the knit fabric and t-shirt-like design were acceptable from both lifestyle and comfort perspectives. The airflow and respiration sensors return good signals regardless of movement intensity. Future work includes reconfiguring the hardware to a smaller size, developing the same type of garment for the female body, and further enhancing signal quality.

Keywords: sleep apnea, sensors, electronic textiles, wearables

Procedia PDF Downloads 277
31263 Seismic Response of Structure Using a Three Degree of Freedom Shake Table

Authors: Ketan N. Bajad, Manisha V. Waghmare

Abstract:

Earthquakes are the biggest threat to civil engineering structures, costing billions of dollars and thousands of lives around the world every year. Various experimental techniques, such as the pseudo-dynamic test (a nonlinear structural dynamic technique), the real-time pseudo-dynamic test, and the shake table test, can be employed to verify the seismic performance of structures. A shake table is a device used for shaking structural models or building components mounted on it; it simulates a seismic event using existing seismic data, closely reproducing earthquake inputs. This paper deals with the use of the shake table test method to check the response of a structure subjected to an earthquake. The main types of shake table are the vertical shake table, the horizontal shake table, the servo-hydraulic shake table, and the servo-electric shake table. The goal of this experiment is to perform seismic analysis of a civil engineering structure with the help of a three-degree-of-freedom (i.e., X, Y, and Z directions) shake table. A three-DOF shake table is a useful experimental apparatus, as it reproduces a desired real-time acceleration signal for evaluating and assessing the seismic performance of a structure. This study proceeds with the design and erection of a 3-DOF shake table by a trial-and-error method. The table is designed to have a capacity of up to 981 newtons. Further, to study the seismic response of a steel industrial building, a proportionately scaled-down model is fabricated and tested on the shake table. An accelerometer mounted on the model is used for recording the data. The experimental results obtained are further validated against results obtained from software. It is found that the model can be used to determine how the structure behaves in response to an applied earthquake motion, but it cannot be used for direct numerical conclusions (such as stiffness or deflection values), as many uncertainties are involved in scaling a small-scale model. The model shows the modal forms and gives rough deflection values. The experimental results demonstrate the shake table to be the most effective of the methods available for the seismic assessment of structures.

Keywords: accelerometer, three degree of freedom shake table, seismic analysis, steel industrial shed

Procedia PDF Downloads 149
31262 Examining the Attitudes of Pre-School Teachers towards Values Education in Terms of Gender, School Type, Professional Seniority and Location

Authors: Hatice Karakoyun, Mustafa Akdag

Abstract:

This study examines the attitudes of pre-school teachers towards values education. It was conducted as a general survey model. The study group comprised 108 pre-school teachers working in Diyarbakır, Turkey. The Values Education Attitude Scale (VEAS), developed by Yaşaroğlu (2014), was used. To analyze the sociodemographic structure of the data, percentage and frequency values were examined. The Kolmogorov-Smirnov test was used to determine whether the data were normally distributed. During data analysis, the Kolmogorov-Smirnov test and histograms of the normal curve were examined to determine which statistical analyses should be applied to the scale, and it was found that the distribution was not normal. Thus, the Mann-Whitney U test, a nonparametric statistical analysis technique, was used to test differences in the scale scores with respect to the independent variables. According to the analyses, pre-school teachers' attitudes toward values education are positive. The item with the highest average indicates that pre-school teachers think values education is very important for students' and children's futures. The independent variables (gender, seniority, age group, education, school type, and school location) appear to have no effect on the attitude scores of the pre-school teachers who participated in the study.

Keywords: attitude scale, pedagogy, pre-school teacher, values education

Procedia PDF Downloads 250
31261 Experimental Partial Discharge Localization for Internal Short Circuits of Transformer Windings

Authors: Jalal M. Abdallah

Abstract:

This paper presents experimental studies carried out on a three-phase transformer to investigate and develop transformer models that help in testing procedures and in describing and evaluating the transformer's dielectric condition, using methods such as partial discharge (PD) localization in the windings. The measurements are based on transfer function methods applied to transformer windings by frequency response analysis (FRA). A number of test conditions were applied to obtain the sensitivity of the frequency responses of a transformer to different types of faults simulated in a particular phase. The frequency responses were analyzed for the sensitivity of the different test conditions in detecting and identifying the onset of small faults, which are sources of PD. In more detail, the aim is to explain the applicability and sensitivity of advanced PD measurements for small short circuits and their localization. The experimental results presented in the paper will help in understanding the sensitivity of FRA measurements in detecting various types of internal winding short circuits in the transformer.

Keywords: frequency response analysis (FRA), measurements, transfer function, transformer

Procedia PDF Downloads 284
31260 A Fractional Derivative Model to Quantify Non-Darcy Flow in Porous and Fractured Media

Authors: Golden J. Zhang, Dongbao Zhou

Abstract:

Darcy’s law is the fundamental theory in fluid dynamics and engineering applications. Although Darcy linearity was found to be valid for slow, viscous flow, non-linear and non-Darcian flow has been well documented at both small and large flow velocities. Various classical models have been proposed and widely used to quantify non-Darcian flow, including the well-known Forchheimer, Izbash, and Swartzendruber models. Applications, however, revealed limitations of these models. Here we propose a general model built upon the Caputo fractional derivative to quantify non-Darcian flow for various flow regimes (laminar to turbulent). Real-world applications and model comparisons showed that the new fractional-derivative model, which extends the fractional model proposed recently by Zhou and Yang (2018), can capture non-Darcian flow at the relatively small velocities found in low-permeability deposits and the relatively high velocities found in high-permeability sand. A scale effect was also identified for non-Darcian flow in fractured rocks. Therefore, fractional calculus may provide an efficient tool for improving classical models that quantify fluid dynamics in aquatic environments.
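
For context, the Caputo fractional derivative (standard definition for orders between 0 and 1) and the classical non-Darcian laws that a fractional flux-gradient relation generalizes are:

```latex
{}^{C}D_t^{\alpha} f(t)
  = \frac{1}{\Gamma(1-\alpha)} \int_0^t \frac{f'(\tau)}{(t-\tau)^{\alpha}}\, d\tau,
  \qquad 0 < \alpha < 1,
```
```latex
\text{Darcy: } J = \frac{q}{K},
\qquad
\text{Forchheimer: } J = a\,q + b\,q^{2},
\qquad
\text{Izbash: } J = c\,q^{m},
```

where $J$ is the hydraulic gradient and $q$ the specific discharge. On this illustrative reading, the proposed model replaces the integer-order flux-gradient relation with a Caputo-derivative form, so that the order $\alpha$ tunes the law across flow regimes.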

Keywords: fractional derivative, Darcy's law, non-Darcian flow, fluid dynamics

Procedia PDF Downloads 130
31259 Investigations on the Application of Avalanche Simulations: A Survey Conducted among Avalanche Experts

Authors: Korbinian Schmidtner, Rudolf Sailer, Perry Bartelt, Wolfgang Fellin, Jan-Thomas Fischer, Matthias Granig

Abstract:

This study focuses on the evaluation of snow avalanche simulations, based on a survey carried out among avalanche experts. In recent decades, the application of avalanche simulation tools has gained recognition within the realm of hazard management. Traditionally, avalanche runout models were used to predict extreme avalanche runout and prepare avalanche maps. This has changed rather dramatically with the application of numerical models. For safety applications such as road safety, simulation tools are now being coupled with real-time meteorological measurements to predict frequent avalanche hazard. That places new demands on model accuracy and requires the simulation of physical processes that previously could be ignored. These simulation tools are based on a deterministic description of the avalanche movement, allowing certain quantities of the avalanche flow (e.g., pressure, velocities, flow heights, runout lengths) to be predicted. Because of the highly variable regimes of flowing snow, no uniform rheological law describing the motion of an avalanche is known; therefore, analogies are drawn to the fluid dynamical laws of other materials. To transfer these constitutive laws to snow flows, certain assumptions and adjustments have to be made. Beyond these limitations, there are high uncertainties regarding the initial and boundary conditions. Further challenges arise when implementing the underlying flow model equations in an algorithm executable by a computer. This implementation is constrained by the choice of adequate numerical methods and their computational feasibility; hence, model development is compelled to introduce further simplifications and the related uncertainties. In the light of these issues, many questions arise about avalanche simulations: their assets and drawbacks, the potential for improvement, and their application in practice. To address these questions, a survey has been conducted among experts in the field of avalanche science (e.g., researchers, practitioners, engineers) from various countries. In the questionnaire, special attention is paid to the experts' opinions regarding the influence of certain variables on the simulation result, their uncertainty, and the reliability of the results. Furthermore, the degree to which a simulation result influences decision making in a hazard assessment was tested. A discrepancy was found between the large uncertainty of the simulation input parameters and the comparatively high reliability attributed to the results. This contradiction can be explained by taking into account how the experts employ the simulations. The credibility of the simulations is the result of a rather thorough simulation study, in which different assumptions are tested and the results of different flow models are compared, along with the use of supplemental data such as chronicles, field observations, and silent witnesses, which are regarded as essential for the hazard assessment and for sanctioning simulation results. As the importance of avalanche simulations within hazard management grows along with their further development, studies focusing on modeling practice could contribute to a better understanding of how knowledge of the avalanche process can be gained by running simulations.

Keywords: expert interview, hazard management, modeling, simulation, snow avalanche

Procedia PDF Downloads 330
31258 Identification of Failures Occurring on a System on Chip Exposed to a Neutron Beam for Safety Applications

Authors: S. Thomet, S. De-Paoli, F. Ghaffari, J. M. Daveau, P. Roche, O. Romain

Abstract:

In this paper, we present a hardware module dedicated to understanding the fail reason of a System on Chip (SoC) exposed to a particle beam. The impact of Single-Event Effects (SEEs) on processor-based SoCs is a concern that has increased in the past decade, particularly for terrestrial applications with increasing automotive safety requirements, as well as in the consumer and industrial domains. An SEE created by the impact of a particle on an SoC may have consequences that lead to instability or crashes. Specific hardening techniques for hardware and software have been developed to make such systems more reliable. The SoC is then qualified using cosmic ray Accelerated Soft-Error Rate (ASER) testing to ensure that the Soft-Error Rate (SER) remains within mission profiles. Understanding where errors occur is another challenge, because of the complexity of the operations performed in an SoC. Common techniques to monitor an SoC running under a beam are based on non-intrusive debug, consisting of recording the program counter and doing some consistency checking on the fly. To detect and understand SEEs, we have developed a module embedded within the SoC that provides support for recording probes, hardware watchpoints, and a memory-mapped register bank dedicated to software usage. To identify CPU failure modes and the most important resources to probe, we carried out a fault-injection campaign on the RTL model of the SoC. Probes are placed on generic CPU registers and bus accesses; they highlight the propagation of errors and allow the failure modes to be identified. Typical resulting errors are bit-flips in resources creating bad addresses, illegal instructions, longer-than-expected loops, or incorrect bus accesses. Although our module is processor-agnostic, it has been interfaced to a RISC-V core by probing some of the processor registers. Probes are recorded in a ring buffer, and associated hardware watchpoints allow some control, such as starting or stopping event recording or halting the processor. Finally, the module also provides a bank of registers where the firmware running on the SoC can log information; typical usage is the recording of operating system context switches. The module is connected to a dedicated debug bus and is interfaced to a remote controller via a debugger link. Thus, a remote controller can interact with the monitoring module without any intrusiveness on the SoC. Moreover, in case of CPU unresponsiveness or a system-bus stall, the recorded information can still be recovered, providing the fail reason. A preliminary version of the module has been integrated into a test chip currently being manufactured at ST in 28-nm FDSOI technology. The module has been triplicated to provide reliable information on the SoC behavior. As the primary application domains are automotive and safety, the efficiency of the module will be evaluated by exposing the test chip to a fast-neutron beam by the end of the year. In the meantime, it will be tested with alpha particles and electromagnetic fault injection (EMFI). We will report in the paper on fault-injection results as well as irradiation results.

Keywords: fault injection, SoC fail reason, SoC soft error rate, terrestrial application

Procedia PDF Downloads 233
31257 Effectiveness of Self-Learning Module on the Academic Performance of Students in Statistics and Probability

Authors: Aneia Rajiel Busmente, Renato Gunio Jr., Jazin Mautante, Denise Joy Mendoza, Raymond Benedict Tagorio, Gabriel Uy, Natalie Quinn Valenzuela, Ma. Elayza Villa, Francine Yezha Vizcarra, Sofia Madelle Yapan, Eugene Kurt Yboa

Abstract:

COVID-19’s rapid spread caused a dramatic change in the nation, especially in the educational system. The Department of Education was forced to adopt a practical learning platform that does not neglect health: printed modular distance learning. The Philippines' K-12 curriculum includes Statistics and Probability as one of the key courses, as it offers students the knowledge to evaluate and comprehend data. However, students have difficulty understanding the concepts of Statistics and Probability related to the normal distribution, and the Self-Learning Module on the normal distribution created by the Department of Education has several problems, including too many activities, unclear illustrations, and insufficient examples of concepts, which make it difficult for learners to accomplish the module. The purpose of this study is to determine the effectiveness of the self-learning module on the academic performance of students in the subject Statistics and Probability; it will also explore students' perception of the quality of the created Self-Learning Module in Statistics and Probability. Despite the availability of Self-Learning Modules in Statistics and Probability in the Philippines, there is still little literature discussing their effectiveness in improving the performance of Senior High School students in Statistics and Probability. In this study, a Self-Learning Module on the normal distribution is evaluated using a quasi-experimental design. Grade 11 STEM students from National University's Nazareth School will be the study's participants, chosen by purposive sampling. Google Forms will be utilized to reach at least 100 Grade 11 STEM students. The research instrument consists of a 20-item pre- and post-test to assess participants' knowledge and performance regarding the normal distribution, and a Likert scale survey to evaluate how the students perceived the self-learning module. The pre-test, post-test, and Likert scale survey will be utilized to gather data, with Jeffreys's Amazing Statistics Program (JASP) software being used for analysis.

Keywords: self-learning module, academic performance, statistics and probability, normal distribution

Procedia PDF Downloads 121
31256 Innovative Tool for Improving Teaching and Learning

Authors: Izharul Haq

Abstract:

Everyone aspires to a quality education. The biggest stakeholders are students, who labor for years acquiring the knowledge and skills that prepare them for their careers. Parents spend a fortune on their children's education. Companies spend billions of dollars to enhance standards by developing new education products and services. Quality education is the golden key to long-lasting prosperity for the individual and the nation. But unfortunately, education standards are continuously deteriorating, and this has become a global phenomenon. Teaching is often described as a 'popularity contest', and teachers who are popular with students are often those who compromise teaching to appease them. Such teachers also 'teach to the test', ensuring high test scores, and hence receive good student ratings. Teachers who are conscientious, rigorous, and thorough often fare poorly in such appraisals. Government and private organizations are spending billions of dollars trying to capture the characteristics of a good teacher, but the results are still vague and inconclusive: at present there is no objective way to measure teaching effectiveness. In this paper we present an innovative method to objectively measure teaching effectiveness using a new teaching tool, TSquare. The TSquare tool used in the study is practical, easy to use, cost-effective, and requires no special equipment to implement; hence it has global appeal, for poor and rich countries alike.

Keywords: measuring teaching effectiveness, quality in education, student learning, teaching styles

Procedia PDF Downloads 301
31255 Toward a Characteristic Optimal Power Flow Model for Temporal Constraints

Authors: Zongjie Wang, Zhizhong Guo

Abstract:

While the regular optimal power flow model focuses on a single time scan, the optimization of power systems is typically intended for a time duration with respect to a desired objective function. In this paper, a temporal optimal power flow model for a time period is proposed. To reduce the computational burden of calculating the temporal optimal power flow, a characteristic optimal power flow model is proposed, which employs different characteristic load patterns to represent the objective function and security constraints. A numerical method based on the interior point method is also proposed for solving the characteristic optimal power flow model. Both the temporal optimal power flow model and the characteristic optimal power flow model can improve the system's desired objective function over the entire time period. Numerical studies are conducted on the IEEE 14- and 118-bus test systems to demonstrate the effectiveness of the proposed characteristic optimal power flow model.
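
In generic form, the temporal OPF optimizes the objective accumulated over the whole period, subject to per-period network constraints; the characteristic model replaces the full set of periods with a few characteristic load patterns (a sketch of the standard formulation, not the paper's exact notation):

```latex
\min_{\{u_t\}} \; \sum_{t=1}^{T} f(x_t, u_t)
\quad \text{s.t.} \quad
g(x_t, u_t) = 0, \qquad
h(x_t, u_t) \le 0, \qquad t = 1, \dots, T,
```

where $x_t$ and $u_t$ are the state and control variables for period (or load pattern) $t$, $g$ collects the power flow equations, and $h$ the security limits such as voltage and line-flow bounds.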

Keywords: optimal power flow, time period, security, economy

Procedia PDF Downloads 454
31254 An Exploration of the Association between Physical Activity and Academic Performance in Internship Medical Students

Authors: Ali Ashraf, Ghazaleh Aghaee, Sedigheh Samimian, Mohaya Farzin

Abstract:

Objectives: Previous studies have indicated the positive effect of physical activity and sports on different aspects of health, such as muscle endurance and the sleep cycle. However, for university students, particularly medical students, who have limited time and a stressful lifestyle, there have been few studies exploring this matter with robust statistical results. This study therefore aims to find out how regular physical activity influences the academic performance of medical students during their internship period. Methods: This was a descriptive-analytical study. Overall, 168 medical students (80 women and 88 men) voluntarily participated in the study. The Baecke Physical Activity Questionnaire was applied to determine the students' physical activity levels. The students' academic performance was determined from their total average academic scores. The data were analyzed in SPSS version 16 using the independent t-test, Pearson correlation, and linear regression. Results: The average age of the students was 26.0±1.5 years. Eighty-eight students (52.4%) were male, and 142 (84.5%) were single. The students' mean total average academic score was 16.2±1.2, and their average physical activity score was 8.3±1.1. The students' average academic score was not associated with gender (p = 0.427), marital status (p = 0.645), or age (p = 0.320). However, married students had a significantly lower physical activity level than single students (p = 0.020). The results indicated a significant positive correlation between the students' physical activity levels and their average academic scores (r = +0.410, p < 0.001); based on the regression analysis, this correlation was independent of the students' age, gender, and marital status. Conclusion: The results of the current study suggest that the physical activity level of medical students was low to moderate in most cases, and that there was a significant direct relationship between students' physical activity level and academic performance, independent of age, gender, and marital status.

Keywords: exercise, education, physical activity, academic performance

Procedia PDF Downloads 54
31253 Methodology for the Multi-Objective Analysis of Data Sets in Freight Delivery

Authors: Dale Dzemydiene, Aurelija Burinskiene, Arunas Miliauskas, Kristina Ciziuniene

Abstract:

Data flows and the purposes for which data are reported differ and depend on business needs. Different parameters are reported and transferred regularly during freight delivery. These business practices form a dataset, constructed for each time point, that contains all the information required for freight-moving decisions. As a significant amount of these data is used for various purposes, an integrated methodological approach must be developed to respond to the indicated problem. The proposed methodology contains several steps: (1) collecting context data sets and validating the data; (2) multi-objective analysis for optimizing freight transfer services. For data validation, the study involves Grubbs outlier analysis, particularly for data cleaning and for identifying the statistical significance of data-reporting event cases. The Grubbs test is often used because it tests one extreme value at a time against the boundaries of the standard normal distribution. In the study area, the test has not been widely applied by authors, except where the Grubbs test for outlier detection was used to identify outliers in fuel consumption data. In this study, the authors applied the method with a confidence level of 99%. For the multi-objective analysis, the authors select forms of genetic algorithm construction that have a better chance of extracting the best solution. For freight delivery management, genetic algorithm structures are used as the more effective technique; accordingly, an adaptable genetic algorithm is applied to describe the process of choosing an effective transportation corridor. In this study, multi-objective genetic algorithm methods are used to optimize the data evaluation and select the appropriate transport corridor. The authors suggest a methodology for the multi-objective analysis which evaluates the collected context data sets and uses this evaluation to determine a delivery corridor for freight transfer services in the multi-modal transportation network. In the multi-objective analysis, the authors include safety components, the number of accidents per year, and freight delivery time in the multi-modal transportation network. The proposed methodology has practical value for the management of multi-modal transportation processes.
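
The data-validation step can be made concrete with the standard two-sided Grubbs test at the 99% confidence level used in the study; the sketch below assumes scipy is available and tests the single most extreme value at a time:

```python
# Sketch: two-sided Grubbs outlier test (alpha = 0.01, i.e., 99% confidence).
import numpy as np
from scipy import stats

def grubbs_outlier(values, alpha=0.01):
    """Return (index, is_outlier) for the most extreme value in the sample."""
    x = np.asarray(values, dtype=float)
    n = x.size
    i = int(np.argmax(np.abs(x - x.mean())))
    g = abs(x[i] - x.mean()) / x.std(ddof=1)          # Grubbs statistic
    t = stats.t.ppf(1.0 - alpha / (2.0 * n), n - 2)   # Student-t critical value
    g_crit = ((n - 1) / np.sqrt(n)) * np.sqrt(t**2 / (n - 2 + t**2))
    return i, bool(g > g_crit)

# Applied iteratively (drop one confirmed outlier, then retest) to clean reported
# freight-delivery parameters before the multi-objective analysis.
```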

Keywords: multi-objective analysis, data flow, freight delivery, methodology

Procedia PDF Downloads 182
31252 Relationship between Effective Classroom Management and Students' Academic Achievement of EFL Students at STKIP YPUP

Authors: Eny Syatriana

Abstract:

The purpose of this study is to identify effective instruction for classroom management, focusing on organizing and managing effective learning environments, identifying the characteristics of effective lesson planning, and identifying resources and materials for positive and effective classroom management. Knowing effective instructional management is one of the characteristics of a well-organized teacher. The study was carried out in three randomly selected classes of STKIP YPUP in South Sulawesi. The design adopted for the study was a descriptive survey approach, and simple descriptive analysis was used. The major instruments used in this study were a student questionnaire and a teacher questionnaire. Data were gathered with these research instruments and analyzed; the research questions were investigated, and two hypotheses were duly tested using t-test statistics. Based on the findings of this research, it was concluded that effective classroom management skills or techniques have a strong and positive influence on student achievement.

Keywords: effective classroom management skills, students' academic achievement, effective learning environments

Procedia PDF Downloads 335