Search results for: time scale

4864 Steepest Descent Method with New Step Sizes

Authors: Bib Paruhum Silalahi, Djihad Wungguli, Sugi Guritman

Abstract:

The steepest descent method is a simple gradient method for optimization. It converges slowly toward the optimal solution because its steps follow a zigzag pattern. Barzilai and Borwein modified the algorithm so that it performs well for problems of large dimension, and their results have sparked a great deal of research on the steepest descent method, including the alternate minimization gradient method and Yuan's method. Inspired by these works, we modify the step size of the steepest descent method. We then compare the modified method against the Barzilai-Borwein method, the alternate minimization gradient method and Yuan's method on quadratic test problems, in terms of the number of iterations and the running time. On average, the steepest descent method with the new step sizes gives good results for small dimensions and is able to compete with the Barzilai-Borwein method and the alternate minimization gradient method for large dimensions. The new step sizes also converge faster than the other methods, especially for problems of large dimension.
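
For readers unfamiliar with the step-size rules being compared, the following minimal Python sketch contrasts classical steepest descent (exact line search on a quadratic) with the Barzilai-Borwein step size. It is an illustration only; the authors' new step sizes are not specified in the abstract, so they are not reproduced here.

import numpy as np

def steepest_descent_quadratic(A, b, x0, tol=1e-8, max_iter=10000, bb=False):
    """Minimize f(x) = 0.5 x'Ax - b'x for symmetric positive definite A.

    bb=False: exact line search step (classical steepest descent).
    bb=True : Barzilai-Borwein step alpha_k = s's / s'y (BB1).
    """
    x = x0.copy()
    g = A @ x - b                          # gradient of the quadratic
    x_prev, g_prev = None, None
    for k in range(max_iter):
        if np.linalg.norm(g) < tol:
            return x, k
        if bb and x_prev is not None:
            s = x - x_prev                 # step difference
            y = g - g_prev                 # gradient difference
            alpha = (s @ s) / (s @ y)      # BB1 step size
        else:
            alpha = (g @ g) / (g @ A @ g)  # exact line search for quadratics
        x_prev, g_prev = x, g
        x = x - alpha * g
        g = A @ x - b
    return x, max_iter

# Small example: a 100-dimensional random SPD quadratic.
rng = np.random.default_rng(0)
M = rng.standard_normal((100, 100))
A = M @ M.T + 100 * np.eye(100)
b = rng.standard_normal(100)
x0 = np.zeros(100)
print("exact line search:", steepest_descent_quadratic(A, b, x0)[1], "iterations")
print("Barzilai-Borwein :", steepest_descent_quadratic(A, b, x0, bb=True)[1], "iterations")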

Keywords: Convergence, iteration, line search, running time, steepest descent, unconstrained optimization.

4863 Light Emission Enhancement of Silicon Nanocrystals by Gold Layer

Authors: R. Karmouch

Abstract:

A thin gold layer was deposited on top of silicon oxide films containing embedded Si nanocrystals (Si-nc). The sample was annealed in a nitrogen-containing gas and subsequently characterized by photoluminescence (PL). We obtained a 3-fold enhancement of photon emission from the Si-nc embedded in silicon dioxide covered with a gold layer, compared with an uncovered sample. We attribute this enhancement to an increase of the spontaneous emission rate caused by the coupling of the Si-nc emitters with surface plasmons (SP). The evolution of PL emission with laser irradiation time was also collected from covered samples and compared with that from uncovered samples. In an uncovered sample, the PL intensity decreases with time, approximately with two decay constants. Although the decrease of the initial PL intensity associated with the rise of sample temperature under CW pumping is still observed in samples covered with a gold layer, this film significantly reduces the permanent deterioration of the PL intensity. The resistance of light-emitting silicon nanocrystals to degradation can thus be increased by SP coupling, which suppresses the permanent deterioration. Controlling the permanent photodeterioration makes a reliable optical gain measurement possible.

Keywords: Photodeterioration, Silicon Nanocrystals, Ion Implantation, Photoluminescence, Surface Plasmons.

4862 Effect of Vibration Amplitude and Welding Force on Weld Strength of Ultrasonic Metal Welding

Authors: Ziad Sh. Al Sarraf

Abstract:

Ultrasonic metal welding has been the subject of ongoing research and development, most recently concentrating on metal joining in miniature devices, for example to allow solder-free wire bonding. As well as at the small scale, there are also opportunities to research the joining of thicker sheet metals and to widen the range of similar and dissimilar materials that can be successfully joined using this technology. This study presents the design, characterisation and test of a lateral-drive ultrasonic metal spot welding device. The ultrasonic metal spot welding horn is modelled using finite element analysis (FEA) and its vibration behaviour is characterised experimentally to ensure ultrasonic energy is delivered effectively to the weld coupon. The welding stack and fixtures are then designed and mounted on a test machine to allow a series of experiments to be conducted for various welding and ultrasonic parameters. Weld strength is subsequently analysed using tensile-shear tests. The results show how the weld strength is particularly sensitive to the combination of clamping force and ultrasonic vibration amplitude of the welding tip, but there are optimal combinations of these and also limits that must be clearly identified.

Keywords: Ultrasonic welding, vibration amplitude, welding force, weld strength.

4861 The Link between Unemployment and Inflation Using Johansen’s Co-Integration Approach and Vector Error Correction Modelling

Authors: Sagaren Pillay

Abstract:

In this paper bi-annual time series data on unemployment rates (from the Labour Force Survey) are expanded to quarterly rates and linked to quarterly unemployment rates (from the Quarterly Labour Force Survey). The resultant linked series and the consumer price index (CPI) series are examined using Johansen’s cointegration approach and vector error correction modeling. The study finds that both the series are integrated of order one and are cointegrated. A statistically significant co-integrating relationship is found to exist between the time series of unemployment rates and the CPI. Given this significant relationship, the study models this relationship using Vector Error Correction Models (VECM), one with a restriction on the deterministic term and the other with no restriction.

A formal statistical confirmation of the existence of a unique linear and lagged relationship between inflation and unemployment for the period between September 2000 and June 2011 is presented. For the given period, the CPI was found to be an unbiased predictor of the unemployment rate. This relationship can be explored further for the development of appropriate forecasting models incorporating other study variables.
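
As a rough illustration of the workflow described, and not the study's actual data or specification, a Johansen cointegration test followed by VECM estimation can be sketched with the statsmodels library; the series below are synthetic placeholders and the lag order is an assumption.

import numpy as np
import pandas as pd
from statsmodels.tsa.vector_ar.vecm import coint_johansen, VECM

# Hypothetical quarterly data: unemployment rate and CPI (replace with real series).
rng = np.random.default_rng(1)
n = 44  # roughly September 2000 to June 2011, quarterly
common = np.cumsum(rng.standard_normal(n))           # shared stochastic trend -> both series I(1)
data = pd.DataFrame({
    "unemployment": 25 + common + 0.3 * rng.standard_normal(n),
    "cpi":          50 + 2 * common + 0.5 * rng.standard_normal(n),
})

# Johansen trace test with a constant (det_order=0) and one lagged difference.
jres = coint_johansen(data, det_order=0, k_ar_diff=1)
print("trace statistics:    ", jres.lr1)
print("95% critical values: ", jres.cvt[:, 1])

# VECM with cointegration rank 1 and the constant restricted to the
# cointegrating relation ('ci'), one of the two specifications mentioned.
vecm = VECM(data, k_ar_diff=1, coint_rank=1, deterministic="ci").fit()
print(vecm.summary())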

Keywords: Forecasting, lagged, linear, relationship.

4860 Fuzzy Analytic Hierarchy Process for Determination of Supply Chain Performance Evaluation Criteria

Authors: Ibrahim Cil, Onur Kurtcu, H. Ibrahim Demir, Furkan Yener, Yusuf S. Turkan, Muharrem Unver, Ramazan Evren

Abstract:

The fuzzy AHP (Analytic Hierarchy Process) method is a decision-making approach obtained by integrating the classical AHP method with fuzzy set theory. In this study, the production planning, inventory management and purchasing processes of a system were analysed, and decision-makers were asked to determine the performance criteria of each area. The current work processes were examined by several decision-makers, and pairwise comparisons of the criteria were scored on a 1-9 scale. The criteria were ranked by their weights using the fuzzy AHP approach, and the top three performance criteria of each department were determined. The decision-makers were then asked to determine the performance criteria of the supply chain comprising the three departments, comparing the processes of each department in order to build the supply chain performance system and obtain its criteria. Based on the results, the criteria determined by fuzzy AHP will be used in the supply chain performance system in the future.
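
The weighting step can be illustrated with a short Python sketch using Buckley's geometric-mean method on a triangular-fuzzy pairwise comparison matrix. The three criteria and the comparison values are hypothetical; this is not the authors' implementation.

import numpy as np

# Triangular fuzzy pairwise comparisons (l, m, u) for three hypothetical criteria,
# e.g. on-time delivery vs. inventory turnover vs. purchasing cost.
# Saaty 1-9 values are fuzzified as (v-1, v, v+1); reciprocal entries are inverted.
comparisons = np.array([
    [[1, 1, 1],        [2, 3, 4],        [4, 5, 6]],
    [[1/4, 1/3, 1/2],  [1, 1, 1],        [1, 2, 3]],
    [[1/6, 1/5, 1/4],  [1/3, 1/2, 1],    [1, 1, 1]],
])  # shape (n, n, 3)

# Buckley's method: fuzzy geometric mean of each row, then fuzzy normalisation.
geo_mean = comparisons.prod(axis=1) ** (1.0 / comparisons.shape[0])   # (n, 3)
total = geo_mean.sum(axis=0)                                          # (sum_l, sum_m, sum_u)
fuzzy_weights = geo_mean / total[::-1]      # divide (l, m, u) by (sum_u, sum_m, sum_l)

# Defuzzify by the centre of the triangle and normalise to crisp weights.
crisp = fuzzy_weights.mean(axis=1)
weights = crisp / crisp.sum()
print("criterion weights:", np.round(weights, 3))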

Keywords: AHP, fuzzy, performance evaluation, supply chain.

4859 Developing Road Performance Measurement System with Evaluation Instrument

Authors: Kati Kõrbe Kaare, Kristjan Kuhi, Ott Koppel

Abstract:

Transportation authorities need to provide the services and facilities that are critical to every country's well-being and development. Management of the road network is becoming increasingly challenging as demands increase and resources are limited. Public sector institutions are integrating performance information into budgeting, managing and reporting by implementing performance measurement systems. In the face of growing challenges, performance measurement of road networks is attracting growing interest in many countries. The large scale of public investment makes the maintenance and development of road networks an area where such systems are an important assessment tool. Transportation agencies have been using performance measurement and modeling as part of pavement and bridge management systems. Recently the focus has been on extending the process to applications in road construction and maintenance systems, operations and safety programs, and administrative structures and procedures. To eliminate failure and dysfunctional consequences, this paper presents the importance of obtaining objective data and of implementing an evaluation instrument where necessary.

Keywords: Key performance indicators, performance measurement system, evaluation, system architecture.

4858 Implementation of the Average Input Current Mode Control of Two-Phase Interleaved Boost Converter Using Low-Cost Microcontroller

Authors: Yin Yin Phyo, Tun Lin Naing

Abstract:

In this paper, average input current mode control is proposed for a two-phase interleaved boost converter with two separate input inductors operating in continuous conduction mode (CCM). The required mathematical model is obtained from the equivalent circuits of its four modes of operation. The small-ripple approximation is applied to derive the transfer functions from the dynamic model using the switching function. In average input current mode control, the inner current loop and the outer voltage loop are designed with PI controllers using Bode analysis, and an anti-windup structure is applied to the PI controllers. The simulation work is carried out in MATLAB/Simulink, and a hardware prototype is implemented with the low-cost Arduino Nano microcontroller. Finally, the laboratory prototype, built from components available on the local market, is used to validate the mathematical model. The results show that the output voltage response achieves fast rise and settling times with acceptable overshoot.
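
As a generic illustration of the control structure described (inner current loop and outer voltage loop with PI controllers and anti-windup), the Python sketch below shows a discrete PI controller with clamping anti-windup. The gains, sampling time, limits and setpoints are hypothetical, not the converter's tuned values.

class PIAntiWindup:
    """Discrete PI controller with output saturation and clamping anti-windup."""

    def __init__(self, kp, ki, ts, out_min, out_max):
        self.kp, self.ki, self.ts = kp, ki, ts
        self.out_min, self.out_max = out_min, out_max
        self.integral = 0.0

    def update(self, setpoint, measurement):
        error = setpoint - measurement
        unsat = self.kp * error + self.integral          # unsaturated PI output
        out = min(max(unsat, self.out_min), self.out_max)
        if out == unsat:                                  # clamping anti-windup:
            self.integral += self.ki * self.ts * error    # integrate only while unsaturated
        return out

# Hypothetical cascade: the outer voltage loop sets the reference of the inner current loop.
voltage_pi = PIAntiWindup(kp=0.05, ki=20.0, ts=1e-4, out_min=0.0, out_max=10.0)   # -> i_ref [A]
current_pi = PIAntiWindup(kp=0.02, ki=50.0, ts=1e-4, out_min=0.0, out_max=0.95)   # -> duty cycle

i_ref = voltage_pi.update(setpoint=48.0, measurement=40.0)   # output-voltage error
duty = current_pi.update(setpoint=i_ref, measurement=2.0)    # averaged input-current error
print(i_ref, duty)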

Keywords: Average input current mode control, interleaved boost converter, low-cost microcontroller, PI controller, switching function.

4857 Implementation of the Outputs of Computer Simulation to Support Decision-Making Processes

Authors: Jiří Barta

Abstract:

At the present time, awareness, education, computer simulation and the protection of information systems are very serious and relevant topics. The article deals with the perspectives and possibilities of incorporating emergency or natural hazard threats into a system developed for communication among members of crisis management staffs. The Czech Hydro-Meteorological Institute, with its System of Integrated Warning Service, represents the largest usable base of information. National information systems are connected to foreign systems, especially to the flood emergency systems of neighboring countries and to the systems of the European Union and of international organizations of which the Czech Republic is a member. Using the outputs of particular information systems and computer simulations on a single communication interface for crisis management staff, and establishing site interoperability in the network, will lead to time savings in decision-making processes when solving extraordinary events and crisis situations. Faster management of an extraordinary event or a crisis situation will bring positive effects and minimize the impact of negative effects on the environment.

Keywords: Computer simulation, communication, continuity, critical infrastructure, information systems, safety.

4856 Vulnerability Analysis for Risk Zones Boundary Definition to Support a Decision Making Process at CBRNE Operations

Authors: Aliaksei Patsekha, Michael Hohenberger, Harald Raupenstrauch

Abstract:

An effective emergency response to accidents with chemical, biological, radiological, nuclear, or explosive materials (CBRNE) that represent highly dynamic situations needs immediate actions within limited time, information and resources. The aim of the study is to provide the foundation for division of unsafe area into risk zones according to the impact of hazardous parameters (heat radiation, thermal dose, overpressure, chemical concentrations). A decision on the boundary values for three risk zones is based on the vulnerability analysis that covered a variety of accident scenarios containing the release of a toxic or flammable substance which either evaporates, ignites and/or explodes. Critical values are selected for the boundary definition of the Red, Orange and Yellow risk zones upon the examination of harmful effects that are likely to cause injuries of varying severity to people and different levels of damage to structures. The obtained results provide the basis for creating a comprehensive real-time risk map for a decision support at CBRNE operations.

Keywords: Boundary values, CBRNE threats, decision making process, hazardous effects, vulnerability analysis, risk zones.

4855 Simulating the Dynamics of Distribution of Hazardous Substances Emitted by Motor Engines in a Residential Quarter

Authors: S. Grishin

Abstract:

This article is dedicated to the development of mathematical models for determining the dynamics of the concentration of hazardous substances in a turbulent urban atmosphere. The development of the models takes into account the time-space variability of the meteorological fields and such properties of the turbulent atmosphere as its vortical, nonlinear, dissipative and diffusive nature. The turbulent airflow velocity is not assumed to be known when developing the model. A simplified model, however, assumes that the ratio of turbulent to molecular diffusion is a piecewise-constant function that changes with vertical distance from the earth's surface. Thereby an important assumption of vertical stratification of urban air, due to the atmospheric accumulation of hazardous substances emitted by motor vehicles, is introduced into the mathematical model. The suggested simplified non-linear mathematical model for determining the sought exhaust concentration at an a priori unknown turbulent flow velocity is reduced, through a non-degenerate transformation, to a model which is subsequently solved analytically.
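
For context, a generic advection-diffusion equation of the kind such models build on (a textbook form, not the authors' exact formulation) is

\[
\frac{\partial c}{\partial t} + \mathbf{u}\cdot\nabla c
  = \nabla \cdot \bigl( K(z)\,\nabla c \bigr) + Q(\mathbf{x},t),
\]

where c is the pollutant concentration, u the (unknown) turbulent airflow velocity, K(z) the combined turbulent and molecular diffusivity taken as piecewise constant in the height z above the surface, and Q the source term from motor-vehicle emissions.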

Keywords: Urban ecology, time-dependent mathematical model, exhaust concentration, turbulent and molecular diffusion, airflow velocity.

4854 Interdisciplinary Principles of Field-Like Coordination in the Case of Self-Organized Social Systems

Authors: D. Plikynas, S. Masteika, A. Budrionis

Abstract:

This interdisciplinary research aims to distinguish universal scale-free and field-like fundamental principles of self-organization observable across many disciplines, such as computer science, neuroscience, microbiology and social science. Based on these universal principles we provide basic premises and postulates for designing holistic social simulation models. We also introduce the pervasive information field (PIF) concept, which serves as a simulation medium for contextual information storage, dynamic distribution and organization in complex social networks. The PIF concept specifically targets field-like, uncoupled and indirect interactions among social agents capable of affecting and perceiving broadcast contextual information. The proposed approach is expressive enough to represent broadcast contextual information in a form that is locally accessible and immediately usable by network agents. This paper gives a prospective vision of how a system's resources (tangible and intangible) could be simulated as oscillating processes immersed in the all-pervasive information field.

Keywords: Field-based coordination, multi-agent systems, information-rich social networks, pervasive information field.

4853 Wireless Sensor Network to Help Low Incomes Farmers to Face Drought Impacts

Authors: Fantazi Walid, Ezzedine Tahar, Bargaoui Zoubeida

Abstract:

This research presents the main ideas for implementing an intelligent system composed of communicating wireless sensors measuring environmental data linked to drought indicators (such as air temperature, soil moisture, etc.). In addition, a spatio-temporal database communicating with a web mapping application is proposed for real-time monitoring, 24 hours a day and 7 days a week, to allow the time evolution of the drought parameters to be screened and extracted. The system thus helps detect areas affected by drought. Spatio-temporal conceptual models seek to answer users who need to manage soil water content for irrigation, fertilization or other activities pursuing crop yield augmentation. Indeed, spatio-temporal conceptual models enable users to obtain a readable diagram of data that is easy to apprehend. Combined with socio-economic information, the system helps identify the people impacted by the phenomenon and the corresponding severity, especially since this information is accessible to farmers and stakeholders themselves. The study will be applied to the Siliana watershed in northern Tunisia.

Keywords: WSN, spatio-temporal database, GIS, web mapping, drought indicator.

4852 Computer Aided Diagnostic System for Detection and Classification of a Brain Tumor through MRI Using Level Set Based Segmentation Technique and ANN Classifier

Authors: Atanu K Samanta, Asim Ali Khan

Abstract:

Due to the acquisition of huge amounts of brain tumor magnetic resonance images (MRI) in clinics, it is very difficult for radiologists to manually interpret and segment these images within a reasonable span of time. Computer-aided diagnosis (CAD) systems can enhance the diagnostic capabilities of radiologists and reduce the time required for accurate diagnosis. An intelligent computer-aided technique for automatic detection of a brain tumor through MRI is presented in this paper. The technique uses the following computational methods: the level set method for segmentation of the brain tumor from other brain parts, extraction of features from this segmented tumor portion using the gray-level co-occurrence matrix (GLCM), and an artificial neural network (ANN) to classify brain tumor images according to their respective types. The entire work is carried out on 50 images covering five types of brain tumor. The overall classification accuracy using this method is found to be 98%, which is significantly good.
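
The pipeline described (segmentation, GLCM texture features, ANN classification) can be sketched roughly with scikit-image and scikit-learn, as below. This is a simplified illustration rather than the authors' code: the data are synthetic placeholders and a generic multilayer perceptron stands in for their specific network.

import numpy as np
from skimage.feature import graycomatrix, graycoprops   # spelled greycomatrix/greycoprops in older scikit-image
from sklearn.neural_network import MLPClassifier

def glcm_features(region, levels=32):
    """GLCM texture features (contrast, correlation, energy, homogeneity)
    for a segmented tumour region given as a 2-D 8-bit array."""
    img = (region / 256.0 * levels).astype(np.uint8)
    glcm = graycomatrix(img, distances=[1], angles=[0, np.pi / 2],
                        levels=levels, symmetric=True, normed=True)
    return np.hstack([graycoprops(glcm, p).ravel()
                      for p in ("contrast", "correlation", "energy", "homogeneity")])

# Hypothetical data: 50 segmented tumour regions and their labels (5 tumour types).
rng = np.random.default_rng(0)
regions = [rng.integers(0, 256, size=(64, 64)) for _ in range(50)]
labels = rng.integers(0, 5, size=50)

X = np.array([glcm_features(r) for r in regions])
clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
clf.fit(X[:40], labels[:40])
print("accuracy on held-out images:", clf.score(X[40:], labels[40:]))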

Keywords: Artificial neural network, ANN, brain tumor, computer-aided diagnostic, CAD system, gray-level co-occurrence matrix, GLCM, level set method, tumor segmentation.

4851 Innovation in Lean Thinking to Achieve Rapid Construction

Authors: Muhamad Azani Yahya, Vikneswaran Munikanan, Mohammed Alias Yusof

Abstract:

Lean thinking holds potential for improving the construction sector, and therefore it is a concept that should be adopted by construction sector players and academicians in the industry. Building on that, a learning process regarding this matter should be on the agenda for construction sector players, so that they gain the knowledge in preparation for their careers. Lean principles offer opportunities for reducing lead times, eliminating non-value-adding activities and reducing variability, and are facilitated by methods such as pull scheduling, simplified operations and buffer reduction. Rapid construction, in turn, is a systematic approach to enhancing efficiency so as to deliver a project in a reduced time, while lean is the continuous process of eliminating waste, meeting or exceeding all customer requirements, focusing on the entire value stream and pursuing perfection in the execution of a constructed project. The methodology presented is shown to be valid through literature, interviews and a questionnaire. The results show that the majority of construction sector players are unfamiliar with lean thinking, yet they agreed that it can improve the construction process flow. With this background knowledge established and identified, best practices and recommended actions are drawn.

Keywords: Construction improvement, rapid construction, time reduction, lean construction.

4850 Time-Dependent Behavior of Damaged Reinforced Concrete Shear Walls Strengthened with Composite Plates Having Variable Fibers Spacing

Authors: R. Yeghnem, L. Boulefrakh, S. A. Meftah, A. Tounsi, E. A. Adda Bedia

Abstract:

In this study, the time-dependent behavior of damaged reinforced concrete shear wall structures strengthened with composite plates having variable fiber spacing was investigated to analyze their seismic response. In the analytical formulation, the adherends and the adhesive layers are all modeled as shear walls, using the mixed finite element method (FEM). An anisotropic damage model is adopted to describe the damage extent of the reinforced concrete shear walls. The creep and shrinkage of the concrete are determined according to Eurocode 2. Records of large earthquakes in Algeria (El Asnam and Boumerdes) are used to demonstrate the accuracy of the proposed method. Numerical results are obtained for non-uniform distributions of carbon fibers in epoxy matrices. The effects of the damage extent and of the delayed mechanisms of creep and shrinkage of the concrete are highlighted. Further prospects are being studied.

Keywords: RC shear wall structures, composite plates, creep and shrinkage, damaged reinforced concrete structures, finite element method.

4849 Capacity Optimization for Local and Cooperative Spectrum Sensing in Cognitive Radio Networks

Authors: Ayman A. El-Saleh, Mahamod Ismail, Mohd. A. M. Ali, Ahmed N. H. Alnuaimy

Abstract:

Dynamic spectrum allocation solutions such as cognitive radio networks have been proposed as a key technology to exploit frequency segments that are spectrally underutilized. Cognitive radio users work as secondary users who need to constantly and rapidly sense the presence of primary users, or licensees, so that they can utilize the licensed frequency bands when these are inactive. Short sensing cycles should be run by the secondary users to achieve higher throughput rates, as well as to keep interference to the primary users low by immediately vacating their channels once they have been detected. In this paper, the throughput-sensing time relationship in local and cooperative spectrum sensing is investigated under two distinct scenarios, namely constant primary user protection (CPUP) and constant secondary user spectrum usability (CSUSU). The simulation results show that the design of the sensing slot duration is very critical and depends on the number of cooperating users under the CPUP scenario, whereas under CSUSU, cooperating more users has no effect if the sensing time used exceeds 5% of the total frame duration.
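
As a point of reference, the sensing-throughput tradeoff for a frame of duration T containing a sensing slot of duration τ is often written as (a standard formulation from the cognitive radio literature, not necessarily the exact one used here)

\[
R(\tau) = \frac{T - \tau}{T}\, C_0 \,\bigl(1 - P_f(\tau)\bigr)\, P(H_0),
\]

where C_0 is the secondary user's channel capacity, P_f the detector's false-alarm probability and P(H_0) the probability that the primary user is inactive; a longer sensing slot lowers P_f but shrinks the transmission fraction (T - τ)/T, which is the tension studied under the CPUP and CSUSU constraints.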

Keywords: Capacity, cognitive radio, optimization, spectrum sensing.

4848 A TFETI Domain Decomposition Solver for Von Mises Elastoplasticity Model with Combination of Linear Isotropic-Kinematic Hardening

Authors: Martin Cermak, Stanislav Sysala

Abstract:

In this paper we present an efficient parallel implementation for elastoplastic problems based on the TFETI (Total Finite Element Tearing and Interconnecting) domain decomposition method. This approach allows us to solve this nonlinear problem in parallel on supercomputers, decreasing the solution time and enabling problems with millions of DOFs to be computed. In our approach we consider an associated elastoplastic model with the von Mises plastic criterion and a combination of linear isotropic and kinematic hardening laws. This model is discretized by the implicit Euler method in time and by the finite element method in space. We obtain a system of nonlinear equations with a strongly semismooth and strongly monotone operator. The semismooth Newton method is applied to solve this nonlinear system, and the corresponding linearized problems arising in the Newton iterations are solved in parallel by the above-mentioned TFETI method. The implementation is realized in our in-house MatSol packages developed in MATLAB.

Keywords: Isotropic-kinematic hardening, TFETI, domain decomposition, parallel solution.

4847 Comparing Test Equating by Item Response Theory and Raw Score Methods with Small Sample Sizes on a Study of the ARTé: Mecenas Learning Game

Authors: Steven W. Carruthers

Abstract:

The purpose of the present research is to equate two test forms as part of a study to evaluate the educational effectiveness of the ARTé: Mecenas art history learning game. The researcher applied Item Response Theory (IRT) procedures to calculate item, test, and mean-sigma equating parameters. With the sample size n=134, test parameters indicated “good” model fit but low Test Information Functions and more acute than expected equating parameters. Therefore, the researcher applied equipercentile equating and linear equating to raw scores and compared the equated form parameters and effect sizes from each method. Item scaling in IRT enables the researcher to select a subset of well-discriminating items. The mean-sigma step produces a mean-slope adjustment from the anchor items, which was used to scale the score on the new form (Form R) to the reference form (Form Q) scale. In equipercentile equating, scores are adjusted to align the proportion of scores in each quintile segment. Linear equating produces a mean-slope adjustment, which was applied to all core items on the new form. The study followed a quasi-experimental design with purposeful sampling of students enrolled in a college level art history course (n=134) and counterbalancing design to distribute both forms on the pre- and posttests. The Experimental Group (n=82) was asked to play ARTé: Mecenas online and complete Level 4 of the game within a two-week period; 37 participants completed Level 4. Over the same period, the Control Group (n=52) did not play the game. The researcher examined between group differences from post-test scores on test Form Q and Form R by full-factorial Two-Way ANOVA. The raw score analysis indicated a 1.29% direct effect of form, which was statistically non-significant but may be practically significant. The researcher repeated the between group differences analysis with all three equating methods. For the IRT mean-sigma adjusted scores, form had a direct effect of 8.39%. Mean-sigma equating with a small sample may have resulted in inaccurate equating parameters. Equipercentile equating aligned test means and standard deviations, but resultant skewness and kurtosis worsened compared to raw score parameters. Form had a 3.18% direct effect. Linear equating produced the lowest Form effect, approaching 0%. Using linearly equated scores, the researcher conducted an ANCOVA to examine the effect size in terms of prior knowledge. The between group effect size for the Control Group versus Experimental Group participants who completed the game was 14.39% with a 4.77% effect size attributed to pre-test score. Playing and completing the game increased art history knowledge, and individuals with low prior knowledge tended to gain more from pre- to post test. Ultimately, researchers should approach test equating based on their theoretical stance on Classical Test Theory and IRT and the respective  assumptions. Regardless of the approach or method, test equating requires a representative sample of sufficient size. With small sample sizes, the application of a range of equating approaches can expose item and test features for review, inform interpretation, and identify paths for improving instruments for future study.
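
For readers unfamiliar with the equating step, the sketch below illustrates linear equating of observed scores by a mean and slope adjustment, of the kind used to place Form R scores on the Form Q scale. The score vectors are hypothetical and this is not the study's data or code.

import numpy as np

def linear_equate(x, scores_new, scores_ref):
    """Linear equating: map score x from the new form onto the reference-form
    scale using a mean and slope (standard deviation ratio) adjustment."""
    mu_n, sd_n = np.mean(scores_new), np.std(scores_new, ddof=1)
    mu_r, sd_r = np.mean(scores_ref), np.std(scores_ref, ddof=1)
    slope = sd_r / sd_n
    intercept = mu_r - slope * mu_n
    return slope * np.asarray(x) + intercept

# Hypothetical raw scores on Form R (new) and Form Q (reference).
form_r = np.array([18, 22, 25, 27, 30, 33, 35, 38, 41, 45])
form_q = np.array([20, 23, 27, 29, 31, 34, 37, 40, 42, 47])

print(linear_equate([25, 35], form_r, form_q))  # Form R scores expressed on the Form Q scale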

Keywords: Effectiveness, equipercentile equating, IRT, learning games, linear equating, mean-sigma equating.

4846 Compressible Lattice Boltzmann Method for Turbulent Jet Flow Simulations

Authors: K. Noah, F.-S. Lien

Abstract:

In Computational Fluid Dynamics (CFD) there is a variety of numerical methods, some of which are based on macroscopic model representations; these macroscopic models can be solved by finite-volume, finite-element or finite-difference methods. The lattice Boltzmann method (LBM), by contrast, is considered to be a mesoscopic particle method, with its scale lying between the macroscopic and microscopic scales. The LBM works well for solving incompressible flow problems, but certain limitations arise when solving compressible flows, particularly at high Mach numbers. An improved lattice Boltzmann model for compressible flow problems is presented in this research study. A higher-order Taylor series expansion of the Maxwell equilibrium distribution function is used to overcome the limitations of LBM when solving high-Mach-number flows. Large eddy simulation (LES) is implemented in LBM to simulate turbulent jet flows. The results have been validated with available experimental data for turbulent compressible free jet flow at subsonic speeds.
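
For orientation, the standard second-order BGK equilibrium distribution that the compressible model extends with higher-order Taylor terms is, in the usual lattice notation (a textbook form, not the specific expansion derived in the paper),

\[
f_i^{\mathrm{eq}} = w_i \,\rho \left[ 1 + \frac{\mathbf{e}_i\cdot\mathbf{u}}{c_s^2}
  + \frac{(\mathbf{e}_i\cdot\mathbf{u})^2}{2c_s^4}
  - \frac{\mathbf{u}\cdot\mathbf{u}}{2c_s^2} \right],
\]

where w_i are the lattice weights, e_i the discrete velocities and c_s the lattice sound speed. Truncating the Maxwell-Boltzmann expansion at this order restricts the method to low Mach numbers, which is the limitation the higher-order expansion is intended to relax.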

Keywords: Compressible lattice Boltzmann method, large eddy simulation, turbulent jet flows.

4845 The Measurement of Latvian and Russian Ethnic Attitudes, Using Evaluative Priming Task and Self-Report Methods

Authors: Maria Bambulyaka, Irina Plotka, Nina Blumenau, Dmitry Igonin, Elena Ozola, Laura Shimane

Abstract:

The purposes of the research are to estimate implicit ethnic attitudes by direct and indirect methods, to determine the agreement between the two types of measurement, to investigate the influence of the task type used in the experiment on the measurement results, and to determine the presence of a relationship between recent episodic events and the chronological correlates of ethnic attitudes. The method of implicit measurement is an evaluative priming task (EPT) carried out with different SOA intervals; the explicit methods are G. Soldatova's types of ethnic identity, G. Soldatova's index of tolerance, and E. Bogardus's scale of social distance. The results obtained over five stages of research reveal several aspects of implicit measurement: its correlation with the results of self-reports at different SOA intervals, the connection of implicit measures with the emotional valence of the participants' episodic events, and other indices, thereby contributing to the problem of applying implicit measurement to the study of different social constructs.

Keywords: Ethnic attitudes, explicit method, implicit method, priming.

4844 WhatsApp as Part of a Blended Learning Model to Help Programming Novices

Authors: Tlou J. Ramabu

Abstract:

Programming is one of the challenging subjects in the field of computing. In the higher education sphere, the performance, retention rate and success rate of some programming novices are not improving. Most of the time, the problem is caused by a slow pace of learning, difficulty in grasping the syntax of the programming language and poor logical skills. More importantly, programming forms part of the major subjects within the field of computing. As a result, specialized pedagogical methods and innovation are highly recommended. Little research has been done on the potential productivity of the WhatsApp platform as part of a blended learning model. In this article, the authors discuss a WhatsApp group as part of a blended learning model for a group of programming novices, and consider possible administrative activities for productive utilisation of the WhatsApp group within that model. The aim is to take advantage of the popularity of WhatsApp, and of the time students spend on it, for their educational benefit. We believe that blended learning featuring a WhatsApp group may ease novices' cognitive load and strengthen their foundational programming knowledge and skills. This is work in progress, as the proposed blended learning model with WhatsApp incorporated is yet to be implemented.

Keywords: Blended learning, higher education, WhatsApp, programming, novices, lecturers.

4843 Failure Analysis of Pipe System at a Hydroelectric Power Plant

Authors: Ali Göksenli, Barlas Eryürek

Abstract:

In this study, the failure of the pipe system at a micro hydroelectric power plant is investigated. The failure occurred in the powerhouse pipe system during shut-down of the water flow by a valve. This closure caused a sudden shock wave, also called the "water-hammer effect", resulting in noise and an increase in internal pressure. After visual investigation of the effect of the shock wave on the system, a circumferential crack was observed at the pipe flange weld region. To establish the reason for crack formation, pressure and stress values at the pipe, flange and welding seams were calculated; the safety factor was high (2.2), indicating that no design fault existed. Further analysis of the pipe system and the hydroelectric power plant showed that the plant did not include a ventilation nozzle (air trap), which would protect the system against the sudden internal pressure increase caused by the water-hammer effect. Analyses were carried out to identify the influence of the water-hammer effect on the internal pressure increase, and it was concluded that, according to Joukowsky's equation, the shut-down time affects the pressure rise. The valve closing time was uncertain, but even for a shut-down time of one minute the internal pressure would increase by 7.6 bar (the working pressure was 34.6 bar). Detailed investigations were also carried out on the assembly of the pipe-flange system by examining the technical drawings. It was concluded that the pipe-flange system was not installed according to the instructions: two of the five weld seams were not applied, and one weld was executed incorrectly. These incorrect and inadequate weld seams resulted in an insufficient connection of the pipe to the flange, creating a strong notch effect at the weld seam regions, an increase in stress values, and a decrease in strength and safety factor.
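
For reference, the Joukowsky relation referred to gives the maximum surge pressure for a rapid change in flow velocity,

\[
\Delta p = \rho \, a \, \Delta v ,
\]

where ρ is the water density, a the pressure-wave propagation speed in the pipe and Δv the change in flow velocity. For closure times longer than the wave travel period 2L/a (with L the pipe length), the surge is smaller and depends on the closure time, which is consistent with the dependence of the estimated 7.6 bar rise on the assumed one-minute shut-down time.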

Keywords: Failure analysis, hydroelectric plant, water-hammer, crack, welding seam.

4842 Operation Parameters of Vacuum Cleaned Filters

Authors: Wilhelm Hoeflinger, Thomas Laminger, Johannes Wolfslehner

Abstract:

For vacuum cleaned dust filters there exist no calculation methods to determine design parameters (e.g. traverse velocity of the nozzle, filter area…). In this work a method to calculate the optimum traverse velocity of the nozzle of an industrial-size flat dust filter at a given mean pressure drop and filter face velocity was elaborated. Well-known equations for the design of a cleanable multi-chamber bag-house-filter were modified in order to take into account a continuously regeneration of a dust filter by a nozzle. Thereby, the specific filter medium resistance and the specific cake resistance values are needed which can be derived from filter tests under constant operation conditions.

A lab-scale filter test rig was used to derive the specific filter media resistance value and the specific cake resistance value for vacuum cleaned filter operation. Three different filter media were tested and the determined parameters were compared to each other.
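
The well-known equations being adapted are of the classical two-resistance filtration form (shown here in generic textbook notation rather than the authors' modified version),

\[
\Delta p = \mu \, v \, \bigl( R_m + \alpha \, W \bigr),
\]

where Δp is the pressure drop, μ the gas viscosity, v the filter face velocity, R_m the specific filter-medium resistance, α the specific cake resistance and W the areal dust load on the medium. The lab-scale tests described supply R_m and α, and the continuous regeneration by the traversing nozzle enters through the evolution of W over the filter surface.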

Keywords: Design of dust filter, Dust removing, Filter regeneration, Operation parameters.

4841 Data Integrity: Challenges in Health Information Systems in South Africa

Authors: T. Thulare, M. Herselman, A. Botha

Abstract:

Poor system use, including inappropriate design of health information systems, causes difficulties in communication with patients and increased time spent by healthcare professionals in recording the necessary health information for medical records. System features like pop-up reminders, complex menus, and poor user interfaces can make medical records far more time consuming than paper cards as well as affect decision-making processes. Although errors associated with health information and their real and likely effect on the quality of care and patient safety have been documented for many years, more research is needed to measure the occurrence of these errors and determine the causes to implement solutions. Therefore, the purpose of this paper is to identify data integrity challenges in hospital information systems through a scoping review and based on the results provide recommendations on how to manage these. Only 34 papers were found to be most suitable out of 297 publications initially identified in the field. The results indicated that human and computerized systems are the most common challenges associated with data integrity and factors such as policy, environment, health workforce, and lack of awareness attribute to these challenges but if measures are taken the data integrity challenges can be managed.

Keywords: Data integrity, data integrity challenges, hospital information systems, South Africa.

4840 Encryption Efficiency Analysis and Security Evaluation of RC6 Block Cipher for Digital Images

Authors: Hossam El-din H. Ahmed, Hamdy M. Kalash, Osama S. Farag Allah

Abstract:

This paper investigates the encryption efficiency of the RC6 block cipher applied to digital images, providing a new mathematical measure of encryption efficiency, which we call the encryption quality, as an alternative to visual inspection. The encryption quality of the RC6 block cipher is investigated with respect to several of its design parameters, such as word size, number of rounds and secret key length, and the optimal choices for these design parameters are given. The security of the RC6 block cipher for digital images is also analysed from a strict cryptographic viewpoint: its resistance to brute-force, statistical and differential attacks is estimated, and experiments are made to test its security against all the aforementioned types of attack. The experiments and results verify that the RC6 block cipher is highly secure for real-time image encryption from a cryptographic viewpoint. Thorough experimental tests are carried out with detailed analysis, demonstrating the high security of the RC6 block cipher algorithm. RC6 can therefore be considered a real-time secure symmetric encryption method for digital images.
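
One simple histogram-based way to compute an encryption-quality figure of the kind described is the average absolute deviation between the grey-level histograms of the plain and encrypted images. The sketch below is an illustration under our own assumptions, not necessarily the exact measure defined in the paper, and the images are synthetic placeholders.

import numpy as np

def encryption_quality(plain, cipher, levels=256):
    """Average absolute difference between the grey-level histograms of the
    plain image and the encrypted image; larger values indicate that the
    cipher disperses the pixel values more strongly."""
    h_p, _ = np.histogram(plain, bins=levels, range=(0, levels))
    h_c, _ = np.histogram(cipher, bins=levels, range=(0, levels))
    return np.abs(h_c - h_p).sum() / levels

# Hypothetical 256x256 8-bit images standing in for a plain image and its RC6 ciphertext.
rng = np.random.default_rng(0)
plain = np.clip(rng.normal(120, 30, size=(256, 256)), 0, 255).astype(np.uint8)
cipher = rng.integers(0, 256, size=(256, 256), dtype=np.uint8)  # a good cipher is near-uniform
print("encryption quality:", encryption_quality(plain, cipher))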

Keywords: Block cipher, Image encryption, Encryption quality, and Security analysis.

4839 A Partially Accelerated Life Test Planning with Competing Risks and Linear Degradation Path under Tampered Failure Rate Model

Authors: Fariba Azizi, Firoozeh Haghighi, Viliam Makis

Abstract:

In this paper, we propose a method to model the relationship between failure time and degradation for a simple step stress test where underlying degradation path is linear and different causes of failure are possible. It is assumed that the intensity function depends only on the degradation value. No assumptions are made about the distribution of the failure times. A simple step-stress test is used to shorten failure time of products and a tampered failure rate (TFR) model is proposed to describe the effect of the changing stress on the intensities. We assume that some of the products that fail during the test have a cause of failure that is only known to belong to a certain subset of all possible failures. This case is known as masking. In the presence of masking, the maximum likelihood estimates (MLEs) of the model parameters are obtained through an expectation-maximization (EM) algorithm by treating the causes of failure as missing values. The effect of incomplete information on the estimation of parameters is studied through a Monte-Carlo simulation. Finally, a real example is analyzed to illustrate the application of the proposed methods.
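
For orientation, the tampered failure rate idea for a simple step-stress test with stress-change time τ is usually written in the generic form

\[
\lambda(t) =
\begin{cases}
\lambda_1(t), & 0 \le t < \tau, \\
\beta\,\lambda_1(t), & t \ge \tau,
\end{cases}
\qquad \beta > 1,
\]

so that raising the stress at time τ multiplies the baseline intensity λ_1 by the tamper factor β; in the present paper the intensity additionally depends on the linear degradation level reached at time t.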

Keywords: Expectation-maximization (EM) algorithm, cause of failure, intensity, linear degradation path, masked data, reliability function.

4838 Joint Training Offer Selection and Course Timetabling Problems: Models and Algorithms

Authors: Gianpaolo Ghiani, Emanuela Guerriero, Emanuele Manni, Alessandro Romano

Abstract:

In this article, we deal with a variant of the classical course timetabling problem that has a practical application in many areas of education. In particular, in this paper we are interested in high schools remedial courses. The purpose of such courses is to provide under-prepared students with the skills necessary to succeed in their studies. In particular, a student might be under prepared in an entire course, or only in a part of it. The limited availability of funds, as well as the limited amount of time and teachers at disposal, often requires schools to choose which courses and/or which teaching units to activate. Thus, schools need to model the training offer and the related timetabling, with the goal of ensuring the highest possible teaching quality, by meeting the above-mentioned financial, time and resources constraints. Moreover, there are some prerequisites between the teaching units that must be satisfied. We first present a Mixed-Integer Programming (MIP) model to solve this problem to optimality. However, the presence of many peculiar constraints contributes inevitably in increasing the complexity of the mathematical model. Thus, solving it through a general-purpose solver may be performed for small instances only, while solving real-life-sized instances of such model requires specific techniques or heuristic approaches. For this purpose, we also propose a heuristic approach, in which we make use of a fast constructive procedure to obtain a feasible solution. To assess our exact and heuristic approaches we perform extensive computational results on both real-life instances (obtained from a high school in Lecce, Italy) and randomly generated instances. Our tests show that the MIP model is never solved to optimality, with an average optimality gap of 57%. On the other hand, the heuristic algorithm is much faster (in about the 50% of the considered instances it converges in approximately half of the time limit) and in many cases allows achieving an improvement on the objective function value obtained by the MIP model. Such an improvement ranges between 18% and 66%.
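
A toy version of the selection-plus-assignment structure of such a MIP, far simpler than the full timetabling model and with made-up sets, scores and budget, can be written with the PuLP library as follows.

from pulp import LpProblem, LpMaximize, LpVariable, lpSum, LpBinary, PULP_CBC_CMD

units = ["algebra_1", "algebra_2", "geometry"]              # teaching units (hypothetical)
slots = ["mon_14", "tue_14", "wed_14"]                      # available time slots
quality = {"algebra_1": 8, "algebra_2": 5, "geometry": 6}   # teaching-quality scores
cost = {"algebra_1": 3, "algebra_2": 2, "geometry": 4}      # funding required per unit
budget = 7
prereq = [("algebra_1", "algebra_2")]                       # unit a must be activated if b is

m = LpProblem("remedial_course_selection", LpMaximize)
activate = {u: LpVariable(f"activate_{u}", cat=LpBinary) for u in units}
assign = {(u, s): LpVariable(f"assign_{u}_{s}", cat=LpBinary) for u in units for s in slots}

m += lpSum(quality[u] * activate[u] for u in units)              # objective: maximise teaching quality
m += lpSum(cost[u] * activate[u] for u in units) <= budget       # budget constraint
for u in units:
    m += lpSum(assign[u, s] for s in slots) == activate[u]       # each activated unit gets one slot
for s in slots:
    m += lpSum(assign[u, s] for u in units) <= 1                 # at most one unit per slot
for a, b in prereq:
    m += activate[b] <= activate[a]                              # prerequisite between units

m.solve(PULP_CBC_CMD(msg=False))
print({u: int(activate[u].value()) for u in units})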

Keywords: Heuristic, MIP model, Remedial course, School, Timetabling.

4837 Performance of Neural Networks vs. Radial Basis Functions When Forming a Metamodel for Residential Buildings

Authors: Philip Symonds, Jon Taylor, Zaid Chalabi, Michael Davies

Abstract:

Average temperatures worldwide are expected to continue to rise. At the same time, major cities in developing countries are becoming increasingly populated and polluted. Governments are tasked with the problem of overheating and air quality in residential buildings. This paper presents the development of a model, which is able to estimate the occupant exposure to extreme temperatures and high air pollution within domestic buildings. Building physics simulations were performed using the EnergyPlus building physics software. An accurate metamodel is then formed by randomly sampling building input parameters and training on the outputs of EnergyPlus simulations. Metamodels are used to vastly reduce the amount of computation time required when performing optimisation and sensitivity analyses. Neural Networks (NNs) have been compared to a Radial Basis Function (RBF) algorithm when forming a metamodel. These techniques were implemented using the PyBrain and scikit-learn python libraries, respectively. NNs are shown to perform around 15% better than RBFs when estimating overheating and air pollution metrics modelled by EnergyPlus.
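
A skeletal version of the comparison described might look as follows, using present-day scikit-learn in place of the original PyBrain/scikit-learn setup: synthetic data stand in for the sampled EnergyPlus inputs and outputs, and kernel ridge regression with an RBF kernel stands in for the RBF metamodel.

import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.neural_network import MLPRegressor
from sklearn.kernel_ridge import KernelRidge
from sklearn.metrics import mean_absolute_error

# Synthetic stand-in for sampled building parameters (X) and an overheating metric (y).
rng = np.random.default_rng(0)
X = rng.uniform(size=(500, 6))              # e.g. glazing ratio, insulation, orientation, ...
y = 20 + 5 * X[:, 0] - 3 * X[:, 1] ** 2 + np.sin(4 * X[:, 2]) + rng.normal(0, 0.2, 500)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

nn = make_pipeline(StandardScaler(),
                   MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=5000, random_state=0))
rbf = make_pipeline(StandardScaler(), KernelRidge(kernel="rbf", alpha=1e-3, gamma=1.0))

for name, model in [("neural network", nn), ("RBF kernel ridge", rbf)]:
    model.fit(X_train, y_train)
    err = mean_absolute_error(y_test, model.predict(X_test))
    print(f"{name}: mean absolute error = {err:.3f}")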

Keywords: Neural Networks, Radial Basis Functions, Metamodelling, Python machine learning libraries.

4836 Oil Recovery Study by Low Temperature Carbon Dioxide Injection in High-Pressure High-Temperature Micromodels

Authors: Zakaria Hamdi, Mariyamni Awang

Abstract:

For the past decades, CO2 flooding has been used as a successful method for enhanced oil recovery (EOR). However, the high mobility ratio and the fingering effect are considered important drawbacks of this process. Low temperature injection of CO2 into high temperature reservoirs may improve the oil recovery, but simulating multiphase flow in a non-isothermal medium is difficult and commercial simulators are very unstable under these conditions. Furthermore, to the best of the authors' knowledge, no experimental work has been done to verify the results of the simulations and to understand the pore-scale process. In this paper, we present results of investigations on the injection of low temperature CO2 into a high-pressure high-temperature micromodel, with injection temperatures ranging from 34 to 75 °F. The effect of temperature and the saturation changes of the different fluids are measured in each case. The results support the proposed method: the injection of CO2 at low temperatures increased the oil recovery in high temperature reservoirs significantly. Also, the CO2-rich phases present in the high temperature system can improve the oil recovery through a better sweep of the oil, which is initially caused by the penetration of liquid CO2 (LCO2) into the system. Furthermore, no unfavorable effect was detected using this method. Low temperature CO2 injection is proposed for use as early as the secondary recovery stage.

Keywords: Enhanced oil recovery, CO2 flooding, micromodel studies, miscible flooding.

4835 Biomass and Pigment Production by Monascus during Miniaturized Submerged Culture on Adlay

Authors: Supavej Maniyom, Gerard H. Markx

Abstract:

Three reactor types were explored and successfully used for pigment production by Monascus: shake flasks, and shaken and stirred miniaturized reactors. The use of dielectric spectroscopy for the on-line measurement of biomass levels was also explored. Shake flasks gave good pigment yields, but scale-up is difficult and they cannot be automated. Shaken bioreactors were less successful for pigment production than stirred reactors; experiments with different impeller speeds and different liquid volumes in the reactor confirmed that this is most likely due to oxygen availability. The availability of oxygen appeared to affect biomass levels less than pigment production; red pigment production in particular needed very high oxygen levels. Dielectric spectroscopy was used effectively to continuously measure biomass levels during the submerged fungal fermentation in the shaken and stirred miniaturized bioreactors, despite the presence of the solid substrate particles. The capacitance signal also gave useful information about the viability of the cells in the culture.

Keywords: Chinese pearl barley, miniature submerged culture, Monascus pigment, biomass, capacitance.
