Search results for: Code Breakers
530 Flexible, Adaptable and Scaleable Business Rules Management System for Data Validation
Authors: Kashif Kamran, Farooque Azam
Abstract:
The policies governing the business of any organization are well reflected in its business rules. The business rules are implemented by data validation techniques, coded during the software development process. Any change in business policies results in changes to the data validation code used to enforce those policies. Implementing changes in business rules without changing the code is the objective of this paper. The proposed approach enables users to create rule sets at run time, once the software has been developed. The rule sets newly defined by end users are associated with the data variables for which validation is required. The proposed approach lets users define business rules using all the comparison and Boolean operators. Multithreading is used to validate the data entered by the end user against the applied business rules. The evaluation of the data is performed by a newly created thread using an enhanced form of the RPN (Reverse Polish Notation) algorithm.
Keywords: Business rules, data validation, multithreading, Reverse Polish Notation.
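A minimal sketch of the kind of rule evaluation the abstract describes, assuming rules are stored as postfix (RPN) token lists; the rule syntax, token names and variable bindings here are illustrative, not the paper's actual implementation (which also runs each evaluation in a separate thread, omitted here for brevity).

```python
# Minimal RPN (postfix) evaluator for data-validation rules.
# Hypothetical rule format: tokens are variable names, numeric literals,
# comparison operators and Boolean operators in postfix order.
import operator

OPS = {
    ">": operator.gt, "<": operator.lt, ">=": operator.ge,
    "<=": operator.le, "==": operator.eq, "!=": operator.ne,
    "AND": lambda a, b: a and b, "OR": lambda a, b: a or b,
}

def evaluate_rule(rpn_tokens, data):
    """Evaluate a postfix rule against a dict of field values."""
    stack = []
    for tok in rpn_tokens:
        if tok in OPS:                # binary operator: pop two operands
            b, a = stack.pop(), stack.pop()
            stack.append(OPS[tok](a, b))
        elif tok in data:             # data variable entered by the user
            stack.append(data[tok])
        else:                         # numeric literal from the rule set
            stack.append(float(tok))
    return stack.pop()

# "age >= 18 AND salary > 3000" written in postfix form:
rule = ["age", "18", ">=", "salary", "3000", ">", "AND"]
print(evaluate_rule(rule, {"age": 25, "salary": 4500}))   # True
print(evaluate_rule(rule, {"age": 16, "salary": 4500}))   # False
```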
529 Effective Relay Communication for Scalable Video Transmission
Authors: Jung Ah Park, Zhijie Zhao, Doug Young Suh, Joern Ostermann
Abstract:
In this paper, we propose an effective relay communication scheme for layered video transmission as an alternative that makes the most of limited resources in a wireless communication network where loss often occurs. Relaying brings stable multimedia services to end clients, compared to multiple description coding (MDC). In addition, retransmitting only parity data for one or more video layers, generated with a channel coder, from the relay device to the end client is paramount to robustness in loss situations. Using these methods in resource-constrained environments, such as real-time user created content (UCC) with layered video transmission, can provide high-quality services even in a poor communication environment. Minimal services are also possible. The mathematical analysis shows that the proposed method reduces the GOP loss rate compared to MDC and raptor coding without relay. The GOP loss rate is about zero, while MDC and raptor coding without relay have GOP loss rates of 36% and 70%, respectively, at a 10% frame loss rate.
Keywords: Relay communication, multiple description coding, scalable video coding.
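The GOP loss figures quoted above can be put in perspective with a simple back-of-the-envelope model. The sketch below assumes independent frame losses, no protection, and an illustrative GOP length of 12 frames; these are not necessarily the assumptions or the channel model used in the paper.

```python
# Back-of-the-envelope GOP loss estimate: a GOP of N frames is counted
# as lost if any of its frames is lost and cannot be recovered.
# gop_length = 12 is an illustrative value, not taken from the paper.
def gop_loss_rate(frame_loss_rate, gop_length=12):
    return 1.0 - (1.0 - frame_loss_rate) ** gop_length

print(f"{gop_loss_rate(0.10):.2f}")  # ~0.72 with no protection at all
```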
528 Study of the Late Phase of Core Degradation during Reflooding by Safety Injection System for VVER1000 with ASTECv2 Computer Code
Authors: Antoaneta Stefanova, Rositsa Gencheva, Pavlin Groudev
Abstract:
This paper presents the modeling approach for an SBO sequence in VVER 1000 reactors, describes the reactor core behavior in the late in-vessel phase in case of late reflooding by the HPIS, and gives preliminary results for ASTECv2 validation. The work is focused on the investigation of plant behavior during total loss of power and on the operator actions. The main goal of these analyses is to assess the phenomena arising during a station blackout (SBO) followed by primary-side high-pressure injection system (HPIS) reflooding of an already damaged reactor core at a very late in-vessel phase. The purpose of the analyses is to define how late HPIS actuation can delay the time of vessel failure or possibly avoid vessel failure. The times for HPP injection were chosen based on previously performed investigations.
Keywords: VVER, operator action validation, reflooding of overheated reactor core, ASTEC computer code.
527 Analysis of MAC Protocols with Correlation Receiver for OCDMA Networks - Part II
Authors: Shivaleela E. S., Shrikant S. Tangade
Abstract:
In this paper, an optical code-division multiple-access (OCDMA) packet network is considered, which offers inherent security in the access networks. Two types of random access protocols are proposed for packet transmission. In protocol 1, all distinct codes are used, and in protocol 2, distinct codes as well as shifted versions of all these codes are used. OCDMA network performance is analyzed using one-dimensional (1-D) optical orthogonal codes (OOCs) and two-dimensional (2-D) wavelength/time single-pulse-per-row (W/T SPR) codes. The main advantage of using 2-D codes instead of 1-D codes is to reduce the errors due to multiple access interference among different users. In this paper, a correlation receiver is considered in the analysis. Using an analytical model, we compute and compare the packet-success probability for 1-D and 2-D codes in an OCDMA network, and the analysis shows improved performance with 2-D codes as compared to 1-D codes.
Keywords: Optical code-division multiple-access, optical CDMA correlation receiver, wavelength/time optical CDMA codes.
526 Numerical Investigation of Indoor Air Quality and Thermal Comfort in a Ventilated Room
Authors: Ramy H. Mohammed
Abstract:
Understanding the behavior of airflow in a room is essential for building designers to provide the most efficient ventilation system design and acceptable indoor air quality. This is the motivation for relating airflow parameters to thermal comfort. This paper investigates the airflow characteristics, indoor air quality (IAQ), and thermal comfort (TC) in a room with a displacement ventilation system using the three-dimensional CFD code AirPak 2.0.6. After validation of the code, a numerical study is carried out for a typical room with dimensions of 5 m by 3 m by 3 m height for a range of supply air velocities, supply air temperatures and supply air relative humidities. The finite volume method and the indoor zero-equation turbulence model are employed for solving the governing equations numerically. The temperature field and the mean age of air (MAA) in the modeled room with displacement ventilation are determined for the range of parameters above. Variable air volume (VAV) systems with different supply air velocities can be used to control room air temperature in a displacement ventilation system.
Keywords: Displacement ventilation, AirPak, Indoor zero equation, MAA.
525 Analytical Mathematical Expression for the Channel Capacity of a Power and Rate Simultaneous Adaptive Cellular DS/FFH-CDMA System in a Rayleigh Fading Channel
Authors: P.Varzakas
Abstract:
In this paper, an accurate theoretical analysis of the achievable average channel capacity (in the Shannon sense) per user of a hybrid cellular direct-sequence/fast frequency hopping code-division multiple-access (DS/FFH-CDMA) system operating in a Rayleigh fading environment is presented. The analysis covers the downlink operation and leads to the derivation of an exact mathematical expression between the normalized average channel capacity available to each system user, under simultaneous optimal power and rate adaptation, and the system's parameters, such as the number of hops per bit, the applied processing gain, the number of users per cell and the received signal-to-noise power ratio over the signal bandwidth. Finally, numerical results are presented to illustrate the proposed mathematical analysis.
Keywords: Shannon capacity, adaptive systems, code-division multiple access, fading channels.
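For context, the sketch below evaluates the ergodic Shannon capacity per unit bandwidth of a single Rayleigh-fading link with rate adaptation only. It is a generic textbook illustration, not the paper's closed-form DS/FFH-CDMA expression, and the average SNR value is arbitrary.

```python
# Ergodic capacity C/W = E[log2(1 + gamma)] for Rayleigh fading, where
# the SNR gamma is exponentially distributed with mean avg_snr.
# Closed form: log2(e) * exp(1/avg_snr) * E1(1/avg_snr).
import numpy as np
from scipy.special import exp1

def ergodic_capacity(avg_snr):
    """Average capacity in bit/s/Hz for mean SNR avg_snr (linear scale)."""
    return np.log2(np.e) * np.exp(1.0 / avg_snr) * exp1(1.0 / avg_snr)

avg_snr = 10.0  # 10 dB, illustrative value only
print(f"C/W = {ergodic_capacity(avg_snr):.3f} bit/s/Hz")
```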
524 Experimental Results about the Dynamics of the Generalized Belief Propagation Used on LDPC Codes
Authors: Jean-Christophe Sibel, Sylvain Reynal, David Declercq
Abstract:
In the context of channel coding, the Generalized Belief Propagation (GBP) is an iterative algorithm used to recover the transmitted bits sent through a noisy channel. To ensure a reliable transmission, we apply a mapping to the bits, called a code. This code induces artificial correlations between the bits to send, and it can be modeled by a graph whose nodes are the bits and whose edges are the correlations. This graph, called the Tanner graph, is used by most decoding algorithms, such as Belief Propagation or Gallager-B. The GBP is based on a non-unique transformation of the Tanner graph into a so-called region graph. A clear advantage of the GBP over the other algorithms is the freedom in the construction of this graph. In this article, we explain a particular construction for specific graph topologies that yields good GBP performance. Moreover, we investigate the behavior of the GBP considered as a dynamical system in order to understand the way it evolves as a function of time and of the channel noise power. To this end, we make use of classical measures and we introduce a new measure, called the hyperspheres method, that enables estimation of the size of the attractors.
Keywords: iterative decoder, LDPC, region-graph, chaos.
523 Economic Returns of Using Brewery's Spent Grain in Animal Feed
Authors: U. Ben-Hamed, H. Seddighi, K. Thomas
Abstract:
UK breweries generate extensive by-products in the form of spent grain, slurry and yeast. Much of the spent grain is produced by large breweries and processed in bulk for animal feed. Spent brewery grains contain up to 20% protein (dry weight) and up to 60% fiber and are useful additions to animal feed. Bulk processing is economic and allows spent grain to be sold, providing an income to the brewery. A proportion of spent grain, however, is produced by small local breweries and is more variably distributed to farms or other users using intermittent collection methods. Such use is much less economic and may incur losses if not carefully assessed for transport costs. This study reports the economic returns of using wet brewery spent grain (WBSG) in animal feed using the Co-product Optimizer Decision Evaluator model (Cattle CODE), developed by the University of Nebraska to predict performance and economic returns when by-products are fed to finishing cattle. The results indicated that distance from brewery to farm had a significantly greater effect on the economics of use of small brewery spent grain and that alternative uses other than cattle feed may be important to develop.
Keywords: Animal feed, brewery spent grains, Cattle CODE, economic returns.
522 A Comparative Study of Turbulence Models Performance for Turbulent Flow in a Planar Asymmetric Diffuser
Authors: Samy M. El-Behery, Mofreh H. Hamed
Abstract:
This paper presents a computational study of the separated flow in a planar asymmetric diffuser. The steady RANS equations for turbulent incompressible fluid flow and six turbulence closures are used in the present study. The commercial software code FLUENT 6.3.26 was used for solving the set of governing equations with the various turbulence models. Five of the turbulence models are available directly in the code, while the v2-f turbulence model was implemented via User Defined Scalars (UDS) and User Defined Functions (UDF). A series of computational analyses is performed to assess the performance of the turbulence models at different grid densities. The results show that the standard k-ω, SST k-ω and v2-f models clearly performed better than the other models when an adverse pressure gradient was present. The RSM model shows an acceptable agreement with the velocity and turbulent kinetic energy profiles, but it failed to predict the location of the separation and attachment points. The standard k-ε and the low-Re k-ε models delivered very poor results.
Keywords: Turbulence models, turbulent flow, wall functions, separation, reattachment, diffuser.
521 A Model for Test Case Selection in the Software-Development Life Cycle
Authors: Adtha Lawanna
Abstract:
Software maintenance is one of the essential processes of the Software-Development Life Cycle. Maintaining software mainly involves correcting errors, revising code, preventing future errors, and improving performance and capacity. While such adjustments are being made, the software has to be retested to increase the level of assurance that it still meets its requirements. Accordingly, test cases must be selected to exercise the revised modules and the whole software. This problem is commonly addressed by regression test selection techniques such as retest-all selection, random/ad-hoc selection and safe regression test selection. In particular, the traditional techniques rely on a mapping between the test cases in a test suite and the lines of code each one executes. However, the lines of code are not the only requirement that can affect the size of the test suite; the number of functions and the number of faulty versions matter as well. Therefore, a model for test case selection is developed to cover those three requirements by an integral technique which can produce a smaller set of test cases when compared with the traditional regression selection techniques.
Keywords: Software maintenance, regression test selection, test case.
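A minimal sketch of the traditional mapping-based regression test selection that the abstract refers to: each test case is mapped to the lines it executes, and only tests covering modified lines are re-run. The data structures, test names and line numbers are illustrative, not the paper's model.

```python
# Coverage-based regression test selection: keep only the test cases
# whose executed lines intersect the set of modified lines.
def select_tests(coverage, modified_lines):
    """coverage: dict mapping test id -> set of executed line numbers."""
    return [t for t, lines in coverage.items() if lines & modified_lines]

coverage = {
    "test_login":    {10, 11, 12, 40},
    "test_checkout": {20, 21, 22},
    "test_report":   {40, 41, 42},
}
modified = {40, 41}          # lines touched by the latest change
print(select_tests(coverage, modified))  # ['test_login', 'test_report']
```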
519 Does the Adoption of IFRS Influence Earnings Management towards Small Positive Profits? Evidence from Emerging Markets
Authors: Sawcen Chebaane, Hakim Ben Othman
Abstract:
This paper investigates the effect of International Financial Reporting Standards (IFRS) adoption on the frequency of earnings management towards small positive profits. We focus on two emerging-market IFRS adopters: South Africa and Turkey. We tested our logistic regression using appropriate panel estimation techniques over a sample of 330 South African and 210 Turkish firm-year observations over the period 2002-2008. Our results document that mandatory adoption of IFRS is not associated with a reduction in earnings management towards small positive profits in emerging markets. These results contradict most of the previous findings of studies conducted in developed countries. Based on the legal-system factor, we compare the intensity of earnings management between a code law country (Turkey) and a common law country (South Africa) over the pre- and post-adoption periods. Our findings show that the frequency of such earnings management practice increases significantly for the code law country.
Keywords: Civil law, common law, emerging markets, mandatory IFRS adoption, small positive profits.
518 Design, Development and Implementation of a Temperature Sensor Using Zigbee Concepts
Authors: T.C.Manjunath, Ph.D., Ashok Kusagur, Shruthi Sanjay, Saritha Sindushree, C. Ardil
Abstract:
This paper deals with the design, development and implementation of a temperature sensor using ZigBee. The main aim of the work undertaken in this paper is to sense the temperature and to display the result on the LCD using ZigBee technology. ZigBee operates in the industrial, scientific and medical (ISM) radio bands: 868 MHz in Europe, 915 MHz in the USA and 2.4 GHz in most jurisdictions worldwide. The technology is intended to be simpler and cheaper than other WPANs such as Bluetooth. The most capable ZigBee node type is said to require only about 10% of the software of a typical Bluetooth or wireless Internet node, while the simplest nodes need about 2%. However, actual code sizes are much higher, more like 50% of the Bluetooth code size. ZigBee chip vendors have announced 128-kilobyte devices. In the design and development of the temperature sensor undertaken in this work, the temperature is sensed and, after amplification, fed to the microcontroller, which is connected to the ZigBee module; the module transmits the data, and at the other end a ZigBee receiver reads the data and displays it on the LCD. The software developed is highly accurate and works at a very high speed. The method developed shows the effectiveness of the scheme employed.
Keywords: Zigbee, Microcontroller, PIC, Transmitter, Receiver, Synchronous, Bluetooth, Communication.
517 A Sociolinguistic Study of the Outcomes of Arabic-French Contact in the Algerian Dialect Tlemcen Speech Community as a Case Study
Authors: R. Rahmoun-Mrabet
Abstract:
It is acknowledged that our style of speaking changes according to a wide range of variables such as gender, setting, the age of both the addresser and the addressee, the conversation topic, and the aim of the interaction. These differences in style are noticeable in monolingual and multilingual speech communities. Yet, they are more observable in speech communities where two or more codes coexist. The linguistic situation in Algeria reflects a state of bilingualism because of the coexistence of Arabic and French. Nevertheless, like all Arab countries, it is characterized by diglossia, i.e. the concomitance of Modern Standard Arabic (MSA) and Algerian Arabic (AA), the former standing for the ‘high variety’ and the latter for the ‘low variety’. The two varieties are derived from the same source but are used to fulfil distinct functions; that is, MSA is used in the domains of religion, literature, education and formal settings, while AA is used in informal settings, in everyday speech. French has strongly affected the Algerian language and culture because of the historical background of Algeria; thus, what can easily be noticed in Algeria is that everyday speech is characterized by code-switching between dialectal Arabic and French or by the use of borrowings. Tamazight is also very present in many regions of Algeria and is the mother tongue of many Algerians. Yet, it is not used in the west of Algeria, where the study has been conducted. The present work, which was carried out in the speech community of Tlemcen, Algeria, aims at depicting some of the outcomes of the contact of Arabic with French, such as code-switching, borrowing and interference. The question that has been asked is whether Algerians are aware of their use of borrowings or not. Three steps are followed in this research: the first one is to depict the sociolinguistic situation in Algeria and to describe the linguistic characteristics of the dialect of Tlemcen, which are specific to this city. The second one is concerned with data collection. Data have been collected from 57 informants who were given questionnaires and who have then been classified according to their age, gender and level of education. Information has also been collected through observation and note-taking. The third step is devoted to analysis. The results obtained reveal that most Algerians are aware of their use of borrowings. The present work clarifies how words are borrowed from French and then adapted to Arabic. It also illustrates the way in which singular words inflect into the plural. The results expose the main characteristics of borrowing as opposed to code-switching. The study also clarifies how interference occurs at the level of nouns, verbs and adjectives.
Keywords: Bilingualism, borrowing, code-switching, interference, language contact.
516 A New Approach for Prioritization of Failure Modes in Design FMEA using ANOVA
Authors: Sellappan Narayanagounder, Karuppusami Gurusami
Abstract:
The traditional Failure Mode and Effects Analysis (FMEA) uses the Risk Priority Number (RPN) to evaluate the risk level of a component or process. The RPN index is determined by calculating the product of the severity, occurrence and detection indexes. The most critically debated disadvantage of this approach is that various sets of these three indexes may produce an identical value of RPN. This research paper seeks to address the drawbacks of traditional FMEA and to propose a new approach to overcome these shortcomings. The Risk Priority Code (RPC) is used to prioritize failure modes when two or more failure modes have the same RPN. A new method is proposed to prioritize failure modes when there is a disagreement in the ranking scale for severity, occurrence and detection. An Analysis of Variance (ANOVA) is used to compare the means of the RPN values. The SPSS (Statistical Package for the Social Sciences) statistical analysis package is used to analyze the data. The results presented are based on two case studies. It is found that the proposed new methodology/approach resolves the limitations of the traditional FMEA approach.
Keywords: Failure mode and effects analysis, risk priority code, critical failure mode, analysis of variance.
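A tiny numerical illustration of the RPN ambiguity the abstract highlights: different severity/occurrence/detection triplets can yield the same RPN. The triplet values are made up for illustration; the paper's RPC tie-breaking and ANOVA comparison are not reproduced here.

```python
# RPN = severity * occurrence * detection. Distinct (S, O, D) sets can
# collide on the same RPN value, which is the weakness the paper
# addresses with its Risk Priority Code (RPC).
def rpn(severity, occurrence, detection):
    return severity * occurrence * detection

failure_modes = {"FM1": (9, 2, 5), "FM2": (3, 6, 5), "FM3": (5, 9, 2)}
for name, (s, o, d) in failure_modes.items():
    print(name, rpn(s, o, d))   # all three print 90, yet differ in severity
```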
515 The Design of Multiple Detection Parallel Combined Spread Spectrum Communication System
Authors: Lixin Tian, Wei Xue
Abstract:
Many activities vital to the development of society take place underground, such as mining, tunnel construction and subways. Once accidents occur in these places, the interruption of traditional wired communication is not conducive to rescue work. In order to realize the positioning, early warning and command functions for underground personnel and to improve rescue efficiency, it is necessary to develop and design an emergency ground communication system. Conventional underground communication is easily subjected to narrowband interference. Spread spectrum communication can be used to address this problem. However, general spread spectrum methods such as direct-sequence spreading are inefficient, so parallel combined spread spectrum (PCSS) communication is proposed to improve efficiency. PCSS communication not only has the anti-interference ability and good concealment of traditional spread spectrum systems, but also has a relatively high frequency band utilization rate and a strong information transmission capability, so this technology has been widely used in practice. This paper presents a PCSS communication model: the multiple detection parallel combined spread spectrum (MDPCSS) communication system. The principle of the MDPCSS communication system is described, that is, the sequence at the transmitting end is processed in blocks and cyclically shifted to facilitate multiple detection at the receiving end. The block diagrams of the transmitter and receiver of the MDPCSS communication system are introduced. At the same time, the calculation formula for the system bit error rate (BER) is introduced, and the simulation and analysis of the system BER are completed. Comparison with common parallel PCSS communication shows that it is indeed possible to reduce the BER and improve the system performance. Furthermore, the influence of different selected pseudo-code lengths on the system BER is simulated and analyzed, and the conclusion is that the larger the pseudo-code length, the smaller the system error rate.
Keywords: Cyclic shift, multiple detection, parallel combined spread spectrum, PN code.
514 Bandwidth Efficient Diversity Scheme Using STTC Concatenated With STBC: MIMO Systems
Authors: Sameru Sharma, Sanjay Sharma, Derick Engles
Abstract:
Multiple-input multiple-output (MIMO) systems are widely used to improve the quality and reliability of wireless transmission and to increase the spectral efficiency. However, in MIMO systems, multiple copies of the data are received after experiencing various channel effects. The complexity limitations of conventional decoding techniques on account of the number of antennas have been examined. Accordingly, we propose a modified sphere decoder (MSD-1) algorithm with lower complexity that gives rise to a system with high spectral efficiency. With the aim of increasing signal diversity, we apply a rotated quadrature amplitude modulation (QAM) constellation in multi-dimensional space. Finally, we propose a new architecture involving a space time trellis code (STTC) concatenated with a space time block code (STBC), using MSD-1 at the receiver, for improving system performance. The system gains have been verified in the presence of channel state information (CSI) errors.
Keywords: Channel state information, diversity, multi-antenna, rotated constellation, space time codes.
513 A Mesh Free Moving Node Method To Analyze Flow Through Spirals of Orbiting Scroll Pump
Authors: I.Banerjee, A.K.Mahendra, T.K.Bera, B.G.Chandresh
Abstract:
The scroll pump belongs to the category of positive displacement pumps and can be used for continuous pumping of gases at low pressure, apart from general vacuum applications. The shape of the volume occupied by the gas moves and deforms continuously as the spiral orbits. To capture flow features in such a domain, where the mesh deformation varies with time in a complicated manner, a mesh-less solver was found to be very useful. The Least Squares Kinetic Upwind Method (LSKUM) is a kinetic-theory-based mesh-free Euler solver working on an arbitrary distribution of points. Here, upwinding is enforced at the molecular level based on the kinetic flux vector splitting (KFVS) scheme. In the present study, we extend LSKUM to moving-node viscous flow applications. This new code for moving-node viscous flow, LSKUM-NS-MN, is validated against a standard airfoil pitching test case. The simulation performed for flow through the scroll pump using the LSKUM-NS-MN code agrees well with the experimental pumping speed data.
Keywords: Least squares, moving node, pitching, spirals.
512 A Study of Mode Choice Model Improvement Considering Age Grouping
Authors: Young-Hyun Seo, Hyunwoo Park, Dong-Kyu Kim, Seung-Young Kho
Abstract:
The purpose of this study is to provide an improved mode choice model considering parameters that include age grouping of the prime-aged and the old-aged. In this study, 2010 Household Travel Survey data were used, and improper samples were removed through the analysis. The chosen alternative, date of birth, mode, origin code, destination code, departure time, and arrival time are taken from the Household Travel Survey. By preprocessing the data, the travel time, travel cost, mode, and the ratios of people aged 45 to 55 years, 55 to 65 years and over 65 years were calculated. After this manipulation, the mode choice model was constructed using LIMDEP by maximum likelihood estimation. A significance test was conducted for nine parameters: three age groups for three modes. Then the test was conducted again for the mode choice model with the significant parameters, the travel cost variable and the travel time variable. As a result of the model estimation, as age increases, the preference for the car decreases and the preference for the bus increases. This study is meaningful in that individual and household characteristics are applied to the aggregate model.
Keywords: Age grouping, aging, mode choice model, multinomial logit model.
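A minimal sketch of how a multinomial logit mode choice probability is computed from travel time and travel cost. The coefficients, modes and attribute values below are illustrative; the paper's actual specification, estimated in LIMDEP with age-share variables, is richer than this.

```python
# Multinomial logit: P(mode m) = exp(V_m) / sum_k exp(V_k), with a
# linear-in-parameters utility V_m = b_time * time_m + b_cost * cost_m.
# The coefficients below are made-up illustrative values, not estimates.
import math

def mnl_probabilities(attrs, b_time=-0.05, b_cost=-0.002):
    utilities = {m: b_time * t + b_cost * c for m, (t, c) in attrs.items()}
    denom = sum(math.exp(v) for v in utilities.values())
    return {m: math.exp(v) / denom for m, v in utilities.items()}

# (travel time in minutes, travel cost in currency units), illustrative
alternatives = {"car": (30, 5000), "bus": (45, 1300), "subway": (40, 1350)}
print(mnl_probabilities(alternatives))
```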
511 Crossover Memories and Code-Switching in the Narratives of Arabic-Hebrew and Hebrew-English Bilingual Adults in Israel
Authors: Amani Jaber-Awida
Abstract:
This study examines two bilingual phenomena in the narratives of Arabic-Hebrew and Hebrew-English bilingual adults in Israel: crossover (CO) memories and code-switching (CS). The study examined these phenomena in the context of autobiographical memory, using a cue word technique. Student experimenters held two sessions in the homes of the participants. In separate language sessions, the participant was asked to look first at each of 16 cue words and then to state a concrete memory. After stating the memory, participants reported whether their memories were in the same language as the experiment session or a different one. Memories were classified as ‘Crossovers’ (CO) or ‘Same Language’ (SL) according to participants' self-reports. Participants were also required to elaborate on the setting, interlocutors and other languages involved in the specific memory. Beyond replicating the procedure of the cuing technique, one memory from a specific lifespan period was chosen per participant, and the participant was required to provide further details about it. For the more detailed memories, a CS count was conducted. Both bilingual groups confirmed the Reminiscence Bump phenomenon, retrieving more memories in the 10-30 age period. CO memories prevailed in second language (L2) sessions. Same language memories were more abundant in first language (L1) sessions. Higher CS frequency was found in L2 sessions. Finally, as predicted, 'individual' CS was prevalent in L2 sessions, but 'community-based' CS was not higher in L1 sessions. The two bilingual measures in this study, crossovers and CS, came from different research traditions, the former from an experimental paradigm in the psychology of autobiographical memory based on self-reported judgments, the latter a behavioral measure from linguistics. This merger of approaches offers new insight into the field of bilingual autobiographical memory. In addition, the study attempts to shed light on the investigation of motivations for CS, beginning with Walters’ SPPL Model and concluding with a distinction between ‘community-based’ and individual motivations.
Keywords: Autobiographical memory, code-switching, crossover memories, reminiscence bump.
510 Investigation of Building Loads Effect on the Stability of Slope
Authors: Hadj Brahim Mounia, Belhamel Farid, Souici Messoud
Abstract:
In big cities, construction on sloping land is becoming increasingly prevalent due to the unavailability of flat land. This has created a major challenge for structural engineers with regard to structural design, due to the difficulties encountered during the implementation of projects, both for the structure and for the soil. This paper analyses the effect of the number of floors of a building, founded on isolated footings, on the stability of the slope using the finite element computer code PLAXIS 2D v. 8.2. The isolated footings of the building in this case were anchored in the soil so that the levels of successive isolated footings realize a maximum base slope of three for two heights connecting the edges of the nearest footings, according to the Algerian building code DTR-BC 2.331: Shallow Foundations. The results show that the embedment of the foundations into the soil reduces the value of the safety factor due to the change in the stress state of the soil caused by these foundations. The number of floors a building has also influences the safety factor. It has been noticed from this case study that there is no risk of slope collapse for inclinations between 5° and 8°. For slope inclinations greater than 10°, it has been noticed that urbanization should be prohibited.
Keywords: Building, collapse, factor of safety, isolated footing, PLAXIS 2D, slope.
509 Challenges Facing Housing Developers to Deliver Zero Carbon Homes in England
Authors: M. Osmani, A. O'Reilly
Abstract:
Housebuilders in England have been the target of numerous government policies in recent years promoting increased productivity and affordability. As a result, the housebuilding industry is currently faced with objectives to improve the affordability and sustainability of new homes whilst also increasing production rates to 240,000 per year by 2016. Yet amidst a faltering economic climate, the UK Government is forging ahead with the 'Code for Sustainable Homes', which includes stringent sustainability standards for all new homes and sets ambitious targets for the housebuilding industry, the culmination of which is the production of zero carbon homes by 2016. Great uncertainty exists amongst housebuilders as to the costs, benefits and risks of building zero carbon homes. This paper examines the key barriers to zero carbon homes from the housebuilders' perspective. A comprehensive opinion on the challenges of delivering zero carbon homes is gathered through a questionnaire survey issued to the major housing developers in England. The study found that a number of cultural, legislative, and financial barriers stand in the way of the widespread construction of zero carbon homes. The study concludes with several recommendations to both the Government and the housebuilding industry to address the barriers that hinder a successful delivery of zero carbon homes in England.
Keywords: Zero carbon homes, Code for Sustainable Homes, housebuilders, England
508 Result Validation Analysis of Steel Testing Machines
Authors: Wasiu O. Ajagbe, Habeeb O. Hamzat, Waris A. Adebisi
Abstract:
Structural failures occur due to a number of reasons. These may include under-design, poor workmanship, substandard materials, misleading laboratory tests and more. Reinforcing steel bar is an important construction material, hence its properties must be accurately known before being utilized in construction. Understanding these properties involves carrying out mechanical tests prior to design and during construction to ascertain correlation, using a steel testing machine which is usually not readily available due to the location of the project. This study was conducted to determine the reliability of reinforcing steel testing machines. A reconnaissance survey was conducted to identify laboratories where yield and ultimate tensile strength tests can be carried out. Six laboratories were identified within Ibadan and environs. However, only four were functional at the time of the study. Three steel samples were tested for yield and tensile strengths, using a steel testing machine, at each of the four laboratories (LM, LO, LP and LS). The yield and tensile strength results obtained from the laboratories were compared with the manufacturer's specification using a reliability analysis programme. A structured questionnaire was administered to the operators in each laboratory to consider their impact on the test results. The average values of the manufacturer's tensile strength and yield strength are 673.7 N/mm2 and 559.7 N/mm2 respectively. The tensile strengths obtained from the four laboratories LM, LO, LP and LS are 579.4, 652.7, 646.0 and 649.9 N/mm2 respectively, while their yield strengths are 453.3, 597.0, 550.7 and 564.7 N/mm2 respectively. The minimum tensile-to-yield strength ratio is 1.08 for BS 4449:2005 and 1.15 for ASTM A615. The tensile-to-yield strength ratios from the four laboratories are 1.28, 1.09, 1.17 and 1.15 for LM, LO, LP and LS respectively. The tensile-to-yield strength ratios show that the results obtained from all the laboratories meet the requirements of the code used for the test. The result of the reliability test shows varying levels of reliability between the manufacturer's specification and the results obtained from the laboratories. Three of the laboratories, LO, LS and LP, have high reliability values with respect to the manufacturer, i.e. 0.798, 0.866 and 0.712 respectively. The fourth laboratory, LM, has a reliability value of 0.100. Steel tests should be carried out in a laboratory using the same code in which the structural design was carried out. More emphasis should be laid on the importance of code provisions.
Keywords: Reinforcing steel bars, reliability analysis, tensile strength, universal testing machine, yield strength.
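The tensile-to-yield ratios reported above follow directly from the quoted laboratory averages. The short sketch below reproduces that arithmetic and the comparison against the code minima cited in the abstract (1.08 for BS 4449:2005, 1.15 for ASTM A615).

```python
# Tensile-to-yield strength ratio check against code minima, using the
# laboratory average strengths quoted in the abstract (N/mm^2).
labs = {  # lab: (tensile strength, yield strength)
    "LM": (579.4, 453.3), "LO": (652.7, 597.0),
    "LP": (646.0, 550.7), "LS": (649.9, 564.7),
}
for lab, (tensile, yield_strength) in labs.items():
    ratio = tensile / yield_strength
    print(f"{lab}: ratio = {ratio:.2f}, "
          f"BS 4449 ok: {ratio >= 1.08}, ASTM A615 ok: {ratio >= 1.15}")
```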
507 An Improved Method on Static Binary Analysis to Enhance the Context-Sensitive CFI
Authors: Qintao Shen, Lei Luo, Jun Ma, Jie Yu, Qingbo Wu, Yongqi Ma, Zhengji Liu
Abstract:
Control Flow Integrity (CFI) is one of the most promising techniques to defend against Code-Reuse Attacks (CRAs). Traditional CFI systems and recent Context-Sensitive CFI use coarse control flow graphs (CFGs) to analyze whether a control flow hijack occurs, leaving vast space for attackers at indirect call-sites. Coarse CFGs make it difficult to decide which target to execute at indirect control-flow transfers, and actually weaken the existing CFI systems. Extracting CFGs precisely and completely from binaries is still an unsolved problem. In this paper, we present an algorithm to obtain a more precise CFG from binaries. First, parameters are analyzed at indirect call-sites and functions. By comparing the counts of parameters prepared before call-sites and consumed by functions, the targets of indirect calls are reduced. The control flow is then more constrained at indirect call-sites at runtime. Combined with CCFI, we implement our policy. Experimental results on some popular programs show that our approach is efficient. Further analysis shows that it can mitigate COOP and other advanced attacks.
Keywords: Context-sensitive, CFI, binary analysis, code reuse attack.
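A schematic sketch of the parameter-count matching idea described above: an indirect call-site that prepares a certain number of arguments is only allowed to target functions whose parameter count is compatible. The exact comparison rule, the function names and the counts below are illustrative, not output of the authors' binary analysis.

```python
# Coarse CFG refinement by argument-count matching: an indirect
# call-site that prepares n arguments may only target functions that
# consume at most n parameters (">=" is used here for illustration;
# the precise comparison rule is a detail of the paper).
def allowed_targets(args_prepared, candidate_functions):
    """candidate_functions: dict mapping function name -> parameter count."""
    return {f for f, n_params in candidate_functions.items()
            if args_prepared >= n_params}

functions = {"log_msg": 1, "memcpy_wrap": 3, "close_conn": 1, "handler": 2}
# A call-site preparing 2 arguments keeps log_msg, close_conn, handler
# and excludes memcpy_wrap.
print(allowed_targets(2, functions))
```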
506 A Coupled Model for Two-Phase Simulation of a Heavy Water Pressure Vessel Reactor
Authors: Damian Ramajo, Santiago Corzo, Norberto Nigro
Abstract:
A multi-dimensional computational fluid dynamics (CFD) two-phase model was developed with the aim of simulating the in-core coolant circuit of a pressurized heavy water reactor (PHWR) of a commercial nuclear power plant (NPP). Because this PHWR is of the Reactor Pressure Vessel (RPV) type, three-dimensional (3D) detailed modelling of the large reservoirs of the RPV (the upper and lower plenums and the downcomer) was coupled with an in-house finite volume one-dimensional (1D) code in order to model the 451 coolant channels housing the nuclear fuel. In the 1D code, suitable empirical correlations were used to take into account the in-channel distributed (friction) and concentrated (spacer grids, inlet and outlet throttles) pressure losses. A local power distribution in each of the coolant channels was also taken into account. The heat transfer between the coolant and the surrounding moderator was accurately calculated using a two-dimensional theoretical model. The implementation of subcooled boiling and condensation models in the 1D code, along with the use of functions representing the thermal and dynamic properties of the coolant and moderator (heavy water), allows estimation of the in-core steam generation under nominal flow conditions for a generic fission power distribution. The in-core mass flow distribution results for steady-state nominal conditions are in agreement with those expected from design, thus providing a first assessment of the coupled 1D/3D model. Results for the nominal condition were compared with those obtained with a previous 1D/3D single-phase model, yielding more realistic temperature patterns and also allowing visualization of low values of void fraction inside the upper plenum. It must be mentioned that the current results were obtained by imposing prescribed fission power functions from the literature. Therefore, the results are shown with the aim of pointing out the potential of the developed model.
Keywords: CFD, PHWR, thermo-hydraulics, two-phase flow.
505 Software Tools for System Identification and Control using Neural Networks in Process Engineering
Authors: J. Fernandez de Canete, S. Gonzalez-Perez, P. del Saz-Orozco
Abstract:
Neural networks offer an alternative approach both for the identification and the control of nonlinear processes in process engineering. The lack of software tools for the design of controllers based on neural network models is particularly pronounced in this field. SIMULINK is a widely used graphical code development environment which allows system-level developers to perform rapid prototyping and testing. Such a graphics-based programming environment involves block-based code development and offers a more intuitive approach to modeling and control tasks in a great variety of engineering disciplines. In this paper, a SIMULINK-based neural tool has been developed for the analysis and design of multivariable neural-based control systems. This tool has been applied to the control of a high-purity distillation column including nonlinear hydrodynamic effects. The proposed control scheme offers an optimal response for both the theoretical and practical challenges posed in the process control task, in particular when both the quality improvement of distillation products and the operational efficiency in economic terms are considered.
Keywords: Distillation, neural networks, software tools, identification, control.
504 Image Transmission in Low-Power Networks in Mobile Communications Channel
Authors: M. A. M. El-Bendary, H. Kazimian, A. E. Abo-El-azm, N. A. El-Fishawy, F. El-Samie, F. Shawki
Abstract:
This paper studies a vital issue in wireless communications, which is the transmission of images over Wireless Personal Area Networks (WPANs) through the Bluetooth network. It presents a simple method to improve the efficiency of the error control code of older Bluetooth versions over mobile WPANs through an Interleaved Error Control Code (IECC) technique. The encoded packets are interleaved by a simple block interleaver. The paper also presents a chaotic interleaving scheme, based on the chaotic Baker map, as a tool against bursts of errors. Furthermore, the paper proposes using the chaotic interleaver instead of the traditional block interleaver with the Forward Error Control (FEC) scheme. A comparison study between the proposed and standard techniques for image transmission over a correlated fading channel is presented. Simulation results reveal the superiority of the proposed chaotic interleaving scheme over the other schemes, and the superiority of FEC with the proposed chaotic interleaver over conventional interleavers, while enhancing the security level through packet-by-packet chaotic interleaving.
Keywords: Mobile Bluetooth terminals, WPANs, Jakes' model, interleaving technique, chaotic interleaver.
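A minimal sketch of the traditional block interleaver that the chaotic scheme is proposed to replace: bits are written row-by-row into a square matrix and read out column-by-column, which spreads a burst of consecutive channel errors over distant positions after de-interleaving. In the paper, a chaotic Baker-map permutation takes the place of this row/column reordering; that permutation is not reproduced here.

```python
# Simple N x N block interleaver: write row-wise, read column-wise.
# For a square matrix the same operation also acts as the de-interleaver.
import numpy as np

def block_interleave(bits, n):
    return np.asarray(bits).reshape(n, n).T.flatten()

def block_deinterleave(bits, n):
    return np.asarray(bits).reshape(n, n).T.flatten()

n = 4
data = np.arange(n * n)             # stand-in for encoded packet bits
tx = block_interleave(data, n)      # transmitted order
rx = block_deinterleave(tx, n)      # recovered order at the receiver
assert np.array_equal(rx, data)     # round trip restores the packet
print(tx)                           # a burst in tx maps to scattered bits
```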
503 Passive Flow Control in Twin Air-Intakes
Authors: Akshoy R. Paul, Pritanshu Ranjan, Ravi R. Upadhyay, Anuj Jain
Abstract:
Aircraft propulsion systems often use Y-shaped subsonic diffusing ducts as twin air-intakes to supply ambient air to the engine compressor for thrust generation. Due to space constraints, the diffusers need to be curved, which causes severe flow non-uniformity at the engine face. The present study attempts to control the flow in a mildly curved Y-duct diffuser using trapezoidal-shaped vortex generators (VG) attached either to both sidewalls or to the top and bottom walls of the diffuser at the inflexion plane. A commercial computational fluid dynamics (CFD) code is modified and used to simulate the effects of the VGs on the flow in a Y-duct diffuser. A few experiments are conducted for CFD code validation, while the rest are done computationally. The best combination for the Y-duct diffuser is found with VG-2 arranged in a co-rotating sequence and attached to both sidewalls, which ensures the highest static pressure recovery, the lowest total pressure loss, minimum flow distortion and less flow separation in the Y-duct diffuser. Decreasing the VG height when attached to the top and bottom walls further improves the axial flow uniformity at the diffuser outlet by a great margin as compared to the bare duct.
Keywords: Twin air-intake, vortex generator (VG), turbulence model, pressure recovery, distortion coefficient.
502 Reliability Levels of Reinforced Concrete Bridges Obtained by Mixing Approaches
Authors: Adrián D. García-Soto, Alejandro Hernández-Martínez, Jesús G. Valdés-Vázquez, Reyna A. Vizguerra-Alvarez
Abstract:
Reinforced concrete bridges designed by code are intended to achieve target reliability levels adequate for the geographical environment where the code is applicable. Several methods can be used to estimate such reliability levels. Many of them require the establishment of an explicit limit state function (LSF). When such an LSF is not available as a closed-form expression, simulation techniques are often employed. The simulation methods are computationally intensive and time consuming. Note that if the reliability of real bridges designed by code is of interest, numerical schemes, the finite element method (FEM) or computational mechanics could be required. In these cases, it can be quite difficult (or impossible) to establish a closed form of the LSF, and simulation techniques may be necessary to compute reliability levels. To overcome the need for a large number of simulations when no explicit LSF is available, the point estimate method (PEM) can be considered as an alternative. It has the advantage that only the probabilistic moments of the random variables are required. However, in the PEM, fitting of the resulting moments of the LSF to a probability density function (PDF) is needed. In the present study, a very simple alternative is employed which allows the assessment of reliability levels when no explicit LSF is available and without the need for extensive simulations. The alternative includes the use of the PEM, and its applicability is shown by assessing reliability levels of reinforced concrete bridges in Mexico when a numerical scheme is required. Comparisons with results obtained using the Monte Carlo simulation (MCS) technique are included. To overcome the problem of approximating the probabilistic moments from the PEM to a PDF, a well-known distribution is employed. The approach mixes the PEM with another classic reliability method (the first-order reliability method, FORM). The results in the present study are in good agreement with those computed with the MCS. Therefore, the alternative of mixing the reliability methods is a very valuable option to determine reliability levels when no closed form of the LSF is available, or if numerical schemes, the FEM or computational mechanics are employed.
Keywords: Structural reliability, reinforced concrete bridges, mixing approaches, point estimate method, Monte Carlo simulation.
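A compact sketch of the kind of mixing the abstract describes: Rosenblueth's two-point estimate method yields the first two moments of a limit state function from only a few evaluations (standing in for an expensive numerical bridge model), and those moments are then converted to a reliability index and failure probability under an assumed normal distribution. The limit state g = R - S, the means and the standard deviations below are all illustrative, not taken from the paper.

```python
# Two-point estimate method (Rosenblueth) for g(R, S) = R - S with two
# independent, symmetric random variables, followed by a FORM-style
# conversion of the resulting moments to a reliability index, assuming
# the limit state function is normally distributed.
from itertools import product
from math import sqrt
from scipy.stats import norm

def g(R, S):                 # limit state: failure occurs when g < 0
    return R - S

means = {"R": 120.0, "S": 80.0}   # illustrative resistance and load
stds  = {"R": 12.0,  "S": 16.0}

# Evaluate g at the 2^n combinations of (mean +/- std), equal weights.
vals = [g(means["R"] + sR * stds["R"], means["S"] + sS * stds["S"])
        for sR, sS in product((+1, -1), repeat=2)]
mu_g = sum(vals) / len(vals)
sigma_g = sqrt(sum((v - mu_g) ** 2 for v in vals) / len(vals))

beta = mu_g / sigma_g              # reliability index from the moments
print(f"beta = {beta:.2f}, Pf = {norm.cdf(-beta):.2e}")
```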
501 Survey Based Data Security Evaluation in Pakistan Financial Institutions against Malicious Attacks
Authors: Naveed Ghani, Samreen Javed
Abstract:
In today’s heterogeneous network environment, there is a growing demand for mutually distrusting clients to jointly operate a secure network and prevent malicious attacks, since the defining task of propagating malicious code is to locate new targets to attack. Residual risk is always present, no matter what solutions are implemented or whatever security methodology or standards are adopted. Security is a first and crucial concern in the field of computer science; the main aim of computer security is the protection of information over a secure network. No one need wonder what all that malware is trying to do: it is trying to steal money through data theft, bank transfers, stolen passwords, or swiped identities. With the help of our survey, we learn about the importance of whitelisting, anti-malware programs, security patches, log files, honeypots, and more, as used in banks for financial data protection. However, there is also a need to implement IPv6 tunneling with cryptographic data transformation, in line with the requirements of new technology, to protect the organization from new malware attacks that craft their own messages and send them to the target. In this paper, the writer presents the idea of implementing IPv6 tunneling sessions for private data transmission from financial organizations whose secrecy needs to be safeguarded.
Keywords: Network worms, malware infection propagating malicious code, virus, security, VPN.