Search results for: Transition probability
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 949

679 Microscopic Analysis of Interfacial Transition Zone of Cementitious Composites Prepared by Various Mixing Procedures

Authors: Josef Fládr, Jiří Němeček, Veronika Koudelková, Petr Bílý

Abstract:

Mechanical parameters of cementitious composites differ quite significantly based on the composition of the cement matrix. They are also influenced by mixing times and procedures. The research presented in this paper was aimed at identifying differences in the microstructure of normal strength (NSC) and differently mixed high strength (HSC) cementitious composites. Scanning electron microscopy (SEM) investigation together with energy dispersive X-ray spectroscopy (EDX) phase analysis of NSC and HSC samples was conducted. Evaluation of the interfacial transition zone (ITZ) between the aggregate and the cement matrix was performed. Volume share, thickness, porosity and composition of the ITZ were studied. In the case of HSC, samples obtained by several different mixing procedures were compared in order to find the most suitable procedure. In the case of NSC, an ITZ was identified around 40-50% of aggregate grains and its thickness typically ranged between 10 and 40 µm. Higher porosity and a lower share of clinker were observed in this area as a result of the increased water-to-cement ratio (w/c) and the lack of fine particles improving the grading curve of the aggregate. A typical ITZ with lower Ca content was observed in only one HSC sample, where it developed around less than 15% of aggregate grains. The typical thickness of the ITZ in this sample was similar to that in NSC (between 5 and 40 µm). In the remaining four HSC samples, no ITZ was observed. In general, the share of ITZ in HSC samples was found to be significantly smaller than in NSC samples. As the ITZ is the weakest part of the material, this result explains to a large extent the improved mechanical properties of HSC compared to NSC. Based on the comparison of ITZ characteristics in HSC samples prepared by different mixing procedures, the most suitable mixing procedure from the point of view of ITZ properties was identified.

Keywords: Energy dispersive X-ray spectroscopy, high strength concrete, interfacial transition zone, mixing procedure, normal strength concrete, scanning electron microscopy.

PDF Downloads 1273
678 Further Investigations on Higher Mathematics Scores for Chinese University Students

Authors: Xun Ge

Abstract:

Recently, X. Ge and J. Qian investigated some relations between higher mathematics scores and calculus scores (resp. linear algebra scores, probability and statistics scores) for Chinese university students. Based on rough-set theory, they established an information system S = (U, C ∪ D, V, f). In this information system, the higher mathematics score was taken as the decision attribute, and the calculus score, linear algebra score and probability and statistics score were taken as condition attributes. They investigated the importance of each condition attribute with respect to the decision attribute and the strength with which each condition attribute supports the decision attribute. In this paper, we give further investigations of this issue. Based on the above information system S = (U, C ∪ D, V, f), we analyze the decision rules between condition and decision granules. For each x ∈ U, we obtain the support (resp. strength, certainty factor, coverage factor) of the decision rule C →x D, where C →x D is the decision rule induced by x in S = (U, C ∪ D, V, f). The results of this paper give a new analysis of higher mathematics scores for Chinese university students, which can further help Chinese university students to raise their higher mathematics scores in the Chinese graduate student entrance examination.
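
The support, strength, certainty factor and coverage factor named above follow the standard rough-set definitions for a decision rule C →x D. A minimal sketch of these four quantities on a toy decision table is given below; the table values and attribute names are illustrative assumptions, not the authors' data.

```python
# Toy decision table: condition attributes (calculus, linear_algebra, prob_stat)
# and a decision attribute (higher_math). Values are illustrative assumptions.
rows = [
    ("high", "high", "mid", "high"),
    ("high", "high", "mid", "high"),
    ("mid",  "low",  "mid", "mid"),
    ("mid",  "low",  "mid", "low"),
    ("low",  "low",  "low", "low"),
]

def rule_metrics(rows, x):
    """Rough-set metrics of the decision rule C ->x D induced by object x."""
    c_x, d_x = rows[x][:-1], rows[x][-1]
    n = len(rows)
    match_c  = sum(1 for r in rows if r[:-1] == c_x)                    # |C(x)|
    match_d  = sum(1 for r in rows if r[-1] == d_x)                     # |D(x)|
    match_cd = sum(1 for r in rows if r[:-1] == c_x and r[-1] == d_x)   # |C(x) ∩ D(x)|
    support   = match_cd                   # objects matching both sides of the rule
    strength  = match_cd / n               # support relative to the whole universe
    certainty = match_cd / match_c         # P(D(x) | C(x))
    coverage  = match_cd / match_d         # P(C(x) | D(x))
    return support, strength, certainty, coverage

for x in range(len(rows)):
    print(x, rule_metrics(rows, x))
```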

Keywords: Rough set, support, strength, certainty factor, coverage factor.

PDF Downloads 1368
677 Vector Space of the Extended Base-triplets over the Galois Field of five DNA Bases Alphabet

Authors: Robersy Sánchez, Ricardo Grau

Abstract:

A plausible architecture of an ancient genetic code is derived from an extended base-triplet vector space over the Galois field of the extended base alphabet {D, G, A, U, C}, where the letter D represents one or more hypothetical bases with unspecific pairing. We hypothesized that the high degeneracy of a primeval genetic code with five bases and the gradual origin and improvement of a primitive DNA repair system could make possible the transition from the ancient to the modern genetic code. Our results suggest that the Watson-Crick base pairing and the non-specific base pairing of the hypothetical ancestral base D, used to define the sum and product operations, are sufficient to determine the coding constraints of the primeval and the modern genetic code, as well as the transition from the former to the latter. Geometrical and algebraic properties of this vector space reveal that the present codon assignment of the standard genetic code could be induced from a primeval codon assignment. Besides, the Fourier spectrum of the extended DNA genome sequences derived from the multiple sequence alignment suggests that the so-called period-3 property of present coding DNA sequences could also exist in ancient coding DNA sequences.
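
To make the algebraic setting concrete, the sketch below performs sum and product operations over GF(5) on the extended alphabet {D, G, A, U, C} and adds two base triplets componentwise. The particular ordering of the letters onto the field elements 0..4 is an illustrative assumption; the paper derives its own operations from base-pairing properties.

```python
# Map the extended alphabet onto the elements of GF(5) = Z/5Z.
# The particular ordering below is an illustrative assumption.
ALPHABET = "DGAUC"
to_gf   = {b: i for i, b in enumerate(ALPHABET)}
to_base = dict(enumerate(ALPHABET))

def add(x, y):            # sum of two bases in GF(5)
    return to_base[(to_gf[x] + to_gf[y]) % 5]

def mul(x, y):            # product of two bases in GF(5)
    return to_base[(to_gf[x] * to_gf[y]) % 5]

def triplet_add(t1, t2):
    """Componentwise sum of two extended base triplets (codon vectors)."""
    return "".join(add(a, b) for a, b in zip(t1, t2))

print(triplet_add("GAU", "CDA"))   # vector addition of two codons
print(mul("A", "C"))               # product of two bases
```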

Keywords: Genetic code vector space, primeval genetic code, power spectrum.

PDF Downloads 2362
676 Performance Evaluation of a Prioritized, Limited Multi-Server Processor-Sharing System That Includes Servers with Various Capacities

Authors: Yoshiaki Shikata, Nobutane Hanayama

Abstract:

We present a prioritized, limited multi-server processor sharing (PS) system in which each server has a different capacity and N (≥ 2) priority classes are allowed in each PS server. In each prioritized, limited server, a different service ratio is assigned to each class of request, and the number of requests being processed is limited to less than a certain number. Routing strategies for such prioritized, limited multi-server PS systems that take into account the capacity of each server are also presented, and a performance evaluation procedure for these strategies is discussed. Practical performance measures of these strategies, such as the loss probability, mean waiting time, and mean sojourn time, are evaluated via simulation. In a PS server, at the arrival (or departure) of a request, the extension (or shortening) of the remaining sojourn time of each request receiving service can be calculated from the number of requests of each class and the priority ratios. Using a simulation program which executes these events and calculations, the performance of the proposed prioritized, limited multi-server PS rule can be analyzed. From the evaluation results, the most suitable routing strategy for the loss or waiting system is clarified.
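
The rescaling step described above, where each in-service request's remaining sojourn time stretches or shrinks when the class mix changes, can be sketched as follows. The class weights, server capacity and the event sequence below are illustrative assumptions, not the paper's parameters.

```python
# Minimal sketch: discriminatory processor sharing inside one server.
# Each in-service request receives a share  w_class / sum(w over in-service requests)
# of the server capacity; remaining sojourn time = remaining work / (capacity * share).

CAPACITY = 1.0                 # work units per second (assumed)
WEIGHTS = {1: 2.0, 2: 1.0}     # priority ratios for classes 1 and 2 (assumed)

def remaining_sojourn_times(in_service):
    """in_service: list of (class_id, remaining_work)."""
    total_w = sum(WEIGHTS[c] for c, _ in in_service)
    times = []
    for c, work in in_service:
        rate = CAPACITY * WEIGHTS[c] / total_w   # instantaneous service rate
        times.append(work / rate)
    return times

jobs = [(1, 3.0), (2, 3.0)]
print(remaining_sojourn_times(jobs))       # before a new arrival
jobs.append((1, 5.0))                      # arrival of a class-1 request...
print(remaining_sojourn_times(jobs))       # ...stretches everyone's remaining sojourn time
```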

Keywords: Processor sharing, multi-server, various capacity, N priority classes, routing strategy, loss probability, mean sojourn time, mean waiting time, simulation.

PDF Downloads 1034
675 Inflation and Unemployment Rates as Indicators of the Transition European Union Countries Monetary Policy Orientation

Authors: Elza Jurun, Damir Piplica, Tea Poklepović

Abstract:

Numerous studies carried out in developed western democratic countries have shown that the ideological framework of the governing party has a significant influence on monetary policy. An executive authority consisting of a left-wing party gives a higher weight to unemployment suppression, and the central bank implements a more expansionary monetary policy. On the other hand, a right-wing governing party considers monetary stability to be more important than unemployment suppression, and in such a political framework the main macroeconomic objective becomes the reduction of the inflation rate. The political framework conditions in the transition countries which are new European Union (EU) members are still highly specific in relation to the other EU member countries. The focus of this paper is the question of whether the same monetary policy principles are valid in these transition countries as in the developed western democratic EU member countries. The database consists of the inflation rate and unemployment rate for 11 transition EU member countries covering the period from 2001 to 2012. The essential information for each of these 11 countries, and for each year of the observed period, is the right or left political orientation of the ruling party. In this paper we use t-statistics to test our hypothesis that there are differences in inflation and unemployment between right and left political orientations of the governing party. To explore the influence of different countries, years and political orientations, descriptive statistics are used. Inflation and unemployment should be strongly negatively correlated through time, which is tested using the Pearson correlation coefficient. Depending on whether the governing authority consists of left- or right-oriented parties, the monetary authorities will adjust their policy, setting a higher priority on lower inflation or on unemployment reduction.
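
A minimal sketch of the two tests the abstract names, run on made-up numbers rather than the authors' 2001-2012 panel of 11 countries; the distributions below are placeholders.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Illustrative stand-ins for annual inflation rates (%) under left- and
# right-oriented governments; NOT the authors' data.
infl_left  = rng.normal(4.5, 1.5, 30)
infl_right = rng.normal(3.5, 1.5, 30)

# Two-sample t-test: do mean inflation rates differ by political orientation?
t_stat, p_val = stats.ttest_ind(infl_left, infl_right, equal_var=False)
print(f"t = {t_stat:.2f}, p = {p_val:.3f}")

# Pearson correlation between inflation and unemployment over time
unemployment = 10.0 - 0.8 * infl_left + rng.normal(0, 0.5, 30)
r, p = stats.pearsonr(infl_left, unemployment)
print(f"Pearson r = {r:.2f}, p = {p:.3f}")
```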

Keywords: Inflation rate, monetary policy orientation, transition EU countries, unemployment rate.

PDF Downloads 2323
674 Improvising Intrusion Detection for Malware Activities on Dual-Stack Network Environment

Authors: Zulkiflee M., Robiah Y., Nur Azman Abu, Shahrin S.

Abstract:

Malware is software invented and meant to do harm to computers. Malware is becoming a significant threat in computer networks nowadays. Malware attacks not only involve financial loss, but can also cause fatal errors which may cost lives in some cases. As the new Internet Protocol version 6 (IPv6) emerged, many people believed this protocol could solve most malware propagation issues due to its broader addressing scheme. As IPv6 is still new compared to native IPv4, some transition mechanisms have been introduced to promote a smoother migration. Unfortunately, these transition mechanisms allow some malware to propagate its attack from the IPv4 to the IPv6 network environment. In this paper, a proof of concept is presented in order to show that some existing IPv4 malware detection techniques need to be improved in order to detect malware attacks in dual-stack networks more efficiently. A testbed of a dual-stack network environment has been deployed and some genuine malware has been released to observe its behavior. The results of these different scenarios are analyzed and discussed further in terms of behavior and propagation methods. The results show that malware behaves differently on IPv6 than on the IPv4 network protocol in the dual-stack network environment. A new detection technique is called for in order to address this problem in the near future.

Keywords: Dual-Stack, Malware, Worm, IPv6, IDS.

PDF Downloads 2003
673 Comparison between Deterministic and Probabilistic Stability Analysis, Featuring Consequent Risk Assessment

Authors: Isabela Moreira Queiroz

Abstract:

Slope stability analyses are largely carried out by deterministic methods and evaluated through a single safety factor. Although it is known that geotechnical parameters can show great dispersion, such analyses treat them as fixed and known. Probabilistic methods, in turn, incorporate the variability of key input parameters (random variables), resulting in a range of safety factor values and thus enabling the determination of the probability of failure, which is an essential parameter in the calculation of risk (probability multiplied by the consequence of the event). Among the probabilistic methods, three are frequently used in the geotechnical community: FOSM (First-Order, Second-Moment), Rosenblueth (Point Estimates) and Monte Carlo. This paper presents a comparison between the results of deterministic and probabilistic analyses (FOSM, Monte Carlo and Rosenblueth methods) applied to a hypothetical slope. The aim was to evaluate the behavior of the slope and the consequent risk analysis, which is used to calculate the risk and to analyze mitigation and control solutions. It can be observed that the results obtained by the three probabilistic methods were quite close. It should be noted that the calculation of risk makes it possible to prioritize the implementation of mitigation measures. Therefore, it is recommended to carry out a good assessment of the geological-geotechnical model, incorporating the uncertainty in feasibility, design, construction, operation and closure by means of risk management.
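
A minimal Monte Carlo sketch of the probability-of-failure idea, using an infinite-slope factor-of-safety expression as a stand-in for the hypothetical slope analysed in the paper; the distributions and slope geometry below are assumptions.

```python
import numpy as np

rng = np.random.default_rng(42)
N = 100_000

# Assumed random variables (normal) for an infinite-slope model:
phi = np.radians(rng.normal(30, 3, N))   # friction angle [deg -> rad]
c   = rng.normal(10, 2, N)               # cohesion [kPa]
gamma, depth, beta = 18.0, 5.0, np.radians(35)   # unit weight, depth, slope angle

# Infinite-slope factor of safety (dry slope, planar failure surface)
fs = (c + gamma * depth * np.cos(beta)**2 * np.tan(phi)) / \
     (gamma * depth * np.sin(beta) * np.cos(beta))

pf = np.mean(fs < 1.0)                   # probability of failure
print(f"mean FS = {fs.mean():.2f}, P(failure) = {pf:.4f}")
```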

Keywords: Probabilistic methods, risk assessment, risk management, slope stability.

PDF Downloads 1738
672 Performance Evaluation of a Limited Round-Robin System

Authors: Yoshiaki Shikata

Abstract:

The performance of a limited Round-Robin (RR) rule is studied in order to clarify the characteristics of a realistic sharing model of a processor. Under the limited RR rule, the processor allocates to each request a fixed amount of time, called a quantum, in a fixed order. The number of requests being allocated these quanta is kept below a fixed value. Arriving requests that cannot be allocated quanta because of this restriction are queued or rejected. Practical performance measures, such as the relationships between the mean sojourn time, the mean number of requests, or the loss probability and the quantum size, are evaluated via simulation. In the evaluation, the requested service time of an arriving request is converted into a number of quanta. One of these quanta is included in each RR cycle, that is, a series of quanta allocated to each request in a fixed order. The time required to complete the service of the arriving request can be evaluated from the number of RR cycles required to complete the service, the number of requests receiving service, and the quantum size. The increase or decrease in the number of quanta that are necessary before service is completed is then re-evaluated at the arrival or departure of other requests. Tracking these events and calculations enables us to analyze the performance of our limited RR rule. In particular, we obtain the most suitable quantum size, which minimizes the mean sojourn time, for the case in which the switching time for each quantum is considered.
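
A toy calculation of the completion time of one request under an RR cycle like the one described above, ignoring arrivals and departures during its service; the quantum sizes, switching time and workload are assumptions, not the paper's simulation parameters.

```python
import math

def rr_completion_time(service_time, n_sharing, quantum, switch_time):
    """Completion time of one request that is served last in each RR cycle,
    with n_sharing requests in total and a per-quantum switching cost."""
    n_quanta = math.ceil(service_time / quantum)          # quanta this request needs
    cycle_len = n_sharing * (quantum + switch_time)       # length of one full RR cycle
    final_work = service_time - (n_quanta - 1) * quantum  # work done in its last quantum
    # (n_quanta - 1) full cycles, then wait for the other requests' quanta,
    # then finish the remaining work.
    return (n_quanta - 1) * cycle_len + (n_sharing - 1) * (quantum + switch_time) + final_work

# Sweep the quantum size to see the trade-off between switching overhead and waiting.
for q in (0.5, 1.0, 2.0, 4.0):
    t = rr_completion_time(service_time=10.0, n_sharing=4, quantum=q, switch_time=0.05)
    print(f"quantum = {q}: completion time = {t:.2f}")
```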

Keywords: Limited RR rule, quantum, processor sharing, sojourn time, performance measures, simulation, loss probability.

PDF Downloads 1245
671 Spectral Amplitude Coding Optical CDMA: Performance Analysis of PIIN Reduction Using VC Code Family

Authors: Hassan Yousif Ahmed, Ibrahima Faye, N.M.Saad, S.A. Aljined

Abstract:

Multi-user interference (MUI) is the main cause of system deterioration in Spectral Amplitude Coding Optical Code Division Multiple Access (SAC-OCDMA) systems. MUI increases with the number of simultaneous users, resulting in a higher bit error probability, and limits the maximum number of simultaneous users. On the other hand, the phase induced intensity noise (PIIN) problem, which originates from the spontaneous emission of the broadband source under MUI and severely limits the system performance, should be addressed as well. Since MUI is caused by the interference of simultaneous users, reducing the MUI value as much as possible is desirable. In this paper, an extensive study of the system performance, specified by MUI and PIIN reduction, is carried out. Vector Combinatorial (VC) code families are adopted as signature sequences for the performance analysis, and a comparison with reported codes is performed. The results show that, when the received power increases, the PIIN noise for all the codes increases linearly. The results also show that the effect of PIIN can be minimized by increasing the code weight, which preserves an adequate signal-to-noise ratio over the bit error probability. A comparative study between the proposed code and existing codes such as Modified Frequency Hopping (MFH) and Modified Quadratic Congruence (MQC) has been carried out.
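
For orientation, bit error probability in SAC-OCDMA analyses is commonly read off from the signal-to-noise ratio through the Gaussian approximation Pe = (1/2) erfc(sqrt(SNR/8)). The sketch below applies that relation to placeholder SNR values; it is not the paper's PIIN-limited SNR expression for the VC code family.

```python
import math

def ber_from_snr(snr):
    """Gaussian-approximation bit error probability commonly used in
    SAC-OCDMA performance analyses: Pe = 0.5 * erfc(sqrt(SNR / 8))."""
    return 0.5 * math.erfc(math.sqrt(snr / 8.0))

# Placeholder SNR values (dB), e.g. as received power or user count changes
for snr_db in (10, 15, 20, 25):
    snr = 10 ** (snr_db / 10)          # convert dB to linear scale
    print(f"SNR = {snr_db} dB -> BER ≈ {ber_from_snr(snr):.2e}")
```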

Keywords: FBG, MUI, PIIN, SAC-OCDMA, VCC.

PDF Downloads 2209
670 Error Rate Probability for Coded MQAM with MRC Diversity in the Presence of Cochannel Interferers over Nakagami-Fading Channels

Authors: J.S. Ubhi, M.S. Patterh, T.S. Kamal

Abstract:

Exact expressions for the bit-error probability (BEP) for coherent square detection of uncoded and coded M-ary quadrature amplitude modulation (MQAM), using an array of antennas with maximal ratio combining (MRC) in a flat-fading, interference-limited system in a Nakagami-m fading environment, are derived. The analysis assumes an arbitrary number of independent and identically distributed Nakagami interferers. The results for coded MQAM are computed numerically for the case of the (24,12) extended Golay code and compared with uncoded MQAM by plotting error probabilities versus the average signal-to-interference ratio (SIR) for various values of the order of diversity N and the number of distinct symbols M, in order to examine the effect of cochannel interferers on the performance of the digital communication system. The diversity gains and net gains are also presented in tabular form in order to examine the performance of the digital communication system in the presence of interferers as the order of diversity increases. The analytical results presented in this paper are expected to provide useful information needed for the design and analysis of digital communication systems with space diversity in wireless fading channels.
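
The sketch below is not the paper's closed-form coded-MQAM analysis; it is a simpler Monte Carlo illustration of how MRC diversity over Nakagami-m fading is typically simulated, here for uncoded 4-QAM (QPSK) with the aggregate interference modelled as complex Gaussian. All parameter values are assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

def nakagami_gain(m, omega, size):
    """Nakagami-m amplitude: square root of a Gamma(m, omega/m) power."""
    return np.sqrt(rng.gamma(m, omega / m, size))

def qpsk_ber_mrc(m=2.0, n_branches=2, sir_db=10.0, n_bits=200_000):
    sir = 10 ** (sir_db / 10)
    bits = rng.integers(0, 2, (n_bits, 2))
    sym = ((2 * bits[:, 0] - 1) + 1j * (2 * bits[:, 1] - 1)) / np.sqrt(2)
    h = nakagami_gain(m, 1.0, (n_bits, n_branches)) * \
        np.exp(1j * rng.uniform(0, 2 * np.pi, (n_bits, n_branches)))
    # Aggregate interference-plus-noise per branch modelled as complex Gaussian (assumption).
    noise = (rng.normal(size=(n_bits, n_branches)) +
             1j * rng.normal(size=(n_bits, n_branches))) * np.sqrt(0.5 / sir)
    r = h * sym[:, None] + noise
    combined = np.sum(np.conj(h) * r, axis=1)        # maximal ratio combining
    bits_hat = np.column_stack((combined.real > 0, combined.imag > 0)).astype(int)
    return np.mean(bits_hat != bits)

for n in (1, 2, 4):
    print(f"N = {n} branches: BER ≈ {qpsk_ber_mrc(n_branches=n):.4f}")
```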

Keywords: Cochannel interference, maximal ratio combining, Nakagami-m fading, wireless digital communications.

PDF Downloads 1853
669 A Mobile Multihop Relay Dynamic TDD Scheme for Cellular Networks

Authors: Jong-Moon Chung, Hyung-Weon Cho, Ki-Yong Jin, Min-Hee Cho

Abstract:

In this paper, we present an analytical framework for the evaluation of the uplink performance of multihop cellular networks based on dynamic time division duplex (TDD). New wireless broadband protocols, such as WiMAX, WiBro, and 3G-LTE, apply TDD, and mobile communication protocols under standardization (e.g., IEEE 802.16j) are investigating mobile multihop relay (MMR) as a future technology. In this paper a novel MMR TDD scheme is presented, where the dynamic range of the frame is shared between traffic resources of asymmetric nature and multihop relaying. The mobile communication channel interference model comprises inner and co-channel interference (CCI). The performance analysis focuses on the uplink because the effects of dynamic resource allocation cause significant performance degradation only in the uplink, compared to time division multiple access (TDMA) schemes, due to CCI [1]-[3], whereas the downlink results are the same or better. The analysis is based on the signal-to-interference power ratio (SIR) outage probability of dynamic TDD (D-TDD) and TDMA systems, which are the most widespread mobile communication multi-user control techniques. This paper presents the uplink SIR outage probability with multihop results and shows that the dynamic TDD scheme applying MMR can provide a performance improvement compared to single-hop applications if executed properly.
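
The SIR outage probability the analysis is built on is simply P(SIR < threshold). As a stand-in for the paper's analytical framework, the sketch below estimates it by Monte Carlo with lognormal-shadowed co-channel interferers; the mean powers, shadowing spread and threshold are assumptions.

```python
import numpy as np

rng = np.random.default_rng(7)

def sir_outage(threshold_db=3.0, n_interferers=4, shadow_std_db=8.0, n_trials=100_000):
    """P(SIR < threshold) with lognormal shadowing on the desired link and on
    each co-channel interferer (mean received powers are assumptions)."""
    desired_db = 0.0 + rng.normal(0, shadow_std_db, n_trials)            # 0 dB mean
    interf_db = -10.0 + rng.normal(0, shadow_std_db, (n_trials, n_interferers))
    interf_lin = np.sum(10 ** (interf_db / 10), axis=1)                  # sum in linear scale
    sir_db = desired_db - 10 * np.log10(interf_lin)
    return np.mean(sir_db < threshold_db)

for k in (1, 2, 4, 8):
    print(f"{k} co-channel interferers -> outage ≈ {sir_outage(n_interferers=k):.3f}")
```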

Keywords: Co-Channel Interference, Dynamic TDD, Mobile Multihop Relay, Cellular Network, Time Division Multiple Access.

PDF Downloads 2342
668 Pragati Node Popularity (PNP) Approach to Identify Congestion Hot Spots in MPLS

Authors: E. Ramaraj, A. Padmapriya

Abstract:

In large Internet backbones, service providers typically have to explicitly manage the traffic flows in order to optimize the use of network resources. This process is often referred to as traffic engineering (TE). Common objectives of traffic engineering include balancing the traffic distribution across the network and avoiding congestion hot spots. Raj P H and S V K Raja designed a Bayesian network approach to identify congestion hot spots in MPLS. In this approach, a Conditional Probability Distribution (CPD) is specified for every node in the network. Based on the CPD, the congestion hot spots are identified. The traffic can then be distributed so that no link in the network is either over-utilized or under-utilized. Although the Bayesian network approach has been implemented in operational networks, it has a number of well-known scaling issues. This paper proposes a new approach, which we call the Pragati (meaning Progress) Node Popularity (PNP) approach, to identify the congestion hot spots from the network topology alone. In the new Pragati Node Popularity approach, IP routing runs natively over the physical topology rather than depending on the CPD of each node as in a Bayesian network. We first illustrate our approach with a simple network, and then present a formal analysis of the Pragati Node Popularity approach. Our PNP approach shows that, for any given network, it identifies exactly the same hot spots as the Bayesian approach with minimum effort. We further extend the result to a more generic one: it holds for any network topology, even when the network is loopy. A theoretical insight of our result is that the optimal routing is always shortest path routing with respect to some consideration of hot spots in the networks.
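
The abstract does not define "node popularity" precisely. As an illustrative assumption, the sketch below counts how often each node lies on the shortest paths of a toy topology (a betweenness-style popularity computed from the topology alone) and flags the busiest nodes as candidate hot spots; the topology is invented, not the paper's example network.

```python
from itertools import combinations
from collections import deque, Counter

# Toy topology as an adjacency list (an assumption, not the paper's example).
graph = {
    "A": ["B", "C"], "B": ["A", "D"], "C": ["A", "D"],
    "D": ["B", "C", "E"], "E": ["D", "F"], "F": ["E"],
}

def shortest_path(g, src, dst):
    """Plain BFS shortest path between two nodes."""
    prev, seen, q = {src: None}, {src}, deque([src])
    while q:
        u = q.popleft()
        if u == dst:
            path = []
            while u is not None:
                path.append(u)
                u = prev[u]
            return path[::-1]
        for v in g[u]:
            if v not in seen:
                seen.add(v)
                prev[v] = u
                q.append(v)
    return []

popularity = Counter()
for s, t in combinations(graph, 2):
    for node in shortest_path(graph, s, t)[1:-1]:   # interior nodes only
        popularity[node] += 1

print(popularity.most_common())   # most "popular" nodes = candidate congestion hot spots
```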

Keywords: Conditional Probability Distribution, Congestion hotspots, Operational Networks, Traffic Engineering.

PDF Downloads 1985
667 Workstation Design Based On Ergonomics in Animal Feed Packing Process

Authors: Pirutchada Musigapong, Wantanee Phanprasit

Abstract:

The intention of this study was to design a probability-optimized sewing-sack workstation based on ergonomics for productivity improvement and for decreasing musculoskeletal disorders. The physical dimensions of two workers were used to design the new workstation. The physical dimensions are (1) sitting height, (2) mid-shoulder height sitting, (3) shoulder breadth, (4) knee height, (5) popliteal height, (6) hip breadth and (7) buttock-knee length. The 5th percentile of buttock-knee length sitting (51 cm), the 50th percentile of mid-shoulder height sitting (62 cm) and the 95th percentiles of popliteal height (43 cm) and hip breadth (45 cm) were applied to design the workstation for the sewing-sack operator, and the other dimensions were used to adjust the components of this workstation. The RULA risk assessment scores before and after using the probability-optimized workstation were 7 and 7, while the REBA scores were 11 and 5, respectively. A body discomfort and abnormality index was used to assess the muscle fatigue of operators before the workstation adjustment; fatigue was found in the neck muscles, the arm muscles, the back muscles and the lower back muscles. Therefore, extension and flexion exercises were applied to relieve musculoskeletal stresses. The workers exercised for 15 minutes before the beginning and at the end of work for 5 days. After that, the capability of the workers' flexion and extension muscles increased in three muscle groups (arm, leg, and back muscles).

Keywords: Animal feed, anthropometry, ergonomics, sewing sack, workstation design.

PDF Downloads 2429
666 A Hybrid Expert System for Generating Stock Trading Signals

Authors: Hosein Hamisheh Bahar, Mohammad Hossein Fazel Zarandi, Akbar Esfahanipour

Abstract:

In this paper, a hybrid expert system is developed by using fuzzy genetic network programming with reinforcement learning (GNP-RL). In this system, the frame-based structure of the system uses the trading rules extracted by GNP. These rules are extracted by using technical indices of the stock prices in the training time period. For developing this system, we applied fuzzy node transition and decision making in both the processing and judgment nodes of GNP-RL. Consequently, using these methods not only increased the accuracy of node transition and decision making in GNP's nodes, but also extended GNP's binary signals to ternary trading signals. In other words, in our proposed fuzzy GNP-RL model, a No Trade signal is added to the conventional Buy and Sell signals. Finally, the obtained rules are used in a frame-based system implemented in Kappa-PC software. This developed trading system has been used to generate trading signals for ten companies listed on the Tehran Stock Exchange (TSE). The simulation results in the testing time period show that the developed system has more favorable performance in comparison with the Buy and Hold strategy.

Keywords: Fuzzy genetic network programming, hybrid expert system, technical trading signal, Tehran stock exchange.

PDF Downloads 1858
665 Thermo-Mechanical Characterization of MWCNTs-Modified Epoxy Resin

Authors: M. Dehghan, R. Al-Mahaidi, I. Sbarski

Abstract:

An industrial epoxy adhesive used in Carbon Fiber Reinforced Polymer (CFRP) strengthening systems was modified by dispersing multi-walled carbon nanotubes (MWCNTs). Nanocomposites were fabricated using the solvent-assisted dispersion method and ultrasonic mixing. Thermogravimetric analysis (TGA), dynamic mechanical analysis (DMA) and tensile tests were conducted to study the effect of nanotube dispersion on the thermal and mechanical properties of the epoxy composite. Experimental results showed a substantial enhancement in the decomposition temperature and tensile properties of the epoxy composite, while the glass transition temperature (Tg) was slightly reduced due to the solvent effect. The morphology of the epoxy nanocomposites was investigated by SEM. It was shown that using the solvent improves the nanotube dispersion. However, at contents higher than 2 wt.%, the nanotubes started to re-bundle in the epoxy matrix, which negatively affected the final properties of the epoxy composite.

Keywords: Carbon Fiber Reinforced Polymer, Epoxy, Multi-Walled Carbon Nanotube, Glass Transition Temperature.

PDF Downloads 3354
664 A Study on Mechanical Properties of Fiberboard Made of Durian Rind through Latex with Phenolic Resin as Binding Agent

Authors: W. Wiyaratn, A. Watanapa

Abstract:

This study was aimed at investigating the possibility of producing fiberboard made of durian rind bonded with latex and phenolic resin as the binding agent. The durian rind underwent a boiling process with NaOH [7], [8] and then the fiber from the durian rind was formed into fiberboard through heat pressing. This means that durian rind could be used as a replacement for plywood in the plywood industry, using durian fiber as a composite material with an adhesive substance. At first, the durian rind was split, exposed to light, boiled and steamed in order to obtain durian fiber. Then, fiberboard was tested at densities of 600 kg/m3 and 800 kg/m3 in order to find a suitable ratio of durian fiber to latex. Afterwards, the mechanical properties were tested according to the ASTM and JIS A5905-1994 standards. After the suitable ratio was determined, the test results were compared with medium density fiberboard (MDF) and other related research studies. According to the results, for fiberboard made of durian rind with latex and phenolic resin at a density of 800 kg/m3 and a ratio of 1:1, the moisture content was measured to be 5.05%, with a specific gravity (ASTM D 2395-07a) of 0.81, a density (JIS A 5905-1994) of 0.88 g/cm3, and tensile strength, hardness (ASTM D2240) and flexibility or elongation at break yielding values similar to those of medium density fiberboard (MDF).

Keywords: Durian rind, latex, phenolic resin, medium density fiberboard

PDF Downloads 3930
663 Is HR in a State of Transition? An International Comparative Study on the Development of HR Competencies

Authors: Barbara Covarrubias Venegas, Sabine Groblschegg, Bernhard Klaus, Julia Domnanovich

Abstract:

Research Objectives: The roles and activities of Human Resource Management (HRM) have changed a lot in the past years. Driven by a changing environment and therefore new business requirements, the scope of human resource (HR) activities has widened. The extent to which these activities should focus on strategic issues to support the long-term success of a company has been discussed in science for many years. As many economies of Central and Eastern Europe (CEE) experienced a phase of transition after the socialist era and are now recovering from the 2008 global crisis, it is necessary to examine the current state of HR positioning. Furthermore, a trend can be noticed in HR work developing from rather administrative units to strategic partners of management. This leads to the question of better understanding the underlying competencies which are necessary to support organisations. This topic was addressed by the international study "HR Competencies in international comparison". The quantitative survey was conducted by the Institute for Human Resources & Organisation of FHWien University of Applied Science of WKW (A) in cooperation with partner universities in Bosnia-Herzegovina, Croatia, Serbia and Slovenia. Methodology: Using the questionnaire developed by Dave Ulrich, we tested whether the HR Competency model can be used for Austria, Bosnia and Herzegovina, Croatia, Serbia and Slovenia. After performing confirmatory and exploratory factor analysis on the whole data set containing all five countries, we could clearly distinguish four competencies. In a further step, our analysis focused on median and mean comparisons between the HR competency dimensions. Conclusion: Our literature review, in alignment with other studies, shows a relatively rapid pace of development of HR roles and HR competencies in BCSS in the past decades. Comparing data from BCSS and Austria, we can still notice that, as regards strategic orientation, there is a lack in the BCSS countries; thus competencies are not as developed as in Austria. This leads us to the tentative conclusion that HR has undergone a rapid change but is still in a state of transition from being a rather administrative unit to performing the role of a strategic partner.

Keywords: Comparative study, HR competencies, HRM, HR Roles.

PDF Downloads 2161
662 Advanced Numerical and Analytical Methods for Assessing Concrete Sewers and Their Remaining Service Life

Authors: Amir Alani, Mojtaba Mahmoodian, Anna Romanova, Asaad Faramarzi

Abstract:

Pipelines are extensively used engineering structures which convey fluid from one place to another. Most of the time, pipelines are placed underground and are encumbered by soil weight and traffic loads. Corrosion of the pipe material is the most common form of pipeline deterioration and should be considered in both the strength and serviceability analysis of pipes. This research focuses on concrete pipes in sewage systems (concrete sewers). It first investigates how to include the effect of corrosion, as a time-dependent process of deterioration, in the structural and failure analysis of this type of pipe. Then three probabilistic time-dependent reliability analysis methods, namely first passage probability theory, the gamma distributed degradation model and the Monte Carlo simulation technique, are discussed and developed. Sensitivity analysis indexes which can be used to identify the most important parameters that affect pipe failure are also discussed. The reliability analysis methods developed in this paper serve as rational tools for decision makers with regard to the strengthening and rehabilitation of existing pipelines. The results can be used to obtain a cost-effective strategy for the management of the sewer system.
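
A minimal sketch of the gamma-distributed degradation idea: simulate stationary gamma-process increments of corrosion depth and estimate, year by year, the probability that a critical loss of wall thickness is exceeded (a first-passage-style failure probability). All parameter values below are assumptions, not the paper's calibrated model.

```python
import numpy as np

rng = np.random.default_rng(3)

def gamma_process_failure_prob(years=50, shape_per_yr=0.8, scale=0.5,
                               critical_depth=15.0, n_paths=20_000):
    """Stationary gamma process: yearly corrosion increments ~ Gamma(shape, scale).
    Returns P(cumulative corrosion depth > critical_depth) for each year."""
    increments = rng.gamma(shape_per_yr, scale, (n_paths, years))
    depth = np.cumsum(increments, axis=1)            # corrosion depth paths [mm]
    return np.mean(depth > critical_depth, axis=0)   # failure probability by year

pf = gamma_process_failure_prob()
for yr in (10, 25, 50):
    print(f"year {yr}: P(failure) ≈ {pf[yr - 1]:.4f}")
```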

Keywords: Reliability analysis, service life prediction, Monte Carlo simulation method, first passage probability theory, gamma distributed degradation model.

PDF Downloads 1263
661 Managing Iterations in Product Design and Development

Authors: K. Aravindhan, Trishit Bandyopadhyay, Mahesh Mehendale, Supriya Kumar De

Abstract:

The inherent iterative nature of product design and development poses a significant challenge to reducing product design and development (PD) time. In order to shorten the time to market, organizations have adopted concurrent development, where multiple specialized tasks and design activities are carried out in parallel. The iterative nature of the work, coupled with the overlap of activities, can result in unpredictable times to completion and significant rework. Many products have missed the time-to-market window due to unanticipated, or rather unplanned, iteration and rework. The iterative and often overlapped processes introduce greater amounts of ambiguity into design and development, where the traditional methods and tools of project management provide less value. In this context, identifying critical metrics to understand the iteration probability is an open research area where a significant contribution can be made, given that iteration has been the key driver of cost and schedule risk in PD projects. Two important questions that the proposed study attempts to address are: Can we predict and identify the number of iterations in a product development flow? Can we provide managerial insights for better control over iteration? The proposal introduces the concept of decision points and, using this concept, intends to develop metrics that can provide managerial insights into iteration predictability. By characterizing the product development flow as a network of decision points, the proposed research intends to delve further into iteration probability and attempts to provide more clarity.
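
One simple way to reason about iteration counts at a decision point: if the review sends the work back with probability p independently on each pass, the expected number of passes is 1/(1-p) (a geometric distribution). The sketch below compares that closed form with a small simulation; the rework probabilities are illustrative assumptions, not a metric proposed by the authors.

```python
import random

random.seed(0)

def simulate_passes(p_rework, n_runs=100_000):
    """Average number of passes through one decision point when each
    review sends the work back with probability p_rework."""
    total = 0
    for _ in range(n_runs):
        passes = 1
        while random.random() < p_rework:   # rework loop
            passes += 1
        total += passes
    return total / n_runs

for p in (0.1, 0.3, 0.5):
    print(f"p_rework = {p}: simulated {simulate_passes(p):.2f}, "
          f"closed form {1 / (1 - p):.2f}")
```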

Keywords: Decision Points, Iteration, Product Design, Rework.

PDF Downloads 2191
660 Experimental Investigation of Surface Roughness Effect on Single Phase Fluid Flow and Heat Transfer in Micro-Tube

Authors: Mesbah. M. Salem, Mohamed. H. Elhsnawi, Saleh B. Mohamed

Abstract:

An experimental investigation was conducted to study the effect of surface roughness on the friction factor and heat transfer characteristics of single-phase fluid flow in a stainless steel micro-tube with a diameter of 0.85 mm and an average internal surface roughness of 1.7 μm (relative surface roughness of 0.002). Distilled water and R134a were used as the working fluids, and testing was conducted at Reynolds numbers ranging from 100 to 10,000, covering laminar, transition and turbulent flow conditions. The experiments were conducted with the micro-tube oriented horizontally and with uniform heat fluxes applied at the test section. The results indicated that the friction factor of both water and R134a can be predicted by the Hagen-Poiseuille equation for laminar flow and by the modified Miller correlation for turbulent flow and the early transition from laminar to turbulent flow. The heat transfer results for water and R134a were in good agreement with conventional theory in the laminar flow region and lower than Adams' correlation in the turbulent flow region, which deviates from conventional theory.
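
A small sketch of how such friction-factor predictions are typically made: Hagen-Poiseuille (f = 64/Re) in the laminar range and, as a stand-in for the modified Miller correlation used in the paper, the Blasius relation for smooth turbulent flow. The crossover Reynolds number and the use of Blasius are assumptions, not the paper's correlation.

```python
def friction_factor(re, re_transition=2300):
    """Darcy friction factor: Hagen-Poiseuille for laminar flow,
    Blasius (a stand-in for the paper's modified Miller correlation)
    for turbulent flow in a smooth tube."""
    if re < re_transition:
        return 64.0 / re                 # Hagen-Poiseuille, laminar
    return 0.316 * re ** -0.25           # Blasius, turbulent (assumed stand-in)

for re in (100, 1000, 2300, 5000, 10000):
    print(f"Re = {re:5d} -> f = {friction_factor(re):.4f}")
```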

Keywords: Pressure drop, heat transfer, distilled water, R134a, micro-tube, laminar and turbulent flow.

PDF Downloads 3854
659 Evaluating Probable Bending of Frames for Near-Field and Far-Field Records

Authors: Majid Saaly, Shahriar Tavousi Tafreshi, Mehdi Nazari Afshar

Abstract:

Most reinforced concrete structures designed only for gravity loads have large transverse reinforcement spacing values and therefore suffer severe failure after intense ground movements. The main goal of this paper is to compare the shear and axial failure of concrete bending frames typical of Tehran, using Incremental Dynamic Analysis (IDA) under near-field and far-field records. For this purpose, IDA of 5-, 10-, and 15-story concrete structures was carried out under seven far-fault records and five near-fault records. The results show that, in two-dimensional models of short-rise, mid-rise and high-rise reinforced concrete frames located on Type-3 soil, increasing the spacing of the transverse reinforcement can increase the maximum inter-story drift ratio values by up to 37%. According to the results for the 5-, 10-, and 15-story reinforced concrete models located on Type-3 soil, records with characteristics such as fling-step and directivity create larger maximum inter-story drift values than far-fault earthquakes. The results indicated that, in the case of seismic excitation under earthquakes encompassing directivity or fling-step, the probability values of failure and the rate of increase of the failure probability are much smaller than the corresponding values for far-fault earthquakes. However, for near-fault records, the probability of exceedance occurs at lower seismic intensities compared to far-fault records.
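
Fragility curves of the kind produced from IDA results are commonly taken as lognormal in the intensity measure. A minimal sketch of evaluating the probability of exceedance at a given intensity with an assumed median capacity and dispersion is given below; the parameter values are placeholders, not the paper's results.

```python
import math

def fragility(im, median=0.6, beta=0.4):
    """Lognormal fragility: P(exceeding the limit state | intensity measure im),
    with median capacity `median` (g) and log-standard deviation `beta` (assumed)."""
    z = math.log(im / median) / beta
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))   # standard normal CDF

for im in (0.2, 0.4, 0.6, 0.8, 1.0):
    print(f"Sa = {im:.1f} g -> P(exceedance) ≈ {fragility(im):.3f}")
```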

Keywords: Directivity, fling-step, fragility curve, IDA, inter-story drift ratio.

PDF Downloads 364
658 Considerations for Effectively Using Probability of Failure as a Means of Slope Design Appraisal for Homogeneous and Heterogeneous Rock Masses

Authors: Neil Bar, Andrew Heweston

Abstract:

Probability of failure (PF) often appears alongside factor of safety (FS) in design acceptance criteria for rock slope, underground excavation and open pit mine designs. However, the design acceptance criteria generally provide no guidance on how PF should be calculated for homogeneous and heterogeneous rock masses, or on what qualifies as a ‘reasonable’ PF assessment for a given slope design. Observational and kinematic methods were widely used in the 1990s until advances in computing permitted the routine use of numerical modelling. In the 2000s and early 2010s, PF in numerical models was generally calculated using the point estimate method. More recently, some limit equilibrium analysis software offers statistical parameter inputs along with Monte Carlo or Latin hypercube sampling methods to automatically calculate PF. Factors including rock type and density, weathering and alteration, intact rock strength, rock mass quality and shear strength, the location and orientation of geologic structure, the shear strength of geologic structure and groundwater pore pressure influence the stability of rock slopes. Significant engineering and geological judgment, interpretation and data interpolation are usually applied in determining these factors and amalgamating them into a geotechnical model which can then be analysed. Most factors are estimated ‘approximately’ or with allowances for some variability rather than ‘exactly’. When it comes to numerical modelling, some of these factors are then treated deterministically (i.e. as exact values), while others have probabilistic inputs based on the user’s discretion and understanding of the problem being analysed. This paper discusses the importance of understanding the key aspects of slope design for homogeneous and heterogeneous rock masses and how they can be translated into reasonable PF assessments where the data permit. A case study from a large open pit gold mine in a complex geological setting in Western Australia is presented to illustrate how PF can be calculated using different methods and can produce markedly different results. Ultimately, sound engineering judgment and logic are often required to decipher the true meaning and significance (if any) of some PF results.
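
A minimal sketch of the point estimate method mentioned above (Rosenblueth's two-point scheme for uncorrelated variables): evaluate the factor of safety at the 2^n combinations of mean ± standard deviation, estimate its mean and variance, and, under an assumed normal distribution of FS, read off a probability of failure. The FS function and parameter statistics below are illustrative assumptions, not the case-study inputs.

```python
import math
from itertools import product

def fs(cohesion, tan_phi):
    """Illustrative factor-of-safety function of two random variables (assumed)."""
    return 0.04 * cohesion + 1.2 * tan_phi

means = {"cohesion": 25.0, "tan_phi": 0.58}   # assumed means
stds  = {"cohesion": 5.0,  "tan_phi": 0.08}   # assumed standard deviations

# Rosenblueth two-point estimates: evaluate FS at every (mean +/- std) corner,
# each corner having equal weight 1 / 2^n for uncorrelated variables.
corners = list(product([-1, 1], repeat=len(means)))
values = [fs(means["cohesion"] + s1 * stds["cohesion"],
             means["tan_phi"] + s2 * stds["tan_phi"]) for s1, s2 in corners]
w = 1.0 / len(values)
mean_fs = sum(w * v for v in values)
var_fs = sum(w * v * v for v in values) - mean_fs ** 2

# Probability of failure assuming FS is normally distributed (a common assumption).
beta = (mean_fs - 1.0) / math.sqrt(var_fs)
pf = 0.5 * math.erfc(beta / math.sqrt(2.0))
print(f"mean FS = {mean_fs:.2f}, std = {math.sqrt(var_fs):.2f}, PF ≈ {pf:.4f}")
```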

Keywords: Probability of failure, point estimate method, Monte-Carlo simulations, sensitivity analysis, slope stability.

PDF Downloads 1196
657 Nanostructure of Gamma-Alumina Prepared by a Modified Sol-Gel Technique

Authors: Débora N. Zambrano, Marina O. Gosatti, Leandro M. Dufou, Daniel A. Serrano, M. Mónica Guraya, Soledad Perez-Catán

Abstract:

Nanoporous γ-Al2O3 samples were synthesized via a sol-gel technique, introducing changes into Yoldas' method. The aim of the work was to achieve effective control of the nanostructure properties and morphology of the final γ-Al2O3. The influence of the reagent temperature during hydrolysis was evaluated for water at 5 ºC and 98 ºC, and for the alkoxide at -18 ºC and room temperature. Sol-gel transitions were performed at 120 ºC and at room temperature. All γ-Al2O3 samples were characterized by X-ray diffraction, nitrogen adsorption and thermal analysis. Our results showed that the temperature of both the water and the alkoxide has little influence on the nanostructure of the final γ-Al2O3, giving a structure very similar to that of samples obtained by the reference method as long as a reaction temperature above 75 ºC is reached soon enough. XRD characterization showed diffraction patterns corresponding to γ-Al2O3 for all samples. BET specific area values (253-280 m2/g) were also similar to those obtained by Yoldas' original method. The temperature of the sol-gel transition does not affect the resulting sample structure, and crystalline boehmite particles were identified in all dried gels. We analyzed the reproducibility of the samples' structure by preparing different samples under identical conditions; we found that performing the sol-gel transition at 120 ºC favors the production of more reproducible samples and also significantly reduces the duration of the sol-gel reaction.

Keywords: Nanostructure alumina, boehmite, sol-gel technique, N2 adsorption/desorption isotherm, pore size distribution, BET area.

PDF Downloads 1342
656 Basic Science Medical Students’ Perception of a Formative Peer Assessment Model for Reinforcing the Learning of Physical Examination Skills During the COVID-19 Pandemic Online Learning Period

Authors: Neilal A. Isaac, Madison Edwards, Kirthana Sugunathevan, Mohan Kumar

Abstract:

The COVID-19 pandemic challenged the education system and forced medical schools to transition to online learning. With this transition, one of the major concerns for students and educators was ensuring that physical examination (PE) skills were still being mastered. Thus, a formative peer assessment model was designed to enhance the learning of PE skills during the COVID-19 pandemic in the online learning landscape. Year-1 and year-2 students enrolled in clinical skills courses at the University of Medicine and Health Sciences, St. Kitts, were asked to record themselves demonstrating PE skills with a healthy volunteer patient after every skills class. Each student was assigned to exchange feedback with one peer in the course. At the end of the first two semesters of this learning activity, a cross-sectional survey was conducted for the two cohorts of year-1 and year-2 students. The year-1 cohort most frequently rated the peer assessment exercise as 4 on a 5-point Likert scale, with a mean score of 3.317 [2.759, 3.875]. The year-2 cohort most frequently rated the peer assessment exercise as 4 on a 5-point Likert scale, with a mean score of 3.597 [2.978, 4.180]. Students indicated that guidance from faculty, flexible deadlines, and detailed and timely feedback from peers were areas for improvement in this process.

Keywords: COVID-19 pandemic, distant learning, online medical education, peer assessment, physical examination.

PDF Downloads 386
655 A Novel Neighborhood Defined Feature Selection on Phase Congruency Images for Recognition of Faces with Extreme Variations

Authors: Satyanadh Gundimada, Vijayan K Asari

Abstract:

A novel feature selection strategy to improve recognition accuracy on faces that are affected by non-uniform illumination, partial occlusions and varying expressions is proposed in this paper. This technique is applicable especially in scenarios where the possibility of obtaining a reliable intra-class probability distribution is minimal due to a small number of training samples. Phase congruency features in an image are defined as the points where the Fourier components of that image are maximally in phase. These features are invariant to the brightness and contrast of the image under consideration. This property makes it possible to achieve the goal of lighting-invariant face recognition. Phase congruency maps of the training samples are generated and a novel modular feature selection strategy is implemented. Smaller sub-regions from a predefined neighborhood within the phase congruency images of the training samples are merged to obtain a large set of features. These features are arranged in order of increasing distance between the sub-regions involved in the merging. The assumption behind the proposed implementation of the region merging and arrangement strategy is that local dependencies among the pixels are more important than global dependencies. The obtained feature sets are then arranged in decreasing order of discriminating capability using a criterion function, which is the ratio of the between-class variance to the within-class variance of the sample set in the PCA domain. The results indicate a high improvement in classification performance compared to baseline algorithms.
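
The criterion used to rank feature sets, the ratio of between-class variance to within-class variance, can be sketched for a single one-dimensional feature as follows; the toy class samples are assumptions, not the authors' PCA-domain features.

```python
import numpy as np

def between_within_ratio(class_samples):
    """Ratio of between-class variance to within-class variance for one feature.
    class_samples: list of 1-D arrays, one array per class."""
    all_x = np.concatenate(class_samples)
    grand_mean = all_x.mean()
    n_total = all_x.size
    between = sum(len(c) * (c.mean() - grand_mean) ** 2 for c in class_samples) / n_total
    within  = sum(len(c) * c.var() for c in class_samples) / n_total
    return between / within

rng = np.random.default_rng(5)
good_feature = [rng.normal(0, 1, 50), rng.normal(3, 1, 50)]    # well-separated classes
weak_feature = [rng.normal(0, 1, 50), rng.normal(0.3, 1, 50)]  # overlapping classes
print(between_within_ratio(good_feature))   # large ratio -> more discriminative
print(between_within_ratio(weak_feature))   # small ratio -> less discriminative
```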

Keywords: Discriminant analysis, intra-class probability distribution, principal component analysis, phase congruency.

PDF Downloads 1849
654 Signing the First Packet in Amortization Scheme for Multicast Stream Authentication

Authors: Mohammed Shatnawi, Qusai Abuein, Susumu Shibusawa

Abstract:

Signature amortization schemes have been introduced for authenticating multicast streams, in which a single signature is amortized over several packets. The hash value of each packet is computed, and some hash values are appended to other packets, forming what is known as a hash chain. These schemes divide the stream into blocks, each block being a number of packets; the signature packet in these schemes is either the first or the last packet of the block. Amortization schemes are efficient solutions in terms of computation and communication overhead, especially in real-time environments. The main effective factor of amortization schemes is their hash chain construction. Some studies show that signing the first packet of each block reduces the receiver's delay and prevents DoS attacks, while other studies show that signing the last packet reduces the sender's delay. To our knowledge, there are no studies that show which is better, signing the first or the last packet, in terms of authentication probability and resistance to packet loss. In this paper we introduce another scheme for authenticating multicast streams that is robust against packet loss, reduces the overhead, and at the same time prevents the DoS attacks experienced by the receiver. Our scheme, the Multiple Connected Chain signing the First packet (MCF), appends the hash values of specific packets to other packets, and then appends some hashes to the signature packet, which is sent as the first packet in the block. This scheme is especially efficient in terms of the receiver's delay. We discuss and evaluate the performance of our proposed scheme against those that sign the last packet of the block.
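
A minimal sketch of the general amortization idea described above: hash each packet, append hashes to earlier packets, and sign only the first packet of the block. The simple "each packet carries the hash of the next" chain and the HMAC stand-in for a real digital signature are illustrative assumptions, not the authors' MCF chain construction.

```python
import hashlib, hmac

SIGNING_KEY = b"demo-key"   # stand-in for a real signature key (assumption)

def sign(data):
    """HMAC used here as a placeholder for a digital signature."""
    return hmac.new(SIGNING_KEY, data, hashlib.sha256).digest()

def build_block(payloads):
    """Chain packets back-to-front: each packet carries the hash of the next,
    and only the first packet of the block is signed."""
    packets, next_hash = [], b""
    for payload in reversed(payloads):
        body = payload + next_hash
        packets.append(body)
        next_hash = hashlib.sha256(body).digest()
    packets.reverse()
    return sign(packets[0]), packets

def verify_block(signature, packets):
    """Verify the signature on the first packet, then follow the hash chain."""
    if not hmac.compare_digest(signature, sign(packets[0])):
        return False
    for i in range(len(packets) - 1):
        expected = packets[i][-32:]                     # trailing hash field
        if hashlib.sha256(packets[i + 1]).digest() != expected:
            return False
    return True

sig, block = build_block([b"pkt-%d" % i for i in range(4)])
print(verify_block(sig, block))          # True
block[2] = b"tampered" + block[2][8:]
print(verify_block(sig, block))          # False: the chain breaks at the modified packet
```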

Keywords: multicast stream authentication, hash chain construction, signature amortization, authentication probability.

PDF Downloads 1516
653 Thermo-Mechanical Approach to Evaluate Softening Behavior of Polystyrene: Validation and Modeling

Authors: Salah Al-Enezi, Rashed Al-Zufairi, Naseer Ahmad

Abstract:

A thermo-mechanical technique was developed to determine the softening point temperature/glass transition temperature (Tg) of polystyrene exposed to high pressures. The design utilizes the ability of carbon dioxide to lower the glass transition temperature of polymers by acting as a plasticizer. In this apparatus, the sorption of carbon dioxide to induce softening of the polymer is performed as a function of temperature and pressure, and the extent of softening is measured in a three-point flexural bending mode. The polymer strip was placed in the cell in contact with a linear variable differential transformer (LVDT). CO2 was pumped into the cell from a supply cylinder to reach high pressure. The results clearly showed the full softening point of the samples, accompanied by a large deformation of the polymer strip. The deflection curves are initially relatively flat and then undergo a dramatic increase as the temperature is elevated. It was found that increasing the pressure of CO2 causes the temperature curves to shift to lower values by increments of about 45 K over the pressure range of 0-120 bar. The obtained experimental Tg values were validated against the values reported in the literature. Finally, it is concluded that the deflection model fits the generated experimental results consistently and attempts to describe in more detail how the central deflection of a thin polymer strip is affected by CO2 diffusion into the polymeric samples.

Keywords: Softening, high-pressure, polystyrene, CO2 diffusions.

PDF Downloads 661
652 The Threats of Deforestation, Forest Fire, and CO2 Emission toward Giam Siak Kecil Bukit Batu Biosphere Reserve in Riau, Indonesia

Authors: S. B. Rushayati, R. Meilani, R. Hermawan

Abstract:

A biosphere reserve is developed to create harmony among economic development, community development, and environmental protection through partnership between humans and nature. The Giam Siak Kecil Bukit Batu Biosphere Reserve (GSKBB BR) in Riau Province, Indonesia, is unique in that it has peat soil dominating the area, many springs essential for human livelihood, and high biodiversity. Furthermore, it is the only biosphere reserve covering privately managed production forest areas. In this research, we aimed at analyzing the threat of deforestation and forest fire, and the potential CO2 emission, at GSKBB BR. We used Landsat images, ArcView software, and ERDAS IMAGINE 8.5 software to conduct spatial analysis of land cover and land use changes, calculated CO2 emissions based on the emission potential of each land cover and land use type, and used simple linear regression to demonstrate the relation between CO2 emission potential and deforestation. The results showed that, besides the buffer zone and transition area, deforestation also occurred in the core area. Spatial analysis of land cover and land use changes for the years 2010, 2012, and 2014 revealed changes from natural forest and industrial plantation forest to other land use types, such as garden, mixed garden, settlement, paddy fields, burnt areas, and dry agricultural land. Deforestation in the core area, particularly in the Giam Siak Kecil Wildlife Reserve and Bukit Batu Wildlife Reserve, took the form of changes from natural forest into garden, mixed garden, shrubs, swamp shrubs, dry agricultural land, open area, and burnt area. In the buffer zone and transition area, changes also happened: what was once swamp forest changed into garden, mixed garden, open area, shrubs, swamp shrubs, and dry agricultural land. The spatial analysis indicated that the deforestation rate in the biosphere reserve from 2010 to 2014 had reached 16 119 ha/year. Besides deforestation, the threat to the biosphere reserve also came from forest fire. Forest fires in 2014 burned 101 723 ha of the area, of which 9 355 ha were in the core area and 92 368 ha in the buffer zone and transition area. Deforestation and forest fire had increased CO2 emissions by as much as 24 903 855 ton/year.

Keywords: Biosphere reserve, CO2 emission, deforestation, forest fire.

PDF Downloads 2144
651 Determination of Some Physical and Mechanical Properties of Pofaki Variety of Pea

Authors: M. Azadbakht, E. Ghajarjazi, E. Amiri, F. Abdigaol

Abstract:

In this research, the effect of moisture at three levels (47, 57, and 67% w.b.) on the physical properties of the Pofaki pea variety, including dimensions, geometric mean diameter, volume, sphericity index and surface area, was determined. The influence of the different moisture levels (47, 57 and 67% w.b.), two loading orientations (longitudinal and transverse) and three loading speeds (4, 6 and 8 mm min-1) on the mechanical properties of the pea, such as maximum deformation, rupture force, rupture energy, toughness and the power required to break the pea, was investigated. For the physical properties, it was observed that moisture changes had a significant effect at the 1% level on dimensions, geometric mean diameter, volume, sphericity index and surface area. For the mechanical properties, it was observed that moisture changes had a significant effect at the 1% level on maximum deformation, rupture force, rupture energy, toughness and the power to break. Loading speed had a significant effect on maximum deformation, rupture force and rupture energy at the 1% level, and on toughness at the 5% level. Loading orientation had a significant effect on maximum deformation, rupture force, rupture energy and toughness at the 1% level, and on power at the 5% level. The interaction of speed and orientation had a significant effect on rupture energy at the 1% level and on toughness at the 5% probability level. The interaction of moisture and speed had a significant effect on rupture force and rupture energy at the 1% level and on toughness at the 5% probability level. The interaction of orientation and moisture had a significant effect on rupture energy and toughness at the 1% level.

Keywords: Mechanical properties, Pea, Physical properties.

PDF Downloads 2339
650 Evaluation of the Mechanical Behavior of a Retaining Wall Structure on a Weathered Soil through Probabilistic Methods

Authors: P. V. S. Mascarenhas, B. C. P. Albuquerque, D. J. F. Campos, L. L. Almeida, V. R. Domingues, L. C. S. M. Ozelim

Abstract:

Retaining slope structures are increasingly considered in geotechnical engineering projects due to extensive urban growth. These kinds of engineering constructions may present instabilities over time and may require reinforcement or even rebuilding of the structure. In this context, statistical analysis is an important tool for decision making regarding retaining structures. This study addresses the failure probability of the construction of a retaining wall over the debris of an old, collapsed one. The new solution will extend approximately 350 m and will be located on the margins of Lake Paranoá, Brasília, the capital of Brazil. The building process must also account for the utilization of the ruins as a caisson. A series of in situ and laboratory experiments defined the local soil strength parameters. A Standard Penetration Test (SPT) campaign defined the in situ soil stratigraphy. The parameters obtained were also verified using soil data from a collection of master's and doctoral works from the University of Brasília, which concern a soil similar to the local one. Initial studies show that the concrete wall is the proper solution for this case, taking into account the technical, economic and deterministic analyses. On the other hand, in order to better analyze the statistical significance of the factors of safety obtained, a Monte Carlo analysis was performed for the concrete wall and two other initial solutions. A comparison between the statistical and risk results generated for the different solutions indicated that a gabion solution would better fit the financial and technical feasibility of the project.

Keywords: Economical analysis, probability of failure, retaining walls, statistical analysis.

PDF Downloads 1022