Search results for: Competitive Advantages.
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 1262


242 Spread Spectrum Code Estimation by Particle Swarm Algorithm

Authors: Vahid R. Asghari, Mehrdad Ardebilipour

Abstract:

In the context of spectrum surveillance, a new method to recover the code of a spread spectrum signal is presented, where the receiver has no knowledge of the transmitter's spreading sequence. In our previous paper, we used a genetic algorithm (GA) to recover the spreading code. Although genetic algorithms (GAs) are well known for their robustness in solving complex optimization problems, increasing the length of the code often leads to unacceptably slow convergence. To solve this problem, we introduce Particle Swarm Optimization (PSO) into code estimation for spread spectrum communication systems. In the search process for code estimation, the PSO algorithm offers rapid convergence to the global optimum without being trapped in local suboptima, together with good robustness to noise. In this paper we describe how to implement PSO as a component of a search algorithm for code estimation. Swarm intelligence offers a number of advantages due to the use of mobile agents, among them scalability, fault tolerance, adaptation, speed, modularity, autonomy, and parallelism. These properties make swarm intelligence very attractive for spread spectrum code estimation, and also suitable for a variety of other kinds of channels. Our results compare the swarm-based algorithm with genetic algorithms and show the performance of the PSO algorithm in the code estimation process.
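
A minimal sketch of how a binary PSO might be used to estimate a spreading code, assuming a correlation-based fitness against the received signal; the fitness function, code length and swarm settings below are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

# Illustrative sketch only: binary PSO for spreading-code estimation.
# fitness(), code length and swarm settings are assumptions, not the paper's setup.
rng = np.random.default_rng(0)
CODE_LEN, SWARM, ITERS = 31, 40, 200
true_code = rng.choice([-1, 1], CODE_LEN)                 # unknown spreading code
received = true_code + 0.5 * rng.normal(size=CODE_LEN)    # noisy observation

def fitness(code):
    # Correlation with the received signal; higher means a better code estimate.
    return np.dot(code, received) / CODE_LEN

vel = rng.normal(size=(SWARM, CODE_LEN))                  # real-valued velocities
pos = rng.choice([-1, 1], (SWARM, CODE_LEN))              # binary (+/-1) positions
pbest = pos.copy()
pbest_fit = np.array([fitness(p) for p in pos])
gbest = pbest[np.argmax(pbest_fit)].copy()

w, c1, c2 = 0.7, 1.5, 1.5
for _ in range(ITERS):
    r1, r2 = rng.random((SWARM, CODE_LEN)), rng.random((SWARM, CODE_LEN))
    vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
    prob = 1.0 / (1.0 + np.exp(-vel))                     # map velocity to flip probability
    pos = np.where(rng.random((SWARM, CODE_LEN)) < prob, 1, -1)
    fit = np.array([fitness(p) for p in pos])
    better = fit > pbest_fit
    pbest[better], pbest_fit[better] = pos[better], fit[better]
    gbest = pbest[np.argmax(pbest_fit)].copy()

print("bit agreement with the true code:", np.mean(gbest == true_code))
```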

Keywords: Code estimation, Particle Swarm Optimization (PSO), Spread spectrum.

241 Game-Theory-Based Downlink Spectrum Allocation in Two-Tier Networks

Authors: Yu Zhang, Ye Tian, Fang Ye, Yixuan Kang

Abstract:

The capacity of conventional cellular networks has reached its upper bound, and this can be addressed by introducing low-cost, easy-to-deploy femtocells. The spectrum interference issue becomes more critical as value-added multimedia services grow in two-tier cellular networks. Spectrum allocation is one of the effective methods of interference mitigation. This paper proposes a game-theory-based OFDMA downlink spectrum allocation scheme aimed at reducing co-channel interference in two-tier femtocell networks. The framework is formulated as a non-cooperative game, wherein the femto base stations are the players and the available frequency channels are the strategies. The scheme takes full account of competitive behavior and fairness among stations. In addition, the utility function essentially reflects the interference from the standpoint of the channels. This work focuses on co-channel interference and puts forward a negative-logarithm interference function of the distance weight ratio to suppress co-channel interference within the same network tier. This scenario is more suitable for actual network deployment and the system possesses high robustness. According to the proposed mechanism, interference exists only when players employ the same channel for data communication. This paper focuses on implementing spectrum allocation in a distributed fashion. Numerical results show that the signal-to-interference-and-noise ratio can be noticeably improved through the spectrum allocation scheme and that the users' downlink quality of service can be satisfied. Besides, the simulation results show that the average spectrum efficiency of the cellular network can be significantly improved.

Keywords: Femtocell networks, game theory, interference mitigation, spectrum allocation.

240 Frequency Response Analysis of Reinforced-Soil Retaining Walls with Polymeric Strips

Authors: Ali Komakpanah, Maryam Yazdi

Abstract:

Few studies have been conducted on polymeric strips and their effect on the behavior of soil retaining walls. This paper presents the effect of frequency on the dynamic behavior of reinforced soil retaining walls with polymeric strips. The frequency content describes how the amplitude of a ground motion is distributed among different frequencies. Since the frequency content of an earthquake motion strongly influences the effects of that motion, the characterization of the motion cannot be completed without consideration of its frequency content. This research focuses on the maximum axial force in the reinforcements and the horizontal displacement of the reinforced walls. To clarify the dynamic behavior of reinforced soil retaining walls with polymeric strips, a numerical model based on the Finite Difference Method is employed. As the results indicate, the frequency of the input base acceleration has an important effect on the behavior of these structures. Because of resonance in the system, when the frequency of the input dynamic load is equal to the natural frequency of the system, the maximum horizontal displacement and the maximum axial forces in the polymeric strips occur. Moreover, polymeric strips increase the flexibility of the structure; their main advantages, namely a simple method of construction, homogeneous behavior with the soil, and long durability, are of great importance in dynamic analysis.

Keywords: dynamic analysis, frequency, polymeric strip, reinforced soil.

239 JaCoText: A Pretrained Model for Java Code-Text Generation

Authors: Jessica Lòpez Espejel, Mahaman Sanoussi Yahaya Alassan, Walid Dahhane, El Hassane Ettifouri

Abstract:

Pretrained transformer-based models have shown high performance in natural language generation tasks. However, a new wave of interest has surged: automatic programming language generation. This task consists of translating natural language instructions into programming code. Despite the fact that well-known pretrained models for language generation have achieved good performance in learning programming languages, effort is still needed in automatic code generation. In this paper, we introduce JaCoText, a model based on the Transformer neural network. It aims to generate Java source code from natural language text. JaCoText leverages the advantages of both natural language and code generation models. More specifically, we study some findings from the state of the art and use them to (1) initialize our model from powerful pretrained models, (2) explore additional pretraining on our Java dataset, (3) carry out experiments combining unimodal and bimodal data in the training, and (4) scale the input and output length during the fine-tuning of the model. Experiments conducted on the CONCODE dataset show that JaCoText achieves new state-of-the-art results.

Keywords: Java code generation, Natural Language Processing, Sequence-to-sequence Models, Transformers Neural Networks.

238 Trade-off Between NOX, Soot and EGR Rates for an IDI Diesel Engine Fuelled with JB5

Authors: M. Gomaa, A. J. Alimin, K. A. Kamarudin

Abstract:

Nowadays, the focus on renewable energy and alternative fuels has increased due to increasing oil prices, environmental pollution, and concern for preserving nature. Biodiesel has been known as an attractive alternative fuel, although biodiesel produced from edible oil is much more expensive than conventional diesel. Therefore, the use of biodiesel produced from non-edible oils is a much better option. Currently, Jatropha biodiesel (JBD) is receiving attention as an alternative fuel for diesel engines. Biodiesel is non-toxic, biodegradable, highly lubricating and highly renewable, and its use therefore produces a real reduction in petroleum consumption and carbon dioxide (CO2) emissions. Although biodiesel has many advantages, it still has several properties that need improvement, such as lower calorific value, lower effective engine power, higher emission of nitrogen oxides (NOX) and greater sensitivity to low temperature. Exhaust gas recirculation (EGR) is an effective technique to reduce NOX emission from diesel engines because it lowers the flame temperature and oxygen concentration in the combustion chamber. Some studies succeeded in reducing the NOX emission from biodiesel by EGR but observed increasing soot emission. The aim of this study was to investigate the engine performance and soot emission using blended Jatropha biodiesel with different EGR rates. A water-cooled, turbocharged CI engine with an indirect injection system was used for the investigation. Soot emission, NOX, CO2 and carbon monoxide (CO) were recorded, and various engine performance parameters were also evaluated.

Keywords: EGR, Jatropha biodiesel, NOX, Soot emission.

237 Improvement of Parallel Compressor Model in Dealing with Outlet Unequal Pressure Distribution

Authors: Kewei Xu, Jens Friedrich, Kevin Dwinger, Wei Fan, Xijin Zhang

Abstract:

The Parallel Compressor Model (PCM) is a simplified approach to predicting compressor performance with inlet distortions. In the PCM calculation, it is assumed that the sub-compressors' outlet static pressure is uniform, which simplifies the calculation procedure. However, if the compressor's outlet duct is not long and straight, this assumption frequently induces errors ranging from 10% to 15%. This paper provides a revised calculation method for PCM that can correct the error. The revised method employs the energy, momentum and continuity equations to acquire the needed parameters and replace the equal-static-pressure assumption. Based on the revised method, PCM is applied to two compression systems with different blade types. Predictions of their performance under non-uniform inlet conditions are obtained through the revised calculation method and are used to evaluate its efficiency. Validating the results against experimental data, it is found that, although small deviations occur, the calculated results agree well with the experimental data, with errors ranging from 0.1% to 3%. This shows that the revised calculation method of PCM has great advantages in predicting the performance of a distorted compressor with a limited exhaust duct.

Keywords: Parallel Compressor Model (PCM), Revised Calculation Method, Inlet Distortion, Outlet Unequal Pressure Distribution.

236 Utilizing the Analytic Hierarchy Process in Improving Performances of Blind Judo

Authors: Hyun Chul Cho, Hyunkyoung Oh, Hyun Yoon, Jooyeon Jin, Jae Won Lee

Abstract:

Identifying, structuring, and ranking the most important factors related to improving athletes' performances could pave the way for an improved training system. The purpose of this study was to identify the relative importance of factors for improving the performance of judo athletes with visual impairments, including blindness, by using the Analytic Hierarchy Process (AHP). After reviewing the literature, factors affecting performance in blind judo were selected. A group of experts reviewed the first draft of the questionnaire, and the finally selected performance factors were classified into the major categories of technique, physical fitness, and psychology. Later, a pre-selected expert group was asked to review the final version of the questionnaire and confirm the priorities of the performance factors. The order of priority was determined by performing pairwise comparisons using Expert Choice 2000. Results indicated that “grappling” (.303) and “throwing” (.234) were the most important lower-hierarchy factors for blind judo skills. In addition, the most important physical factor affecting performance was “muscular strength and endurance” (.238). Further, among the psychological factors, “competitive anxiety” (.393) was the most important factor affecting performance. It is important to offer psychological skills training to reduce the anxiety of judo athletes with visual impairments and blindness, so they can compete in their optimal states. These findings offer insights into what should be considered when determining factors to improve the performance of judo athletes with visual impairments and blindness.
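
As an illustration of the pairwise-comparison step, the sketch below derives AHP priority weights from a small comparison matrix via the principal eigenvector; the matrix values are invented for illustration and are not the experts' judgments from the study.

```python
import numpy as np

# Illustrative AHP sketch: priority weights from a pairwise comparison matrix.
# The matrix below is a made-up example, not the study's expert data.
A = np.array([
    [1.0, 2.0, 3.0],    # e.g. technique vs. physical fitness vs. psychology
    [1/2, 1.0, 2.0],
    [1/3, 1/2, 1.0],
])

eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)                 # principal eigenvalue
weights = np.abs(eigvecs[:, k].real)
weights /= weights.sum()                    # normalized priority vector

# Consistency ratio (RI = 0.58 is the standard random index for n = 3).
n = A.shape[0]
ci = (eigvals.real[k] - n) / (n - 1)
cr = ci / 0.58
print("priorities:", weights.round(3), "consistency ratio:", round(cr, 3))
```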

Keywords: Analytic hierarchy process, blind athlete, judo, sport performance.

235 Efficient Real-time Remote Data Propagation Mechanism for a Component-Based Approach to Distributed Manufacturing

Authors: V. Barot, S. McLeod, R. Harrison, A. A. West

Abstract:

Manufacturing industries face a crucial change as products and processes are required to be easily and efficiently reconfigurable and reusable. In order to stay competitive and flexible, situations also demand the global distribution of enterprises, which requires the implementation of efficient communication strategies. A prototype system called the "Broadcaster" has been developed under the assumption that the control environment description has been engineered using the component-based system paradigm. This prototype distributes information to a number of globally distributed partners via the adoption of a circular-buffer-based data processing mechanism. The work highlighted in this paper includes the implementation of this mechanism in the domain of the manufacturing industry. The proposed solution enables real-time remote propagation of machine information to a number of distributed supply chain client resources, such as an HMI, VRML-based 3D views and remote client instances, regardless of their distribution nature and/or mechanisms. This approach is presented together with a set of evaluation results. The authors' main concern is the reliability and the performance metrics of the adopted approach. Performance evaluation is carried out in terms of the response times taken to process the data in this domain, compared with an alternative data processing implementation such as the linear queue mechanism. Based on the evaluation results obtained, the authors justify the benefits achieved from the proposed implementation and highlight further research work to be carried out.
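
A minimal sketch of the circular-buffer idea that the abstract contrasts with a linear queue; the class, capacity and messages below are illustrative assumptions, not the Broadcaster's actual implementation.

```python
# Minimal fixed-size circular buffer sketch (illustrative only): writers never
# block because the oldest entry is overwritten when the buffer is full.
class CircularBuffer:
    def __init__(self, capacity):
        self.buf = [None] * capacity
        self.capacity = capacity
        self.head = 0          # next slot to write
        self.count = 0

    def push(self, item):
        # Overwrites the oldest entry when full.
        self.buf[self.head] = item
        self.head = (self.head + 1) % self.capacity
        self.count = min(self.count + 1, self.capacity)

    def snapshot(self):
        # Returns items oldest-first, e.g. for distribution to remote clients.
        start = (self.head - self.count) % self.capacity
        return [self.buf[(start + i) % self.capacity] for i in range(self.count)]

rb = CircularBuffer(4)
for msg in ["m1", "m2", "m3", "m4", "m5"]:
    rb.push(msg)
print(rb.snapshot())   # ['m2', 'm3', 'm4', 'm5'] -- m1 was overwritten
```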

Keywords: Broadcaster, circular buffer, Component-based, distributed manufacturing, remote data propagation.

234 Corporate Social Responsibility in an Experimental Market

Authors: Nikolaos Georgantzis, Efi Vasileiou

Abstract:

We present results from experimental price-setting oligopolies in which green firms undertake different levels of energy-saving investments motivated by public subsidies and demand-side advantages. We find that consumers reveal a higher willingness to pay for greener sellers' products. This observation, in conjunction with the fact that greener sellers set higher prices, is compatible with the use and interpretation of energy-saving behaviour as a differentiation strategy. However, sellers do not exploit the resulting advantage through sufficiently high price-cost margins, because they seem trapped in “run to stay still” competition. Regarding the use of public subsidies for energy-saving sellers, we uncover an undesirable crowding-out effect on consumers' intrinsic tendency to support green manufacturers. Namely, consumers may be less willing to support a green seller whose energy-saving strategy entails a direct financial benefit. Finally, we disentangle two alternative motivations for consumers' attraction to pro-social firms: first, the self-interested recognition of the firm's contribution to public and private welfare and, second, the need to compensate a firm for the cost entailed in each pro-social action. Our results show the prevalence of the former over the latter.

Keywords: Corporate social responsibility, energy savings, public good, experiments, vertical differentiation, altruism.

233 On Pooling Different Levels of Data in Estimating Parameters of Continuous Meta-Analysis

Authors: N. R. N. Idris, S. Baharom

Abstract:

A meta-analysis may be performed using aggregate data (AD) or individual patient data (IPD). In practice, studies may be available at both the IPD and AD levels. In this situation, both the IPD and AD should be utilised in order to maximize the available information. The statistical advantages of combining studies from different levels have not been fully explored. This study aims to quantify the statistical benefits of including available IPD when conducting a conventional summary-level meta-analysis. Simulated meta-analyses were used to assess the influence of the levels of data on overall meta-analysis estimates based on IPD-only, AD-only and the combination of IPD and AD (mixed data, MD), under different study scenarios. The percentage relative bias (PRB), root mean-square error (RMSE) and coverage probability were used to assess the efficiency of the overall estimates. The results demonstrate that available IPD should always be included in a conventional meta-analysis using summary-level data, as it significantly increases the accuracy of the estimates. On the other hand, if more than 80% of the available data are at the IPD level, including the AD does not provide significant differences in terms of the accuracy of the estimates. Additionally, combining the IPD and AD has a moderating effect on the bias of the treatment effect estimates, as the IPD tends to overestimate the treatment effects, while the AD tends to produce underestimates. These results may provide some guidance in deciding whether a significant benefit is gained by pooling the two levels of data when conducting a meta-analysis.
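
For concreteness, the sketch below computes the two accuracy metrics named in the abstract over simulated replications; the true effect and the simulated estimates are made-up values, not the study's simulation design.

```python
import numpy as np

# Illustrative computation of percentage relative bias (PRB) and RMSE across
# simulated replications; the numbers below are invented for illustration.
rng = np.random.default_rng(1)
true_effect = 0.5
estimates = true_effect + rng.normal(0.02, 0.1, size=1000)   # e.g. MD estimates

def percentage_relative_bias(est, truth):
    return 100.0 * (np.mean(est) - truth) / truth

def rmse(est, truth):
    return np.sqrt(np.mean((est - truth) ** 2))

print("PRB (%):", round(percentage_relative_bias(estimates, true_effect), 2))
print("RMSE   :", round(rmse(estimates, true_effect), 3))
```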

Keywords: Aggregate data, combined-level data, Individual patient data, meta analysis.

232 Oxygen Transfer by Multiple Inclined Plunging Water Jets

Authors: Surinder Deswal

Abstract:

There has been growing interest in oxygenation by plunging water jets in the last few years due to their inherent advantages, such as energy efficiency and low operating cost. Though a lot of work has been reported on oxygen transfer by single plunging water jets, very few studies have been carried out using multiple plunging jets. In this paper, the volumetric oxygen-transfer coefficient and oxygen-transfer efficiency have been studied experimentally for multiple inclined plunging jets (with a jet plunge angle of 60°) in a pool of water for different configurations, in terms of varying numbers of jets and jet diameters. This research suggests that the volumetric oxygen-transfer coefficient and oxygen-transfer efficiency of multiple inclined plunging jets for an air-water system are significantly higher than those of a single vertical or inclined plunging jet for the same flow area and otherwise similar conditions. The study also reveals that oxygen transfer increases with the number of jets under similar conditions, which is most advantageous and energy-efficient in practical situations where large volumes of wastewater are to be treated. A relationship between the volumetric oxygen-transfer coefficient and jet parameters is also proposed. The suggested relationship predicts the volumetric oxygen-transfer coefficient for multiple inclined plunging jets within a scatter of ±15 percent. The relationship will be quite useful in scale-up and in deciding the optimum configuration of a multiple inclined plunging jet aeration system.

Keywords: Multiple inclined plunging jets, jet plunge angle, volumetric oxygen-transfer coefficient, oxygen-transfer efficiency.

231 Modeling of Microelectromechanical Systems Diaphragm Based Acoustic Sensor

Authors: Vasudha Hegde, Narendra Chaulagain, H. M. Ravikumar, Sonu Mishra, Siva Yellampalli

Abstract:

Acoustic sensors are extensively used nowadays not only for sensing and condition monitoring applications but also for small-scale energy harvesting applications to power wireless sensor networks (WSN), due to their inherent advantages. The natural frequency of the structure plays a major role in energy harvesting applications, since the key sensing element has to operate at the resonant frequency. In this paper, a circular-diaphragm-based MEMS acoustic sensor is modelled by a Lumped Element Model (LEM) and the natural frequency is compared with that of the simulated model using the Finite Element Method (FEM) tool COMSOL Multiphysics. The sensor has a circular diaphragm of 3000 µm radius and 30 µm thickness to withstand the high SPL (Sound Pressure Level) and also to withstand the various fabrication steps. A piezoelectric ZnO layer of 1 µm thickness, sandwiched between two aluminium electrodes of 0.5 µm thickness, is coated on the diaphragm. Further, a channel of 3000 µm radius and 270 µm length is connected at the bottom of the diaphragm. The natural frequency of the structure obtained by the LEM is approximately 16.6 kHz, which closely matches that of the simulated structure under suitable approximations.

Keywords: Acoustic sensor, diaphragm based, lumped element modeling, natural frequency, piezoelectric.

230 Experimental Study on the Influence of Tool Materials on the Drilling of Thick Stacked Plate of 2219 Aluminum Alloy

Authors: G. H. Li, M. Liu, H. J. Qi, Q. Zhu, W. Z. He

Abstract:

Drilling and riveting processes are widely used in the assembly of carrier rockets, which makes the efficiency and quality of drilling important factors affecting the assembly process. Addressing the problems existing in the drilling of thick stacked plates (thickness larger than 10 mm) of carrier rockets, such as drill breakage, loud noise and burrs, an experimental study of the influence of tool material on drilling was carried out. The cutting force was measured with a piezoelectric dynamometer, the aperture was measured with an outline projector, and the burrs were observed and measured with a digital stereo microscope. Through these measurements, the effects of tool material on drilling were analyzed in terms of drilling force, diameter, and burr. The results show that, compared with carbide and coated carbide drills, the drilling force of the high speed steel drill is larger. However, the application of high speed steel also has some advantages: a higher number of holes can be obtained, the burr height is small, the exit is smooth with fewer slim burrs, and the tool wears but does not fracture. Therefore, the high speed steel tool is suitable for the drilling of thick stacked plates of 2219 aluminum alloy.

Keywords: 2219 aluminum alloy, thick stacked plate, drilling, tool material.

229 Conversion in Chemical Reactors using Hollow Cylindrical Catalyst Pellet

Authors: Mohammad Asif

Abstract:

Heterogeneous catalysis is vital for a number of chemical, refinery and pollution control processes. The use of catalyst pellets of hollow cylindrical shape provides several distinct advantages over other common shapes, and can therefore help to enhance conversion levels in reactors. Better utilization of the catalytic material is probably the most notable of these features, due to the absence of the pellet core, which helps to significantly lower the effect of internal transport resistance. This is reflected in the enhancement of the effectiveness factor. For the case of first-order irreversible kinetics, a substantial increase in the effectiveness factor can be obtained by varying the shape parameters. Important shape parameters of a finite hollow cylinder are the ratio of the inside to the outside radii (κ) and the height-to-diameter ratio (γ). A high value of κ generally helps to enhance the effectiveness factor. On the other hand, lower values of the effectiveness factor are obtained when the height and the diameter are comparable. Thus, a departure of the parameter γ from unity favors a higher effectiveness factor. Since a higher effectiveness factor is a measure of greater utilization of the catalytic material, higher conversion levels can be achieved using hollow cylindrical pellets with optimized shape parameters.
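
For background, the classical first-order relation between the Thiele modulus and the effectiveness factor is sketched below in slab form with a generalized characteristic length; this is a standard textbook result, not the paper's finite-hollow-cylinder solution in κ and γ.

```latex
% Standard first-order effectiveness factor (slab geometry, generalized
% Thiele modulus); the finite hollow cylinder treated in the paper requires
% its own solution in terms of kappa and gamma, not reproduced here.
\[
  \phi = \frac{V_p}{S_x}\sqrt{\frac{k}{D_e}}, \qquad
  \eta = \frac{\tanh\phi}{\phi},
\]
% where $V_p/S_x$ is the pellet volume-to-external-surface ratio, $k$ the
% first-order rate constant and $D_e$ the effective diffusivity; $\eta \to 1$
% as $\phi \to 0$ (kinetic control) and $\eta \approx 1/\phi$ for large $\phi$.
```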

Keywords: Finite hollow cylinder, Catalyst pellet, Effectiveness factor, Thiele Modulus, Conversion

228 A Renovated Cook's Distance Based On The Buckley-James Estimate In Censored Regression

Authors: Nazrina Aziz, Dong Q. Wang

Abstract:

There have been various methods created based on regression ideas to resolve the problem of data sets containing censored observations, i.e. the Buckley-James method, Miller's method, the Cox method, and the Koul-Susarla-Van Ryzin estimators. Even though comparison studies show the Buckley-James method performs better than some other methods, it is still rarely used by researchers, mainly because of the limited diagnostic analysis developed for the Buckley-James method thus far. Therefore, a diagnostic tool for the Buckley-James method is proposed in this paper. It is called the renovated Cook's distance, RD*_i, and has been developed based on Cook's idea. The renovated Cook's distance RD*_i has advantages (depending on the analyst's demands) over (i) the change in the fitted value for a single case, DFIT*_i, as it measures the influence of case i on all n fitted values Ŷ* (not just the fitted value for case i, as DFIT*_i does), and (ii) the change in the estimates of the coefficients when the ith case is deleted, DBETA*_i, since DBETA*_i corresponds to the number of variables p, so it is usually easier to look at a single diagnostic measure such as RD*_i in which information from the p variables is considered simultaneously. Finally, an example using the Stanford Heart Transplant data is provided to illustrate the proposed diagnostic tool.
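
For reference, the classical Cook's distance on which the renovated measure builds can be written as follows; the exact form of RD*_i for the Buckley-James estimator is defined in the paper and is not reproduced here.

```latex
% Classical Cook's distance (uncensored linear regression), shown only as the
% base idea behind the renovated measure:
\[
  D_i \;=\; \frac{\bigl(\hat{\beta} - \hat{\beta}_{(i)}\bigr)^{\!\top}
              \mathbf{X}^{\top}\mathbf{X}
              \bigl(\hat{\beta} - \hat{\beta}_{(i)}\bigr)}{p\,\hat{\sigma}^{2}}
  \;=\; \frac{\sum_{j=1}^{n}\bigl(\hat{y}_{j} - \hat{y}_{j(i)}\bigr)^{2}}{p\,\hat{\sigma}^{2}},
\]
% where $\hat{\beta}_{(i)}$ and $\hat{y}_{j(i)}$ are the coefficient estimates
% and fitted values obtained with case $i$ deleted, $p$ is the number of
% parameters and $\hat{\sigma}^{2}$ the residual variance estimate.
```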

Keywords: Buckley-James estimators, censored regression, censored data, diagnostic analysis, product-limit estimator, renovated Cook's Distance.

227 The Client-Supplier Relationship in Managing Innovation: Delineating Defence Industry First Mover Challenges within the Government Contract Competition

Authors: Edward Pol

Abstract:

All companies are confronted with the need to innovate in order to meet market demands. In so doing, they are challenged with the dilemma of whether to aim to be first into the market with a new innovative product, or to deliberately wait and learn from a pioneer's mistakes, potentially avoiding higher risks. It is therefore important to critically understand, from a first mover advantage and disadvantage perspective, the decision-making implications of a defence industry transformation set off by an innovative paradigm shift. This paper will argue that the type of industry characteristics matters, especially when considering what role the clients play in the innovation process and what their level of influence is. Through qualitative case study research, this inquiry focuses on first mover advantages and first mover disadvantages with a view to establishing practical and value-added academic findings, by focusing on specific industries where the clients play an active role in cooperation with the supplier's innovation. The resulting findings will help managers to mitigate risk in innovative technology introduction. A selection of defence industry innovations is specifically chosen because the client-supplier relationship typically differs from that in traditional first mover research. In this instance, case studies referencing vertical-take-off-and-landing defence equipment innovations will be used.

Keywords: innovation, pioneer, first mover advantage, first mover disadvantage, risk

226 Transmission Line Congestion Management Using Hybrid Fish-Bee Algorithm with Unified Power Flow Controller

Authors: P. Valsalal, S. Thangalakshmi

Abstract:

There is a widespread changeover in the electrical power industry from the old-style monopolistic structure towards a horizontally distributed competitive structure to meet the demand of rising consumption. When the transmission lines of a deregulated system are incapable of accommodating the entire service needs, the lines become overloaded or congested. The coordinator between customer and power producer, nominated as the Independent System Operator (ISO), has to lessen the congestion without violating transmission line restrictions. Among the existing approaches to congestion management, the most frequently used are generation rescheduling and load curtailment. There is a limit to rescheduling the generators, and further loads may not be served with the prevailing resources unless more private power producers are added to the system at considerably higher cost. Hence, congestion is relieved by appropriate Flexible AC Transmission Systems (FACTS) devices, which boost the existing transfer capacity of transmission lines. The FACTS device chosen here, namely the Unified Power Flow Controller (UPFC), is preferred; its correct placement is vital, and it should be positioned in the most congested line. Hence, the weak line is identified by using a power flow performance index with a new objective function and the proposed hybrid Fish-Bee algorithm. Further, locating the UPFC at the appropriate line reduces the branch loading and minimizes the voltage deviation. The power transfer capacity of the lines is determined with and without UPFC in the identified congested line of the IEEE 30-bus system, and the simulated results are compared with prevailing algorithms. It is observed that the transfer capacity of the existing line is increased with the presented algorithm, thus alleviating the congestion.
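
The abstract does not spell out its performance index, but a commonly used active power flow performance index for ranking congested lines has the following form, given here only as an illustrative reference.

```latex
% Commonly used real-power line-loading performance index (illustrative
% reference only; the paper's own objective function may differ):
\[
  PI \;=\; \sum_{l=1}^{N_L} \frac{w_l}{2n}
           \left(\frac{P_l}{P_l^{\max}}\right)^{2n},
\]
% where $N_L$ is the number of lines, $P_l$ the real power flow on line $l$,
% $P_l^{\max}$ its rating, $w_l$ a weighting coefficient and $n$ an integer
% exponent; lines with $P_l$ close to or above $P_l^{\max}$ dominate the sum.
```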

Keywords: Available line transfer capability, congestion management, FACTS device, hybrid fish-bee algorithm, ISO, UPFC.

225 Economical and Technical Analysis of Urban Transit System Selection Using TOPSIS Method According to Constructional and Operational Aspects

Authors: Ali Abdi Kordani, Meysam Rooyintan, Sid Mohammad Boroomandrad

Abstract:

Nowadays, one of the most important problems in megacities is public transportation and satisfying citizens with this system in order to decrease traffic congestion and air pollution. Accordingly, to improve transit for passengers and increase travel safety, new transportation systems such as Bus Rapid Transit (BRT), tram, and monorail have expanded, each with different merits and demerits. That is why comparing different systems for a systematic selection of public transportation systems in a big city like Tehran, which has numerous problems in terms of traffic and pollution, is essential. This paper investigates the advantages and feasibility of using monorail, tram and BRT systems, which are widely used in most megacities all over the world. For Tehran, using the SPSS statistical analysis software and the TOPSIS method, these three modes are compared to each other and the results are assessed. Experts experienced in the transportation field answered the prepared matrix questionnaire to evaluate each public transportation mode (tram, monorail, and BRT). The results according to the experts' judgments indicate that monorail has the first priority, tram the second, and BRT the third, according to the considered indices, such as execution costs, wasted time, depreciation, pollution, operation costs, travel time, passenger satisfaction, benefit-to-cost ratio and traffic congestion.
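
A compact sketch of the TOPSIS ranking step for three transit modes; the decision matrix, weights and criterion directions are invented for illustration and are not the study's expert data.

```python
import numpy as np

# Illustrative TOPSIS sketch for ranking three transit modes against four
# criteria; all numbers below are made up, not the study's data.
alternatives = ["BRT", "Tram", "Monorail"]
X = np.array([          # rows: alternatives, cols: criteria
    [4.0, 6.0, 5.0, 7.0],
    [6.0, 5.0, 6.0, 6.0],
    [8.0, 4.0, 7.0, 5.0],
])
weights = np.array([0.3, 0.3, 0.2, 0.2])
benefit = np.array([True, False, True, False])   # False = cost criterion (smaller is better)

R = X / np.linalg.norm(X, axis=0)                # vector normalization
V = R * weights                                  # weighted normalized matrix
ideal = np.where(benefit, V.max(axis=0), V.min(axis=0))
anti  = np.where(benefit, V.min(axis=0), V.max(axis=0))
d_pos = np.linalg.norm(V - ideal, axis=1)
d_neg = np.linalg.norm(V - anti, axis=1)
closeness = d_neg / (d_pos + d_neg)              # relative closeness to the ideal

for name, c in sorted(zip(alternatives, closeness), key=lambda t: -t[1]):
    print(f"{name}: {c:.3f}")
```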

Keywords: Bus Rapid Transit, Costs, Monorail, Pollution, Tram.

224 Advantages of Combining Solar Greenhouse System and Trombe Wall in Hot and Dry Climate and Housing Design: The Case of Isfahan

Authors: Yalda Safaralipour, Seyed Ahmad Shahgoli

Abstract:

Nowadays, given the over-consumption of fossil energy in buildings, especially residential buildings, and considering the increase in population, an energy shortage crisis in the near future is predictable. The recent performance of developed countries in construction aimed at decreasing fossil energy use shows that these countries have understood the coming crisis and have taken reasonable and fundamental actions in this regard. Iranian architecture, with several thousand years of history, has acquired and applied invaluable experience in designing for, adapting to and coordinating with nature. Architectural studies during recent decades show that imitating modern western architecture results in high energy wastage, besides the fact that it is not reasonably adapted to the habits and customs of the people, unlike the architecture of the past, which was compatible with and adapted to the climatic conditions; this necessitates the optimal use of renewable energies more than ever. This paper studies the problems of design, execution and living in today's houses, reviews the characteristics of climatic elements, paying special attention to the performance of the Trombe wall and the solar greenhouse in traditional houses, and offers some suggestions for combining these two elements as a climatic strategy.

Keywords: Climatic Designing, Housing in Hot & Dry Area, Solar Greenhouse, Trombe Wall.

223 Improvement of Voltage Profile of Grid Integrated Wind Distributed Generation by SVC

Authors: Fariba Shavakhi Zavareh, Hadi Fotoohabadi, Reza Sedaghati

Abstract:

Due to the continuous increase in load demand, the identification of weak buses and the improvement of the voltage profile and power losses in the context of voltage stability problems have become major concerns for large, complex, interconnected power systems. The objective of this paper is to review the impact of a Flexible AC Transmission System (FACTS) controller on an electrical network with grid-connected wind generators for maintaining voltage stability. Wind energy is a growing renewable energy source due to its several advantages. The influence of wind generators on power quality is a significant issue; non-uniform power production causes variations in system voltage and frequency. Therefore, wind farms require high reactive power compensation; advances in high-power semiconductor devices have led to the development of FACTS. FACTS devices such as the SVC inject reactive power into the system, which helps in maintaining a better voltage profile. The performance is evaluated on an IEEE 14-bus system; two wind generators are connected at low-voltage buses to meet the increased load demand, and SVC devices are integrated at the buses with wind generators to maintain voltage stability. Power flows, nodal voltage magnitudes and angles of the power network are obtained by iterative solutions using MIPOWER.

Keywords: Voltage Profile, FACTS Device, SVC, Distributed Generation.

222 Analysis of Hard Turning Process of AISI D3 - Thermal Aspects

Authors: B. Varaprasad, C. Srinivasa Rao

Abstract:

In the manufacturing sector, hard turning has emerged as a vital machining process for cutting hardened steels. Besides its many advantages, the hard turning operation has to be implemented so as to achieve close tolerances in terms of surface finish, high product quality, reduced machining time, low operating cost and environmentally friendly characteristics. In the present study, a three-dimensional CAE (Computer Aided Engineering) simulation of hard turning using the commercial software DEFORM 3D is compared to experimental results for stresses, temperatures and tool forces in machining AISI D3 steel with mixed ceramic inserts (CC6050). In the present analysis, orthogonal cutting models are proposed, considering several processing parameters such as cutting speed, feed, and depth of cut. Exhaustive friction modeling at the tool-work interfaces is carried out. Work material flow around the cutting edge is carefully modeled with an adaptive re-meshing simulation capability. In the process simulations, the feed rate and cutting speed are constant (i.e., 0.075 mm/rev and 155 m/min), and the analysis is focused on stresses, forces, and temperatures during machining. Close agreement is observed between the CAE simulation and the experimental values.

Keywords: Hard-turning, computer-aided engineering, computational machining, finite element method.

221 Discovering Complex Regularities: from Tree to Semi-Lattice Classifications

Authors: A. Faro, D. Giordano, F. Maiorana

Abstract:

Data mining uses a variety of techniques, each of which is useful for some particular task. It is important to have a deep understanding of each technique and to be able to perform sophisticated analysis. In this article we describe a tool built to simulate a variation of the Kohonen network to perform unsupervised clustering and support the entire data mining process up to results visualization. A graphical representation helps the user find a strategy to optimize classification by adding, moving or deleting a neuron in order to change the number of classes. The tool is able to automatically suggest a strategy to optimize the number of classes, and it also supports both tree classifications and semi-lattice organizations of the classes, giving users the possibility of passing from one class to the ones with which it shares some aspects. Examples of using tree and semi-lattice classifications are given to illustrate advantages and problems. The tool is applied to classify macroeconomic data reporting the most developed countries' imports and exports. It is possible to classify the countries based on their economic behaviour and to use the tool to characterize the commercial behaviour of a country in a selected class from the analysis of the positive and negative features that contribute to class formation. Possible interrelationships between the classes and their meaning are also discussed.
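
A minimal sketch of the classical Kohonen self-organizing map update, of which the described tool simulates a variation; the grid size, learning schedule and random data are illustrative assumptions, not the article's implementation.

```python
import numpy as np

# Illustrative Kohonen (SOM) training loop on synthetic data; all settings
# below are assumptions for demonstration only.
rng = np.random.default_rng(0)
data = rng.normal(size=(500, 4))          # e.g. per-country import/export features
GRID, DIM, EPOCHS = (5, 5), 4, 20
W = rng.normal(size=GRID + (DIM,))        # 5x5 map of 4-dimensional prototypes
coords = np.stack(np.meshgrid(*[np.arange(g) for g in GRID], indexing="ij"), axis=-1)

for epoch in range(EPOCHS):
    lr = 0.5 * (1 - epoch / EPOCHS)                   # decaying learning rate
    sigma = 2.0 * (1 - epoch / EPOCHS) + 0.5          # decaying neighborhood radius
    for x in data:
        dists = np.linalg.norm(W - x, axis=-1)
        bmu = np.unravel_index(np.argmin(dists), GRID)          # best matching unit
        grid_d2 = np.sum((coords - np.array(bmu)) ** 2, axis=-1)
        h = np.exp(-grid_d2 / (2 * sigma ** 2))                 # neighborhood function
        W += lr * h[..., None] * (x - W)                        # pull neighbors toward x

# Each data point is then labeled by its BMU, giving the unsupervised classes.
```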

Keywords: Unsupervised classification, Kohonen networks, macroeconomics, Visual data mining, Cluster interpretation.

220 PSO Based Weight Selection and Fixed Structure Robust Loop Shaping Control for Pneumatic Servo System with 2DOF Controller

Authors: Randeep Kaur, Jyoti Ohri

Abstract:

This paper proposes a new technique to design a fixed-structure robust loop shaping controller for a pneumatic servo system. A new method based on a particle swarm optimization (PSO) algorithm for tuning the weighting function parameters to design an H∞ controller is presented. The PSO algorithm is used to minimize the infinity norm of the transfer function of the nominal closed-loop system to obtain the optimal parameters of the weighting functions. The optimal stability margin is used as an objective in the PSO for selecting the optimal weighting parameters; it is shown that the proposed method can simplify the design procedure of H∞ control and obtain an optimal robust controller for the pneumatic servo system. In addition, the order of the proposed controller is much lower than that of the conventional robust loop shaping controller, making it easy to implement in practice. A two-degree-of-freedom (2DOF) control design procedure is also proposed to improve tracking performance in the face of noise and disturbance. Simulation results demonstrate the advantages of the proposed controller in terms of its simple structure and robustness against plant perturbations and disturbances.

Keywords: Robust control, Pneumatic Servosystem, PSO, H∞ control, 2DOF.

219 A Generic Middleware to Instantly Sync Intensive Writes of Heterogeneous Massive Data via Internet

Authors: Haitao Yang, Zhenjiang Ruan, Fei Xu, Lanting Xia

Abstract:

Industry data centers often need to sync data changes reliably and instantly from a large number of heterogeneous autonomous relational databases accessed via the not-so-reliable Internet, for which a practical generic sync middleware of low maintenance and operation cost is much wanted. To meet this demand, this paper presents a generic sync middleware system (GSMS), which has been developed, applied and optimized since 2006. Its guiding principles, and advantages, are that it is SyncML-compliant and transparent to data application layer logic without referring to implementation details of the databases synced, does not rely on the operating systems of the host computers deployed, and is lightweight in construction and hence of low cost. Given these hard commitments in developing GSMS, this paper stresses the significant optimization breakthrough of the GSMS sync delay being well below a fraction of a millisecond per record synced. A series of stress tests of GSMS sync performance was conducted as a persuasive example, in which the source relational database underwent a broad range of write loads (from one thousand to one million intensive writes within a few minutes). All these tests showed that the performance of GSMS is competent and smooth even under extreme write loads.

Keywords: Heterogeneous massive data, instantly sync intensive writes, Internet generic middleware design, optimization.

218 Blueprinting of Normalized Supply Chain Processes: Results in Implementing Normalized Software Systems

Authors: Bassam Istanbouli

Abstract:

With technology evolving every day and with the increase in global competition, industries are always under pressure to be the best. They need to provide good quality products at competitive prices, when and how the customer wants them. In order to achieve this level of service, products and their respective supply chain processes need to be flexible and evolvable; otherwise changes will be extremely expensive, slow and accompanied by many combinatorial effects. Those combinatorial effects impact the whole organizational structure from management, financial, documentation and logistics perspectives, and especially from the information system Enterprise Resource Planning (ERP) perspective. By applying the normalized systems concept/theory to segments of the supply chain, we believe these effects can be minimized, especially at the time of launching an organization's global software project. The purpose of this paper is to point out that if an organization wants to develop software from scratch or implement existing ERP software for its business needs, and if its business processes are normalized and modular, then most probably this will yield a normalized and modular software system that can easily be modified when the business evolves. Another important goal of this paper is to increase awareness regarding the design of the business processes in a software implementation project. If the blueprints created are normalized, then the software developers and configurators will use those modular blueprints to map them into modular software. This paper only prepares the ground for further studies; the above concept will be supported by going through the steps of developing, configuring and/or implementing a software system for an organization using two methods: the Software Development Lifecycle (SDLC) method and the Accelerated SAP (ASAP) implementation method. Both methods start with the customer requirements, then the blueprinting of the business processes, and finally the mapping of those processes into a software system. Since those requirements and processes are the starting point of the implementation process, normalizing those processes will result in normalized software.

Keywords: Blueprint, ERP, SDLC, Modular.

217 Students’ Level of Knowledge Construction and Pattern of Social Interaction in an Online Forum

Authors: K. Durairaj, I. N. Umar

Abstract:

The asynchronous discussion forum is one of the most widely used activities in learning management system environments. Online forums allow participants to interact and construct knowledge, and can be used to complement face-to-face sessions in blended learning courses. However, the extent to which students perceive the benefits or advantages of the forum remains to be seen. Through content and social network analyses, instructors will be able to gauge the students' engagement and knowledge construction levels. Thus, this study aims to analyze the students' level of knowledge construction and the level of participation that occurs through online discussion. It also attempts to investigate the relationship between the level of knowledge construction and the students' social interaction patterns. The sample involves 23 students undertaking a master's course in one public university in Malaysia. The asynchronous discussion forum was conducted for three weeks as part of the course requirements. The findings indicate that the level of knowledge construction is quite low. Also, the density value of 0.11 indicates that the overall communication among the participants in the forum is low. This study reveals strong and significant correlations between SNA measures (in-degree centrality, out-degree centrality) and the level of knowledge construction. Thus, allocating these active students to different groups helps interactive discussion take place. Finally, based upon the findings, some recommendations to increase the students' level of knowledge construction, and also for further research, are proposed.
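
A small sketch of the two SNA measures named in the abstract, computed with networkx on an invented reply network; the participants and edges are illustrative only, not the study's forum data.

```python
import networkx as nx

# Illustrative reply network: an edge A -> B means student A replied to B.
replies = [("s1", "s2"), ("s2", "s3"), ("s3", "s1"),
           ("s4", "s1"), ("s1", "s5"), ("s5", "s2")]
G = nx.DiGraph(replies)

density = nx.density(G)                      # proportion of possible ties present
in_c = nx.in_degree_centrality(G)            # how often a student is replied to
out_c = nx.out_degree_centrality(G)          # how often a student replies

print(f"density = {density:.2f}")
for s in sorted(G.nodes):
    print(s, "in:", round(in_c[s], 2), "out:", round(out_c[s], 2))
```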

Keywords: Asynchronous Discussion Forums, Content Analysis, Knowledge Construction, Social Network Analysis.

216 The Loess Regression Relationship Between Age and BMI for both Sydney World Masters Games Athletes and the Australian National Population

Authors: Joe Walsh, Mike Climstein, Ian Timothy Heazlewood, Stephen Burke, Jyrki Kettunen, Kent Adams, Mark DeBeliso

Abstract:

Thousands of masters athletes participate quadrennially in the World Masters Games (WMG), yet this cohort of athletes remains proportionately under-investigated. In the context of a growing global obesity pandemic and the benefits of physical activity across the lifespan, the BMI trends of this unique population were of particular interest. The nexus between health, physical activity and aging is complex and has raised much interest in recent times, due to the realization that a multifaceted approach is necessary in order to counteract the obesity pandemic. By investigating age-based trends within a population adhering to competitive sport at older ages, further insight might be gleaned to assist in understanding one of the many factors influencing this relationship. BMI was derived using data gathered on a total of 6,071 masters athletes (51.9% male, 48.1% female) aged 25 to 91 years (mean = 51.5, s = ±9.7), competing at the Sydney World Masters Games (2009). Using linear and loess regression, it was demonstrated that the usual tendency for the prevalence of higher BMI to increase with age was reversed in the sample. This trend reversal was repeated for both the male-only and female-only subsets of the sample participants, indicating the possibility of improved BMI with increasing age for both the sample as a whole and these individual subgroups. This evidence of improved classification in one index of health (reduced BMI) for masters athletes, when compared to the general population, implies either that there are improved levels of this index of health with aging due to adherence to sport, or possibly that the reduced BMI is advantageous and contributes to this cohort adhering (or being attracted) to masters sport at older ages.
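
A brief sketch of a loess (lowess) fit of BMI against age using statsmodels; the synthetic data mimic the described downward trend and are not the WMG measurements.

```python
import numpy as np
from statsmodels.nonparametric.smoothers_lowess import lowess

# Illustrative loess fit on synthetic BMI-vs-age data (not the study's data).
rng = np.random.default_rng(0)
age = rng.uniform(25, 91, size=600)
bmi = 27.0 - 0.03 * (age - 25) + rng.normal(0, 2.0, size=age.size)

smoothed = lowess(bmi, age, frac=0.4)     # returns rows of (sorted age, fitted BMI)
for a, b in smoothed[::100]:
    print(f"age {a:5.1f}  fitted BMI {b:5.2f}")
```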

Keywords: Aging, masters athlete, Quetelet Index, sport

215 Towards a New Era of Sustainability in the Automotive Industry: Strategic Human Resource Management and Green Technology Innovation

Authors: Reihaneh Montazeri Shatouri, Rosmini Omar, Kunio Igusa

Abstract:

Although the automotive industry has brought many benefits to human life, it is pointed out as one of the major causes of global air pollution, resulting in climate change, smog, greenhouse gases (GHGs), and human diseases. Since the auto industry is one of the largest consumers of fossil fuels, the realization of green innovations is becoming a crucial choice to meet the challenges of sustainable development. Recently, many auto manufacturers have embarked on green technology initiatives to gain a competitive advantage in the global market; however, innovative manufacturing systems and technologies can enhance operational performance only if human resource management is in place to elicit the motivation of the employees and develop their organizational expertise. No organization can perform at peak levels unless each employee is committed to the company goals and works as an effective team member. Strategic human resource practices are the primary means by which firms can shape the skills, attitudes, and behavior of individuals to align with the business's strategic objectives. This study investigates the comprehensive approach of multiple advanced technology innovations and human resource management at Toyota Motor Corporation, the market leader in full hybrid technology in the automotive industry. The HRM framework of the company is then described, and three sets of human resource practices that support the innovation-oriented HR system are presented. Finally, a conceptual framework for innovativeness in green technology in the automotive industry, applying a deliberate strategic HR management system and knowledge management with the intervening factors of organizational culture, knowledge application and knowledge sharing, is proposed.

Keywords: Automotive Industry, Green Technology, Innovation, Strategic Human Resource Management

214 The Effects of Mobile Phones in Mitigating Cultural Shock Amongst Refugees: Case of South Africa

Authors: Sarah Vuningoma, Maria Rosa Lorini, Wallace Chigona

Abstract:

The potential of mobile phones is evident in their ability to address isolation and loneliness, support the improvement of interpersonal relations, and contribute to the facilitation of assimilation processes. Mobile phones can play a role in facilitating the integration of refugees into a new environment. This study aims to evaluate the impact of mobile phone use on helping refugees navigate the challenges posed by cultural differences in the host country. Semi-structured interviews were employed to collect data for the study, involving a sample size of 27 participants. Participants in the study were refugees based in South Africa, and thematic analysis was the chosen method for data analysis. The research highlights the numerous challenges faced by refugees in their host nation, including a lack of local cultural skills, the separation of family and friends from their countries of origin, hurdles in acquiring legal documentation, and the complexities of assimilating into the unfamiliar community. The use of mobile phones by refugees comes with several advantages, such as the advancement of language and cultural understanding, seamless integration into the host country, streamlined communication, and the exploration of diverse opportunities. Concurrently, mobile phones allow refugees in South Africa to manage the impact of culture shock.

Keywords: Mobile phones, culture shock, refugees, South Africa.

213 A Comparison of Experimental Data with Monte Carlo Calculations for Optimisation of the Source-to-Detector Distance in Determining the Efficiency of a LaBr3:Ce (5%) Detector

Authors: H. Aldousari, T. Buchacher, N. M. Spyrou

Abstract:

Cerium-doped lanthanum bromide LaBr3:Ce (5%) crystals are considered to be among the most advanced scintillator materials used in PET scanning, combining a high light yield, fast decay time and excellent energy resolution. Apart from the correct choice of scintillator, it is also important to optimise the detector geometry, not least in terms of the source-to-detector distance, in order to obtain reliable measurements and efficiency. In this study a commercially available 25 mm x 25 mm BrilLanCe™ 380 LaBr3:Ce (5%) detector was characterised in terms of its efficiency at varying source-to-detector distances. Gamma-ray spectra of 22Na, 60Co, and 137Cs were separately acquired at distances of 5, 10, 15, and 20 cm. As a result of the change in the solid angle subtended by the detector, the geometric efficiency decreased with increasing distance. High efficiencies at short distances can cause pulse pile-up when subsequent photons are detected before previously detected events have decayed. To reduce this systematic error, the source-to-detector distance should balance efficiency against pulse pile-up suppression, as otherwise pile-up corrections would be necessary at short distances. In addition to the experimental measurements, Monte Carlo simulations have been carried out for the same setup, allowing a comparison of results. The advantages and disadvantages of each approach are highlighted.
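
As background for the distance dependence described above, the standard solid-angle expression for the geometric efficiency of an on-axis point source in front of a circular detector face is given below; this is a textbook relation, not the paper's GATE model.

```latex
% Geometric efficiency for an on-axis point source at distance $d$ from a
% circular detector face of radius $a$:
\[
  \varepsilon_{\mathrm{geo}} \;=\; \frac{\Omega}{4\pi}
  \;=\; \frac{1}{2}\left(1 - \frac{d}{\sqrt{d^{2} + a^{2}}}\right),
\]
% so for a 25 mm diameter crystal ($a = 12.5$ mm), $\varepsilon_{\mathrm{geo}}$
% is roughly 0.015 at $d = 50$ mm and falls to about 0.001 at $d = 200$ mm.
```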

Keywords: BrilLanCe™ 380 LaBr3:Ce (5%), Coincidence summing, GATE simulation, Geometric efficiency
