Search results for: starting transient
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 643


73 Hash Based Block Matching for Digital Evidence Image Files from Forensic Software Tools

Authors: M. Kaya, M. Eris

Abstract:

Internet use, intelligent communication tools, and social media have become an integral part of daily life as a result of rapid developments in information technology. However, this widespread use also increases crime committed in the digital environment. Digital forensics, which deals with such crimes, has therefore become an important research topic. Its scope includes investigating digital evidence such as computers, cell phones, hard disks, DVDs, etc., and reporting whether it contains any crime-related elements. Many software and hardware tools have been developed for use in the digital evidence acquisition process. Today, the most widely used digital evidence investigation tools are based on the principle of finding all data in the evidence that match specified criteria and presenting them to the investigator (e.g. text files, files starting with the letter A, etc.). Digital forensics experts then analyze these data to determine whether they are related to a potential crime. Examination of a 1 TB hard disk may take hours or even days, and because the outcome depends on the examiner's expertise and experience, relevant items may be overlooked and results may vary between cases. In this study, a hash-based matching and digital evidence evaluation method is proposed, with the aim of automatically classifying evidence containing criminal elements, thereby shortening the digital evidence examination process and preventing human errors.
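
As an illustration of the block-matching idea described above, the sketch below hashes fixed-size blocks of an evidence image and flags the file when any block appears in a hash list of known criminal material. The block size, hash algorithm and function names are assumptions for illustration, not the authors' exact implementation.

```python
# Minimal sketch of hash-based block matching (illustrative only; block size,
# hash algorithm and file layout are assumptions, not the authors' exact method).
import hashlib

BLOCK_SIZE = 4096  # assumed block size in bytes

def block_hashes(image_path, block_size=BLOCK_SIZE):
    """Yield (offset, sha256) pairs for every block of a forensic image file."""
    with open(image_path, "rb") as f:
        offset = 0
        while True:
            block = f.read(block_size)
            if not block:
                break
            yield offset, hashlib.sha256(block).hexdigest()
            offset += len(block)

def classify_image(image_path, known_hashes):
    """Flag the image if any of its blocks matches a hash list of known material."""
    matches = [(off, h) for off, h in block_hashes(image_path) if h in known_hashes]
    return {"evidence": image_path, "matched_blocks": matches, "flagged": bool(matches)}

# Usage: known_hashes would be loaded from a reference hash list (e.g., a text file).
# result = classify_image("evidence.dd", known_hashes={"<sha256 of a known block>"})
```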

Keywords: Block matching, digital evidence, hash list.

72 Stimulating Policy for Attracting Foreign Direct Investment in Georgia

Authors: G. Erkomaishvili, M. Kobalava, T. Lazariashvili, N. Damenia

Abstract:

The current state of foreign direct investment (FDI) in Georgia is analyzed and evaluated in the paper, and the existing legislative background for regulating investments and the stimulating policies to attract them are shown. It is noted that in developing countries the encouragement, support and implementation of investment activity are among the most important tasks, implying a consistent investment policy, an investor-friendly tax regime and legal system, reduced administrative barriers and restrictions, fair competitive conditions and business development infrastructure. The work deals with the determining factors of FDI and the main directions of stimulation, as well as prospective industries where new investments are needed. Contributing and hindering factors and stimulating measures are analyzed. As a result of the research, the direct and indirect factors attracting FDI have been identified. Factors facilitating FDI inflow are: simplicity of starting a business, geopolitical location, low taxes, access to credit, ease of ownership registration, natural resources, a low burden of regulation, a low level of corruption and low crime rates. Factors hindering FDI inflow are: a small market, the lack of a policy for attracting investments, low qualification of the workforce (despite the large number of unemployed people, it is difficult to find workers with the necessary special skills and qualifications), high interest rates, instability of the national currency exchange rate, the presence of conflict zones within the country, and so forth.

Keywords: Foreign direct investment, investment attracting policies, investor, reinvestment.

71 Identifying the Barriers behind the Lack of Six Sigma Use in Libyan Manufacturing Companies

Authors: Osama Elgadi, Martin Birkett, Wai Ming Cheung

Abstract:

This paper investigates the barriers behind the underutilisation of Six Sigma in Libyan manufacturing companies (LMCs). A mixed-method methodology is proposed, starting with interviews to collect qualitative data, followed by the development of a questionnaire to obtain quantitative data. The focus of this paper is on discussing the findings of the interview stage and how these can be used to further develop the questionnaire stage. The interview results showed that only four key barriers were encountered by LMCs. These factors differed in significance and were placed in descending order of importance: "lack of top management commitment", "lack of training", "lack of knowledge about Six Sigma", and "culture effect". The findings also showed that some barriers found in previous studies of Six Sigma implementation were not considered barriers by LMCs but can, in fact, be considered success factors or enablers for Six Sigma adoption. These factors were: "sufficiency of time and financial resources"; "unsatisfied customers"; "good communication between all departments in the company"; and "certainty about its results and benefits to the company, combined with dissatisfaction with the current quality system". These results suggest that LMCs face fewer barriers to adopting Six Sigma than many well-established global companies operating in other countries and could take advantage of these success factors by developing and implementing a Six Sigma framework to improve their product quality and competitiveness.

Keywords: Six sigma, barriers, Libyan manufacturing companies, interview.

70 A Reference Framework Integrating Lean and Green Principles within Supply Chain Management

Authors: M. Bortolini, E. Ferrari, F. G. Galizia, C. Mora

Abstract:

In recent decades, a growing number of companies have adopted the lean philosophy to improve their productivity and efficiency, promoting the so-called continuous improvement concept, reducing waste of time and cutting out non-value-added activities. In parallel, increasing attention has turned toward green practice and management through the spread of the green supply chain pattern, to minimise landfilled waste, drained wastewater and pollutant emissions. Starting from a review of contributions examining lean and green principles applied to supply chain management, the most relevant drivers for measuring the performance of industrial processes are pointed out. Specific attention is paid to the role of cost, because it is of key importance and crosses both lean and green principles. This analysis leads to an original reference framework for integrating lean and green principles in designing and managing supply chains. The proposed framework supports the application of the lean-green integrated perspective to the whole value chain or to parts of it, e.g. the distribution network, assembly system, job-shop, storage system, etc. Evidence shows that the combination of lean and green practices leads to results greater than the sum of the performances obtained from their separate application. Lean thinking has beneficial effects on green practices and, at the same time, methods allowing environmental savings generate positive effects on time reduction and process quality.

Keywords: Environmental sustainability, green supply chain, integrated framework, lean thinking, supply chain management.

69 Gender Differences in Negotiation: Considering the Usual Driving Forces?

Authors: Claude Alavoine, Ferkan Kaplanseren

Abstract:

Negotiation is a specific form of interaction based on communication in which the parties enter deliberately, each with clear but different interests or goals and a mutual dependency on a decision to be taken at the end of the confrontation. Consequently, negotiation is a complex activity involving many different disciplines, from the strategic aspects and the decision-making process to the evaluation of alternatives or outcomes and the exchange of information. While gender differences can be considered one of the most researched topics within negotiation studies, empirical work and theory present much conflicting evidence about the role of gender in the process or the outcome. Furthermore, little interest has been shown in gender differences in the definition of what negotiation is, its essence or its fundamental elements. Yet, as differences exist in practice, it may be essential to study whether the starting point of these discrepancies lies in different conceptions of what negotiation is and of what will guide the participants in their strategic decisions. Some recent and promising experiments made with diverse groups show that male and female participants in a common and shared situation hardly consider in the same way the concepts of power, trust or stakes, which are largely regarded as the usual driving forces of any negotiation. Furthermore, results from human resource self-assessment tests display and confirm considerable differences between individuals regarding essential behavioral dimensions such as the capacity to improvise and to achieve, the aptitude to conciliate or to compete, and the orientation towards power and group domination, which are also part of negotiation skills. Our intention in this paper is to confront these dimensions with negotiation's usual driving forces in order to open new paths for further research.

Keywords: Gender, negotiation, personality, power, stakes, trust.

68 Air Dispersion Model for Prediction Fugitive Landfill Gaseous Emission Impact in Ambient Atmosphere

Authors: Moustafa Osman Mohammed

Abstract:

This paper explores the formation of HCl aerosol in the atmospheric boundary layer and encourages the uptake of environmental modeling systems (EMSs) as a practical way of evaluating gaseous emissions ("framework measures") from small and medium-sized enterprises (SMEs). The conceptual model predicts greenhouse gas emissions at ecological points beyond the landfill site operations. It focuses on incorporating traditional knowledge into baseline information for both the measurement data and the mathematical results, with regard to the parameters that influence the model's input variables. The paper simplifies the parameters of aerosol processes based on more complex aerosol process computations. The simplified model can be implemented in both Gaussian and Eulerian rural dispersion models. The aerosol processes considered in this study were (i) the coagulation of particles, (ii) the condensation and evaporation of organic vapors, and (iii) dry deposition. The chemical transformation of gas-phase compounds is taken into account through a photochemical formulation, with exposure effects based on HCl concentrations as the starting point of the risk assessment. The discussion sets out distinct aspects of sustainability in terms of inputs, outputs, and modes of impact on the environment. The models thereby incorporate abiotic and biotic species to broaden the scope of integration for both impact quantification and risk assessment. These environmental obligations ultimately suggest either a recommendation or a decision on what legislation should achieve regarding mitigation measures for landfill gas (LFG).
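
As an illustration of the Gaussian rural dispersion component mentioned above, the sketch below evaluates the standard Gaussian plume equation for a single point source. The power-law dispersion coefficients and all parameter values are placeholder assumptions, not the parameterisation used in the paper.

```python
# Minimal sketch of a Gaussian plume estimate for a point source (e.g., an LFG vent).
# The power-law dispersion coefficients below are illustrative placeholders.
import math

def gaussian_plume(Q, u, x, y, z, H, a=0.08, b=0.90, c=0.06, d=0.92):
    """Concentration [g/m^3] at (x, y, z) downwind of a continuous point source.

    Q : emission rate [g/s], u : wind speed [m/s], H : effective release height [m].
    sigma_y, sigma_z are simple power-law fits sigma = coeff * x**exp (assumed values).
    """
    sigma_y = a * x ** b
    sigma_z = c * x ** d
    lateral = math.exp(-y**2 / (2 * sigma_y**2))
    vertical = (math.exp(-(z - H)**2 / (2 * sigma_z**2))
                + math.exp(-(z + H)**2 / (2 * sigma_z**2)))  # ground reflection term
    return Q / (2 * math.pi * u * sigma_y * sigma_z) * lateral * vertical

# Example: 1 g/s source, 3 m/s wind, receptor 500 m downwind on the centreline at ground level.
print(gaussian_plume(Q=1.0, u=3.0, x=500.0, y=0.0, z=0.0, H=10.0))
```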

Keywords: Air dispersion model, landfill management, spatial analysis, environmental impact and risk assessment.

67 Geosynthetic Reinforced Unpaved Road: Literature Study and Design Example

Authors: D. Jayalakshmi, S. Bhosale

Abstract:

This paper, in its first part, presents the state-of-the-art literature on design approaches for geosynthetic reinforced unpaved roads. The literature starting from 1970 and the critical appraisal of flexible pavement design by Giroud and Han (2004) and Jonathan Fannin (2006) are presented. A design example is then illustrated for Indian conditions. The example compares the results computed by Giroud and Han's (2004) design method with the Indian Roads Congress guidelines IRC SP 72-2015. The input data considered relate to the subgrade soil condition of Maharashtra State in India. The unified soil classification of the subgrade soil is inorganic clay with high plasticity (CH), which is expansive, with a California bearing ratio (CBR) of 2% to 3%. The example covers the unreinforced case and geotextile reinforcement, varying the rut depth from 25 mm to 100 mm. The present results reveal that the base thickness for the unreinforced case from the IRC design catalogues is in good agreement with the Giroud and Han (2004) approach for rut depths in the range of 75 mm to 100 mm. Since the Giroud and Han (2004) method is applicable to both reinforced and unreinforced cases, for the same data, the same rut depth and the appropriate Nc factor, the base thickness for the reinforced case has been obtained for Indian conditions. From this trial, for a CBR of 2%, the base thickness reduction due to geotextile inclusion is 35%. For the CBR range of 2% to 5% with different geosynthetic stiffnesses, the reduction in base course thickness will be evaluated, and the validation will be carried out with the full-scale accelerated pavement testing setup at the College of Engineering Pune (COE), India.

Keywords: Base thickness, design approach, equation, full scale accelerated pavement set up, Indian condition.

66 Forecasting Stock Price Manipulation in Capital Market

Authors: F. Rahnamay Roodposhti, M. Falah Shams, H. Kordlouie

Abstract:

The aim of this article is to extend and develop econometric and network-structure-based methods able to detect price manipulation on the Tehran Stock Exchange. The principal goal of the present study is to offer a model for approximating price manipulation on the Tehran Stock Exchange. To do so, a sample of 397 companies listed on the Tehran Stock Exchange was selected by applying a separation method, and information related to their prices and trading volumes during the years 2001 to 2009 was collected. Through runs tests, skewness tests and duration correlation tests, the selected companies were then divided into two sets: manipulated and non-manipulated companies. In the next stage, by investigating the cumulative return process and the trading volume of the manipulated companies, the date on which price manipulation started was specified. Using a logit model, an artificial neural network and multiple discriminant analysis, together with information on company size, information clarity, P/E ratio and stock liquidity one year prior to the price manipulation, models for forecasting price manipulation of stocks of companies listed on the Tehran Stock Exchange were designed. Finally, the forecasting power of the models was studied using the test set data. The forecasting power on the test set was 92.1% for the logit model, 94.1% for the artificial neural network and 90.2% for the multiple discriminant analysis model; therefore, all three models have high power to forecast price manipulation, and there is no considerable difference among their forecasting powers.
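
The sketch below illustrates the logit forecasting step with the four predictors named in the abstract (company size, information clarity, P/E ratio and stock liquidity), using scikit-learn; the data are synthetic placeholders, not the Tehran Stock Exchange sample.

```python
# Hedged sketch of the logit forecasting step: a logistic regression on the four
# predictors named in the abstract. The data below are synthetic placeholders.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 397
X = np.column_stack([
    rng.normal(size=n),   # company size (standardised)
    rng.normal(size=n),   # information clarity proxy
    rng.normal(size=n),   # P/E ratio (standardised)
    rng.normal(size=n),   # stock liquidity
])
y = rng.integers(0, 2, size=n)  # 1 = manipulated, 0 = not manipulated (placeholder labels)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)
model = LogisticRegression().fit(X_train, y_train)
print("forecasting accuracy on the test set:", model.score(X_test, y_test))
```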

Keywords: Price manipulation, liquidity, size of company, floating stock, information clarity.

65 Exploring Influence Range of Tainan City Using Electronic Toll Collection Big Data

Authors: Chen Chou, Feng-Tyan Lin

Abstract:

Big Data has attracted a lot of attention in many fields for analyzing research issues based on large volumes of data. Electronic Toll Collection (ETC) is one of the Intelligent Transportation System (ITS) applications in Taiwan, used to record the starting point, end point, distance and travel time of vehicles on the national freeway. This study, taking advantage of ETC big data combined with urban planning theory, attempts to explore various phenomena of inter-city transportation activities. ETC data, part of the government's open data, are voluminous, complete and frequently updated. One may recall that living areas have traditionally been delimited by location, population, area and subjective consciousness. However, these factors cannot appropriately reflect people's actual movement paths in daily life. In this study, the concept of "living area" is replaced by "influence range" to capture its dynamics and variation with time and with the purposes of activities. This study uses data mining with Python and Excel, and visualizes the number of trips with GIS, to explore the influence range of Tainan City and the purposes of trips, and to discuss how living areas are currently delimited. It creates a dialogue between the concepts of "central place theory" and "living area", presents a new point of view, and integrates the application of big data, urban planning and transportation. The findings will be valuable for resource allocation and land apportionment in spatial planning.
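
The sketch below illustrates the kind of data-mining step described above: aggregating ETC trip records into origin-destination counts that can then be mapped in GIS. The column names and sample rows are assumptions, not the actual ETC schema.

```python
# Hedged sketch of the data-mining step: aggregating ETC records into
# origin-destination trip counts. Column names and rows are assumed placeholders.
import pandas as pd

# Each row: one vehicle trip reconstructed from gantry records.
etc = pd.DataFrame({
    "origin":          ["Tainan", "Tainan", "Kaohsiung", "Chiayi"],
    "destination":     ["Kaohsiung", "Chiayi", "Tainan", "Tainan"],
    "travel_time_min": [35, 42, 37, 44],
})

# Trip counts and mean travel time per origin-destination pair,
# which can then be joined to polygons and visualized in GIS.
od_matrix = (etc.groupby(["origin", "destination"])
                .agg(trips=("travel_time_min", "size"),
                     mean_time=("travel_time_min", "mean"))
                .reset_index())
print(od_matrix)
```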

Keywords: Big Data, ITS, influence range, living area, central place theory, visualization.

64 Efficiency of Robust Heuristic Gradient Based Enumerative and Tunneling Algorithms for Constrained Integer Programming Problems

Authors: Vijaya K. Srivastava, Davide Spinello

Abstract:

This paper presents the performance of two robust gradient-based heuristic optimization procedures, based on 3^n enumeration and on a tunneling approach, for seeking the global optimum of constrained integer problems. Both procedures consist of two distinct phases for locating the global optimum of integer problems with a linear or non-linear objective function subject to linear or non-linear constraints. In both procedures, the first phase finds a local minimum of the function using the gradient approach, coupled with hemstitching moves when a constraint is violated, in order to return the search to the feasible region. In the second phase, the first procedure examines the 3^n integer combinations on the boundary of and within the hypercube volume encompassing the result of the first phase, while the second procedure constructs a tunneling function at the local minimum of the first phase so as to find another point on the other side of the barrier where the function value is approximately the same. In the next cycle, the search for the global optimum recommences in both procedures using this newly found point as the starting vector. The search is repeated for various step sizes along the function gradient, as well as along the vector normal to the violated constraints, until no improvement in the optimum value is found. The results from both proposed optimization methods are presented and compared with those provided by the popular MS Excel Solver included in the MS Office suite and with other published results.
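
The sketch below illustrates the enumeration phase: around the rounded local minimum from the first phase, each integer variable is perturbed by -1, 0 or +1 and all 3^n combinations are checked for feasibility. The small example problem is illustrative, not taken from the paper.

```python
# Hedged sketch of the 3^n enumeration phase: every integer variable around the
# rounded phase-one result is perturbed by -1, 0, +1 and all combinations are
# checked against the constraints. The problem functions here are illustrative.
import itertools

def enumerate_3n(x_local, objective, feasible):
    """Return the best feasible integer point in the 3^n hypercube around x_local."""
    base = [round(v) for v in x_local]
    best_x, best_f = None, float("inf")
    for delta in itertools.product((-1, 0, 1), repeat=len(base)):
        x = [b + d for b, d in zip(base, delta)]
        if feasible(x):
            f = objective(x)
            if f < best_f:
                best_x, best_f = x, f
    return best_x, best_f

# Example: minimise (x-2.3)^2 + (y-4.7)^2 subject to x + y <= 7.
obj = lambda x: (x[0] - 2.3) ** 2 + (x[1] - 4.7) ** 2
feas = lambda x: x[0] + x[1] <= 7
print(enumerate_3n([2.3, 4.7], obj, feas))
```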

Keywords: Constrained integer problems, enumerative search algorithm, Heuristic algorithm, tunneling algorithm.

63 Self-Tuning Power System Stabilizer Based on Recursive Least Square Identification and Linear Quadratic Regulator

Authors: J. Ritonja

Abstract:

Available commercial applications of power system stabilizers assure optimal damping of a synchronous generator's oscillations only in a small part of the operating range. The parameters of the power system stabilizer are usually tuned for a selected operating point, and extensive variations of the synchronous generator's operation result in changed dynamic characteristics. This is the reason why a power system stabilizer tuned for the nominal operating point does not provide the preferred damping over the whole operating area. The small-signal stability and the transient stability of synchronous generators have long represented an attractive problem for testing different concepts of modern control theory. Of all the methods, adaptive control has proved to be the most suitable for the design of power system stabilizers, and it has been used in order to assure optimal damping through the entire operating range of the synchronous generator. The use of adaptive control is possible because the loading variations, and consequently the variations of the synchronous generator's dynamic characteristics, are in most cases essentially slower than the adaptation mechanism. The paper shows the development and application of a self-tuning power system stabilizer based on the recursive least squares identification method and a linear quadratic regulator. The identification method is used to calculate the parameters of the Heffron-Phillips model of the synchronous generator. On the basis of the calculated parameters of the synchronous generator's mathematical model, the synthesis of the linear quadratic regulator is carried out. The identification and the synthesis are implemented on-line; in this way, the self-tuning power system stabilizer adapts to different operating conditions. A purpose of this paper is to contribute to the development of more effective power system stabilizers, which would replace the currently used linear stabilizers. The presented self-tuning power system stabilizer makes the tuning of the controller parameters easier and assures damping improvement over the complete operating range. The results of simulations and experiments show essential improvement of the synchronous generator's damping and of power system stability.
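
The sketch below illustrates a recursive least-squares update with a forgetting factor, the kind of on-line identification step described above, applied to a toy first-order model; the regressor construction and parameter values are assumptions for illustration.

```python
# Hedged sketch of recursive least-squares (RLS) identification with a forgetting
# factor, as used to track slowly varying model parameters on-line. The regressor
# construction and the forgetting factor below are illustrative assumptions.
import numpy as np

def rls_step(theta, P, phi, y, lam=0.98):
    """One RLS update: theta = parameter estimate, P = covariance, phi = regressor, y = output."""
    phi = phi.reshape(-1, 1)
    k = P @ phi / (lam + phi.T @ P @ phi)        # gain vector
    err = y - (phi.T @ theta).item()             # prediction error
    theta = theta + k * err                      # parameter update
    P = (P - k @ phi.T @ P) / lam                # covariance update
    return theta, P

# Example: identify a first-order ARX model y(t) = a*y(t-1) + b*u(t-1) on-line.
theta = np.zeros((2, 1)); P = 1e3 * np.eye(2)
y_prev, u_prev = 0.0, 0.0
for t in range(200):
    u = np.sin(0.1 * t)
    y = 0.9 * y_prev + 0.5 * u_prev              # "true" plant, unknown to the estimator
    theta, P = rls_step(theta, P, np.array([y_prev, u_prev]), y)
    y_prev, u_prev = y, u
print("estimated [a, b]:", theta.ravel())
```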

Keywords: Adaptive control, linear quadratic regulator, power system stabilizer, recursive least square identification.

62 A Zero-Cost Collar Option Applied to Materials Procurement Contracts to Reduce Price Fluctuation Risks in Construction

Authors: H. L. Yim, S. H. Lee, S. K. Yoo, J. J. Kim

Abstract:

This study proposes a materials procurement contract model to which the zero-cost collar option is applied for hedging price fluctuation risks in construction. The material contract model is based on a collar option that consists of the call option striking zone of the construction company (the buyer), following a materials price increase, and the put option striking zone of the material vendor (the supplier), following a materials price decrease. This study first determines the call option strike price Xc of the construction company by a simple approach: it uses the predicted profit at the project starting point and then determines the strike price Xp of the put option that has an identical option value, which completes the zero-cost material contract. The analysis results indicate that the cost saving of the construction company increased as Xc decreased, because the critical level of the steel materials price increase was set at a low level. However, as Xc decreased, the Xp of a put option with an identical option value gradually increased. Cost saving increased as Xc decreased; however, as Xp gradually increased, the construction company's risk of loss increased when the steel materials price decreased. Meanwhile, the construction company's cost saving was not affected by volatility. This result originates in the zero-cost feature of the two-way collar option contract. In the case of a regular one-way option, the transaction cost has to be subtracted from the cost saving; the transaction cost originates from an option value that fluctuates with the volatility, that is, the cost saving of the one-way option is affected by the volatility. Meanwhile, even though the collar option with zero transaction cost cuts the connection between volatility and cost saving, there remains a risk of the put option being exercised.
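
The sketch below illustrates how the zero-cost condition can be completed numerically: given the call strike Xc, a put strike Xp with the same option value is found by bisection. Black-Scholes pricing and all parameter values are illustrative assumptions, not the paper's pricing model.

```python
# Hedged sketch of completing a zero-cost collar: given the call strike Xc, find the
# put strike Xp with the same option value so that the net premium is zero.
# Black-Scholes pricing and all parameter values are illustrative assumptions.
from math import log, sqrt, exp, erf

def _N(x):  # standard normal CDF
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def bs_price(S, K, r, sigma, T, kind):
    d1 = (log(S / K) + (r + 0.5 * sigma**2) * T) / (sigma * sqrt(T))
    d2 = d1 - sigma * sqrt(T)
    if kind == "call":
        return S * _N(d1) - K * exp(-r * T) * _N(d2)
    return K * exp(-r * T) * _N(-d2) - S * _N(-d1)

def zero_cost_put_strike(S, Xc, r, sigma, T, lo=1e-6, hi=None, tol=1e-8):
    """Bisection for Xp such that put(Xp) has the same value as call(Xc)."""
    hi = hi or S
    target = bs_price(S, Xc, r, sigma, T, "call")
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if bs_price(S, mid, r, sigma, T, "put") < target:
            lo = mid       # put value too small -> need a higher strike
        else:
            hi = mid
        if hi - lo < tol:
            break
    return 0.5 * (lo + hi)

# Example: steel price 100, call strike Xc = 110, 1-year contract.
Xp = zero_cost_put_strike(S=100.0, Xc=110.0, r=0.03, sigma=0.25, T=1.0)
print("zero-cost put strike Xp ~", round(Xp, 2))
```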

Keywords: Construction materials, supply chain management, procurement, payment, collar option.

61 Efficient Compact Micro DBD Plasma Reactor for Ozone Generation for Industrial Application in Liquid and Gas Phase Systems

Authors: Kuvshinov, D., Siswanto, A., Lozano-Parada, J., Zimmerman, W. B.

Abstract:

Ozone is well known as a powerful, fast-reacting oxidant. Ozone-based processes produce no residual by-products, as non-reacted ozone decomposes to molecular oxygen. Therefore, the application of ozone is widely accepted as one of the main approaches to the development of sustainable and clean technologies.

There are a number of technologies which require ozone to be delivered to specific points of a production network or reactor construction. Due to space constraints and the high reactivity and short lifetime of ozone, the use of ozone generators, even at bench-top scale, is practically limited. This calls for the development of a mini/micro-scale ozone generator which can be directly incorporated into production units.

Our report presents a feasibility study of a new micro-scale reactor for ozone generation (MROG). Data on MROG calibration and on indigo decomposition at different operating conditions are presented.

At the selected operating conditions, with a residence time of 0.25 s, the process of ozone generation is not limited by the reaction rate, and the amount of ozone produced is a function of the power applied. It was shown that the MROG is capable of producing ozone at voltage levels starting from 3.5 kV, with an ozone concentration of 5.28×10⁻⁶ mol/L at 5 kV. This is in line with the data presented in a numerical investigation of an MROG. It was shown that, in comparison to a conventional ozone generator, the MROG has lower power consumption at low voltages and atmospheric pressure.

The MROG construction makes it applicable for both submerged and dry systems. With a robust compact design MROG can be used as an integrated module for production lines of high complexity.

Keywords: DBD, micro reactor, ozone, plasma.

60 Lighting Consumption Analysis in Retail Industry: Comparative Study

Authors: Elena C. Tamaş, Grațiela M. Țârlea, Gianni Flamaropol, Dragoș Hera

Abstract:

This article presents a comparative study of the electrical energy consumption for lighting in various types of large commercial buildings built in Romania after 2007, having 3 to 5 versus 8 to 10 operational years. Some buildings have building management systems (BMS) installed to monitor the lighting performance from the opening day to the present, while others have implemented only local meters. First, for each analyzed building, the total required electrical power and the energy consumption for lighting were calculated from the number of lamps, their unit power and the average daily running hours. All objects and installations were grouped according to the destination/location of the lighting (exterior parking or access, interior or covered parking, building interior and building perimeter). Second, mechanical counters were installed on all lighting objects and installations, and digital meters were also installed on the ones linked to the BMS for better monitoring. Some efficient solutions are proposed to reduce the power consumption, for example operating only one third of the covered and exterior parking lighting in those buildings where this is possible; this kind of lighting share can be applied on each level, especially during night shifts. Another example is to use dimmers to reduce the light level, depending on the work performed in the respective area, by which a 30% energy saving can be achieved. Using the right BMS to monitor the energy consumption as a function of the average daily operating hours, and replacing inefficient luminaires with LED or other economical ones, can significantly increase the energy performance and reduce the energy consumption of the buildings.
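
A minimal sketch of the installed-power and consumption calculation described above (number of lamps x unit power x average daily running hours) is given below; all quantities are placeholder values.

```python
# Minimal sketch of the lighting-consumption calculation described above
# (lamp count x unit power x average daily running hours). Values are placeholders.
installations = [
    # (location, number of lamps, unit power [W], average running hours per day)
    ("exterior parking",   120,   70, 12),
    ("covered parking",    300,   36, 24),
    ("building interior", 1500,   28, 14),
    ("building perimeter",  80,   50, 11),
]

for location, n_lamps, unit_power_w, hours_per_day in installations:
    installed_kw = n_lamps * unit_power_w / 1000.0
    annual_kwh = installed_kw * hours_per_day * 365
    print(f"{location:20s} installed: {installed_kw:7.1f} kW  annual: {annual_kwh:10.0f} kWh")

# A 1/3 operating share for parking lighting during night shifts would scale the
# corresponding hours by 1/3; a 30% dimming saving scales the energy by 0.7.
```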

Keywords: Lighting consumption, commercial buildings, maintenance, energy performances.

59 Improved Dynamic Bayesian Networks Applied to Arabic on Line Characters Recognition

Authors: Redouane Tlemsani, Abdelkader Benyettou

Abstract:

This work is on online Arabic character recognition, and the principal motivation is to study Arabic manuscripts with online technology.

The system is a Markovian system, which can be seen as a Dynamic Bayesian Network (DBN). One of the major interests of these systems lies in the complete training of the models (topology and parameters) starting from training data.

Our approach is based on the dynamic Bayesian network formalism. DBN theory is a generalization of Bayesian networks to dynamic processes. Among our objectives is finding better parameters, which represent the links (dependences) between the dynamic network variables.

In pattern recognition applications, the structure is usually fixed, which obliges us to admit some strong assumptions (for example, independence between some variables). Our application concerns the online recognition of isolated Arabic characters using our laboratory database, NOUN. A neural tester is proposed for external optimization of the DBN.

The DBN score and mixed DBN recognition rates are respectively 70.24% and 62.50%, which suggests room for further development; other approaches taking time into account were considered and implemented until a significant recognition rate of 94.79% was obtained.

Keywords: Arabic on line character recognition, dynamic Bayesian network, pattern recognition.

58 The Study on the Conversed Remediation between Old and New Media in Case of Smart Phone and PC in South Korea

Authors: Jinhwan Yu, Jooyeon Yook

Abstract:

After Apple introduced its smartphone, the iPhone, in Korea at the end of 2009, the number of Korean smartphone users increased so rapidly that half of the Korean population had become smartphone users by February 2012. Currently, smartphones are positioned as a major digital medium with powerful influence in Korea, and Koreans now learn new information, play games and communicate with other people anytime and anywhere. As smartphone performance increased, the number of usable services grew, and adequate GUI development was required to implement the various functions on smartphones. The strategy of providing similar experiences on smartphones through familiar features, based on the functions of existing media, contributed greatly to the popularization of smartphones, together with the iconic GUIs of smartphone devices. The spread of smartphones increased mobile web access; therefore, attempts to implement the PC web within the smartphone web are continuously being made. The mobile web GUI provides familiar experiences to users through designs that make appropriate use of the smartphone's GUIs. As the number of users familiar with smartphones and mobile web GUIs grows, and in the direction opposite to the original remediation that borrowed many parts of the PC, PCs are now starting to adopt smartphone GUIs. This study defines this phenomenon as reversed remediation and reviews cases in which smartphone GUI characteristics are reversely remediated in PCs. For this purpose, the established research questions are as follows: what is reversed remediation? What are the characteristics of smartphone GUIs? What kind of interrelationship exists between the smartphone and the PC web site? Understanding the characteristics of the paradigm changes in PC and smartphone GUI design is meaningful for forecasting future GUI changes, and it will also be helpful in establishing strategies for digital device development and design.

Keywords: Graphic User Interface, Remediation, Smart Phone, South Korea, Web Site

57 Loading and Unloading Scheduling Problem in a Multiple-Multiple Logistics Network: Modeling and Solving

Authors: Yasin Tadayonrad, Alassane Ballé Ndiaye

Abstract:

Most supply chain networks have many nodes, from the suppliers' side up to the customers' side, in which each node sends/receives raw materials/products to/from other nodes. One of the major concerns in this kind of supply chain network is finding the best schedule for loading/unloading the shipments through the whole network, by which all the constraints in the source and destination nodes are met and all the shipments are delivered on time. One of the main constraints in this problem is the loading/unloading capacity of each source/destination node at each time slot (e.g., per week/day/hour). Because of the different characteristics of different products or product groups, the capacity of each node might differ per product group. In most supply chain networks (especially in the fast-moving consumer goods (FMCG) industry), different planners/planning teams work separately in different nodes to determine the loading/unloading timeslots in the source/destination nodes to send/receive the shipments. In this paper, a mathematical model is proposed to find the best timeslots for loading/unloading the shipments, minimizing the overall delays subject to the loading/unloading capacity of each node, the required delivery date of each shipment (considering the lead times), and the working days of each node. The model was implemented in Python and solved using Python-MIP on a sample data set. Finally, the idea of a heuristic algorithm is proposed as a way of improving the solution method, which would help to apply the model to larger data sets in real business cases, including more nodes and shipments.
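
The sketch below is a much-simplified Python-MIP version of the idea: each shipment is assigned to one loading timeslot at its source node, minimising total delay subject to a per-node, per-slot capacity. The data and the reduction to a single loading decision are illustrative assumptions, not the paper's full formulation.

```python
# Hedged, much-simplified sketch of the timeslot assignment model in Python-MIP.
from mip import Model, xsum, minimize, BINARY

shipments = {          # shipment: (source node, due timeslot)
    "s1": ("plantA", 2), "s2": ("plantA", 1), "s3": ("plantB", 3),
}
slots = range(1, 6)    # planning horizon of 5 timeslots
capacity = {"plantA": 1, "plantB": 2}   # shipments loadable per node per slot

m = Model()
x = {(s, t): m.add_var(var_type=BINARY) for s in shipments for t in slots}

# Each shipment is loaded exactly once.
for s in shipments:
    m += xsum(x[s, t] for t in slots) == 1

# Loading capacity of each node in each timeslot.
for node, cap in capacity.items():
    for t in slots:
        m += xsum(x[s, t] for s, (src, _) in shipments.items() if src == node) <= cap

# Minimise total delay (number of slots loaded after the due timeslot).
m.objective = minimize(xsum(max(0, t - due) * x[s, t]
                            for s, (_, due) in shipments.items() for t in slots))
m.optimize()
schedule = {s: t for (s, t), var in x.items() if var.x and var.x >= 0.99}
print(schedule)
```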

Keywords: Supply chain management, transportation, multiple-multiple network, timeslots management, mathematical modeling, mixed integer programming.

56 Philosophy, Geometry, and Purpose in Islamic and Gothic Architecture as Two Religious-Based Styles

Authors: P. Nafisi Poor, P. Javid

Abstract:

Religion and divinity have always held important meaning for humans, and therefore they affect different aspects of life, including art and architecture. Numerous works of art are related to religion, whether supporting or denying it, and religion and religious scholars have influenced and changed art throughout history. This paper focuses on Islam and Christianity because these two religions have been the most discussed and most popular of all time, from the birth of Jesus to the arrival of Mohammad, and because of this popularity they have influenced the arts and especially architecture. Islam, on the one hand, changed Iranian and Arabian architecture, which was then applied in different places around the world. From the appearance of Islam in 622 AD to this day, Islamic architecture has been evolving; however, one of the most important periods for this style was between 1501 AD and 1736 AD in Iran. Christianity, on the other hand, changed European architecture, especially between 1150 AD and 1450 AD, the so-called Gothic era, which begins in medieval times and reaches its peak in the International Gothic age. In both of these periods, designing buildings based on spiritual concepts and divine statements reached its peak, and architects placed God and religion at the center of their attention. This article studies Islam and Christianity in terms of architecture and presents a general philosophy of both styles to comprehend the idea behind each one, followed by an analysis of their geometry and architectural aspects derived from the best examples, all in order to understand the purpose of each style and to determine which one was more successful in reaching its purpose. Subsequently, a comprehensive review of each building is provided, including 3D visualizations, to help achieve the goal of the article. These studies can support diverse inquiries into both Islamic and Gothic architecture and can be used as a resource for studies and research on designing based on religion or for divine purposes.

Keywords: Architecture, gothic, Islamic, religion.

55 Analysis of Noise Level Effects on Signal-Averaged Electrocardiograms

Authors: Chun-Cheng Lin

Abstract:

Noise level has critical effects on the diagnostic performance of the signal-averaged electrocardiogram (SAECG), because the true starting and end points of the QRS complex can be masked by the residual noise and are sensitive to the noise level. Several studies and commercial machines have used a fixed number of heart beats (typically between 200 and 600 beats) or set a predefined noise level (typically between 0.3 and 1.0 μV) in each X, Y and Z lead to perform SAECG analysis. However, the different criteria or methods used to perform SAECG cause discrepancies in the noise levels among study subjects. According to the recommendations of the 1991 ESC, AHA and ACC Task Force Consensus Document for the use of SAECG, the determination of onset and offset is closely related to the mean and standard deviation of a noise sample. Hence, this study performs SAECG using consistent root-mean-square (RMS) noise levels among study subjects and analyzes the noise level effects on SAECG. This study also evaluates the differences between normal subjects and chronic renal failure (CRF) patients in the time-domain SAECG parameters. The study subjects were composed of 50 normal Taiwanese subjects and 20 CRF patients. During signal-averaged processing, different RMS noise levels were applied to evaluate their effects on three time-domain parameters: (1) filtered total QRS duration (fQRSD), (2) RMS voltage of the last 40 ms of the QRS (RMS40), and (3) duration of the low-amplitude signals below 40 μV (LAS40). The results demonstrated that reducing the RMS noise level increases fQRSD and LAS40 and decreases RMS40, and can further increase the differences in fQRSD and RMS40 between normal subjects and CRF patients. The SAECG may also become abnormal due to the reduction of the RMS noise level. In conclusion, it is essential to establish diagnostic criteria for SAECG using consistent RMS noise levels in order to reduce the noise level effects.
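
The sketch below illustrates two of the quantities discussed above, the RMS value of a noise sample and RMS40, computed on a synthetic filtered lead; the sampling rate and window placement are assumptions.

```python
# Hedged sketch of two quantities discussed above: the RMS of a noise sample and
# RMS40 (RMS voltage of the terminal 40 ms of the filtered QRS). Sampling rate and
# window placement are assumptions for illustration.
import numpy as np

FS = 1000  # sampling rate [Hz], assumed

def rms(x):
    return float(np.sqrt(np.mean(np.asarray(x, dtype=float) ** 2)))

def rms40(filtered_lead_uv, qrs_offset_index, fs=FS):
    """RMS voltage [uV] of the terminal 40 ms of the filtered QRS complex."""
    n40 = int(0.040 * fs)
    return rms(filtered_lead_uv[qrs_offset_index - n40:qrs_offset_index])

# Example with synthetic data: ~0.5 uV baseline noise and a decaying terminal QRS.
rng = np.random.default_rng(1)
lead = 0.5 * rng.standard_normal(600)         # baseline noise, ~0.5 uV RMS
lead[260:300] += np.linspace(80, 5, 40)       # terminal 40 ms of a QRS ending at index 300
print("noise RMS [uV]:", round(rms(lead[400:]), 2))
print("RMS40     [uV]:", round(rms40(lead, qrs_offset_index=300), 2))
```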

Keywords: Signal-averaged electrocardiogram, ventricular late potentials, chronic renal failure, noise level effects.

54 Blueprinting of a Normalized Supply Chain Processes: Results in Implementing Normalized Software Systems

Authors: Bassam Istanbouli

Abstract:

With technology evolving every day and with the increase in global competition, industries are always under pressure to be the best. They need to provide good quality products at competitive prices, when and how the customer wants them. In order to achieve this level of service, products and their respective supply chain processes need to be flexible and evolvable; otherwise, changes will be extremely expensive and slow and will have many combinatorial effects. Those combinatorial effects impact the whole organizational structure from a management, financial, documentation and logistics perspective, and especially from the perspective of the Enterprise Resource Planning (ERP) information system. By applying the normalized systems concept/theory to segments of the supply chain, we believe these effects can be kept minimal, especially at the time of launching an organization-wide global software project. The purpose of this paper is to point out that if an organization wants to develop software from scratch or implement an existing ERP software for its business needs, and if its business processes are normalized and modular, then this will most probably yield a normalized and modular software system that can be easily modified when the business evolves. Another important goal of this paper is to increase awareness regarding the design of the business processes in a software implementation project: if the blueprints created are normalized, then the software developers and configurators will use those modular blueprints and map them into modular software. This paper only prepares the ground for further studies; the above concept will be supported by going through the steps of developing, configuring and/or implementing a software system for an organization using two methods: the Software Development Life Cycle (SDLC) method and the Accelerated SAP (ASAP) implementation method. Both methods start with the customer requirements, continue with the blueprinting of the business processes, and finally map those processes into a software system. Since those requirements and processes are the starting point of the implementation process, normalizing those processes will result in normalized software.

Keywords: Blueprint, ERP, SDLC, Modular.

53 Exploration of Influential Factors on First Year Architecture Students’ Productivity

Authors: Shima Nikanjam, Badiossadat Hassanpour, Adi Irfan Che Ani

Abstract:

The design process in architecture education is based on the learning-by-doing method, which leads students to understand how to design by practicing rather than studying. First-year design studios, as the starting educational stage, provide integrated design knowledge and skills to newly joined architecture students. Within the basic design studio environment, students are guided, for the first time, to transfer their abstract thoughts into concrete visual decisions under the supervision of design educators. Therefore, introductory design studios have a predominant impact on students' operational thinking and designing. Architectural design thinking is quite different from students' educational backgrounds and learning habits, and this educational challenge in basic design studios creates a pressing need to study the reality of design education in the foundation year and to define appropriate educational methods and suitable project types, with the intention of enhancing the quality of architecture education. The material for this study was gathered through long-term direct observation of a first-year, second-semester design studio at the Faculty of Architecture at EMU (known as FARC 102) during the fall and spring semesters of the 2014-15 academic year. Other methodologies used in this research are the distribution of a questionnaire among the case study students, interviews with third- and fourth-year design studio students who passed through the same educational methods in the past two years, and interviews with instructors. The results of this study reveal a risk of a mismatch between the implemented teaching method, project type and scale at this particular level and the students' learning styles. Although the existence of such a risk could be expected to some extent because of the variety of student profiles, the recommendations can help educators reach maximum compatibility.

Keywords: Architecture education, basic design studio, educational method, forms creation skill.

52 Computer Aided Design Solution Based on Genetic Algorithms for FMEA and Control Plan in Automotive Industry

Authors: Nadia Belu, Laurentiu M. Ionescu, Agnieszka Misztal

Abstract:

In this paper we propose a computer-aided solution based on genetic algorithms in order to reduce the effort of drafting the FMEA analysis and Control Plan reports required for product launch in manufacturing, and to improve knowledge development in the teams for future projects. The solution allows the design team to enter the data required for FMEA. The actual analysis is performed using genetic algorithms to find the optimum between the RPN risk factor and the cost of production. A feature of genetic algorithms is that they can be used to find solutions for multi-criteria optimization problems; in our case, the three specific FMEA risk factors are considered together with the reduction of the production cost. The analysis tool generates final reports for all FMEA processes, and the data obtained in the FMEA reports are automatically integrated with the other entered parameters in the Control Plan. The solution is implemented as an application running on an intranet on two servers: one containing the analysis and plan generation engine, and the other containing the database where the initial parameters and results are stored. The results can then be used as starting solutions in the synthesis of other projects. The solution was applied to the welding, laser cutting and bending processes used to manufacture bus chassis. The advantages of the solution are the efficient elaboration of documents in the current project, by automatically generating the FMEA and Control Plan reports using multi-criteria optimization of production, and the building of a solid knowledge base for future projects. The solution we propose is a cheap alternative to other solutions on the market, as it uses open source tools in its implementation.
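
The toy sketch below illustrates the optimisation idea: a genetic algorithm selects one mitigation action per failure mode, trading off total RPN against added production cost through a weighted fitness. The encoding, weights and data are placeholders, not the authors' tool.

```python
# Hedged toy sketch of the optimisation idea: a genetic algorithm chooses, for each
# failure mode, one of several mitigation actions, trading off total RPN against
# added production cost through a weighted fitness. All data are placeholders.
import random

random.seed(0)
# For each failure mode: list of (resulting RPN, added cost) per candidate action.
options = [
    [(320, 0), (160, 40), (80, 120)],
    [(245, 0), (125, 30), (60, 90)],
    [(180, 0), (90, 25),  (45, 70)],
]
W_RPN, W_COST = 1.0, 2.0   # assumed weights of the two criteria

def fitness(ind):
    rpn = sum(options[i][g][0] for i, g in enumerate(ind))
    cost = sum(options[i][g][1] for i, g in enumerate(ind))
    return W_RPN * rpn + W_COST * cost          # lower is better

def evolve(pop_size=20, generations=50, p_mut=0.2):
    pop = [[random.randrange(len(o)) for o in options] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness)
        parents = pop[: pop_size // 2]           # elitist selection
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, len(options))
            child = a[:cut] + b[cut:]            # one-point crossover
            if random.random() < p_mut:          # mutation
                i = random.randrange(len(options))
                child[i] = random.randrange(len(options[i]))
            children.append(child)
        pop = parents + children
    best = min(pop, key=fitness)
    return best, fitness(best)

print(evolve())
```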

Keywords: Automotive industry, control plan, FMEA.

51 Nebulized Magnesium Sulfate in Acute Moderate to Severe Asthma in Pediatric Patients

Authors: Lubna M. Zakaryia Mahmoud, Mohammed A. Dawood, Doaa A. Heiba

Abstract:

A prospective double-blind placebo-controlled trial was carried out on 60 children known to be asthmatic who presented to the emergency department of the Alexandria University Children's Hospital at El-Shatby with acute asthma exacerbations, to assess the efficacy of adding inhaled magnesium sulfate to a β-agonist, compared with a β-agonist in saline, in the management of acute asthma exacerbations in children. The participants were divided into two groups. Group A (study group) received inhaled salbutamol solution (0.15 ml/kg) plus 2 ml of isotonic magnesium sulfate in a nebulizer chamber. Group B (control group) received nebulized salbutamol solution (0.15 ml/kg) diluted with placebo (2 ml normal saline). Both groups received the inhaled solution every 20 minutes, repeated for three doses. They were evaluated using the Pediatric Asthma Severity Score (PASS), oxygen saturation using portable pulse oximetry, and peak expiratory flow rate using a portable peak expiratory flow meter, initially recorded as the zero-minute assessment and then every 20 minutes from the end of each nebulization (each nebulization lasting 5-10 minutes), recorded as the 20-, 40- and 60-minute assessments. Regarding PASS, the comparison showed a non-significant difference, with p-values of 0.463, 0.472 and 0.0766 at 20, 40 and 60 minutes. Regarding oxygen saturation, improvement was more significant in group A starting from 40 min, with a significant p-value of 0.000 at both 40 and 60 min. Although the mean PEFR significantly improved from zero minutes in both groups, the improvement was more significant in group A, with significant p-values of 0.015, 0.001 and 0.001 at 20, 40 and 60 min, respectively. This study suggests that inhaled magnesium sulfate is an efficient add-on drug to the standard β-agonist inhalation used in the treatment of moderate to severe asthma exacerbations.

Keywords: Nebulized, magnesium sulfate, acute asthma, pediatric.

50 A Decision Support Tool for Evaluating Mobility Projects

Authors: H. Omrani, P. Gerber

Abstract:

Success is a European project that will implement several clean transport offers in three European cities and evaluate their environmental impacts. The goal of these measures is to improve urban mobility, i.e. the displacement of residents inside cities, for example through park and ride, electric vehicles, hybrid buses, bike sharing, etc. A list of 28 criteria and 60 measures has been established for the evaluation of these transport projects. The evaluation criteria can be grouped into: transport, environment, social, economic and fuel consumption. This article proposes a decision support system that encapsulates a hybrid approach based on fuzzy logic, multi-criteria analysis and belief theory for the evaluation of the impacts of urban mobility solutions. A web-based tool called DeSSIA (Decision Support System for Impacts Assessment) has been developed that handles complex data. The tool has several functionalities, starting from data integration (import of data), through the evaluation of projects, and finishing with the graphical display of results. The tool development is based on the MVC (Model, View, Controller) concept, a design model adapted to the creation of software which imposes a separation between the data, their processing and their presentation. Effort has been put into the ergonomic aspects of the application: its code complies with the latest standards (XHTML, CSS) and has been validated by the W3C (World Wide Web Consortium). The main ergonomic focus is on the usability of the application and its ease of learning and adoption. Through the use of technologies such as AJAX (asynchronous JavaScript and XML), the application is faster and more user-friendly. The strong points of our approach are that it handles heterogeneous data (qualitative, quantitative) from various information sources (human experts, surveys, sensors, models, etc.).

Keywords: Decision support tool, hybrid approach, urban mobility.

49 The Influence of Travel Experience within Perceived Public Transport Quality

Authors: Armando Cartenì, Ilaria Henke

Abstract:

The perceived quality of public transport is an important driver that influences both customer satisfaction and mobility choices. Competition among transport operators makes it necessary to improve the quality of services and to identify which attributes are perceived as relevant by passengers. Among the "traditional" public transport quality attributes are, for example, travel and waiting time, regularity of the services, and ticket price. By contrast, there are some "non-conventional" attributes that could significantly influence customer satisfaction jointly with the "traditional" ones; among these, the beauty/aesthetics of the transport terminals (e.g. rail stations and bus terminals) is probably one of the most influential on user perception. Starting from these considerations, the point stressed in this paper is whether (and how much) the travel experience of the overall trip (e.g. how long the trip is, how many transport modes must be used) influences the perception of public transport quality. The aim of this paper is to investigate the weight of terminal quality (e.g. aesthetics, comfort and services offered) within the overall travel experience. The case study is the Italian extra-urban bus network. Passengers at a major Italian bus terminal were interviewed, and the analysis of the results shows that about 75% of the travelers are willing to pay up to 30% more than the ticket price for a high-quality terminal. A travel experience effect was observed: the average perceived transport quality varies with the characteristics of the overall trip. Passengers making a "long trip" (travel time greater than 2 hours) perceive the overall quality of the trip as "low" even if they pass through a high-quality terminal, while the opposite occurs for "short trip" passengers. This means that if a traveler passes through a high-quality station, the overall perception of that terminal can be significantly reduced if he is tired from a long trip. This result is important and, if confirmed through other case studies, will allow the conclusion that the "travel experience impact" must be considered as an explicit design variable for public transport services and planning.

Keywords: Transportation planning, sustainable mobility, decision support system, discrete choice model, design problem.

48 Localized and Time-Resolved Velocity Measurements of Pulsatile Flow in a Rectangular Channel

Authors: R. Blythman, N. Jeffers, T. Persoons, D. B. Murray

Abstract:

The exploitation of flow pulsation in micro- and mini-channels is a potentially useful technique for enhancing cooling of high-end photonics and electronics systems. It is thought that pulsation alters the thickness of the hydrodynamic and thermal boundary layers, and hence affects the overall thermal resistance of the heat sink. Although the fluid mechanics and heat transfer are inextricably linked, it can be useful to decouple the parameters to better understand the mechanisms underlying any heat transfer enhancement. Using two-dimensional, two-component particle image velocimetry, the current work intends to characterize the heat transfer mechanisms in pulsating flow with a mean Reynolds number of 48 by experimentally quantifying the hydrodynamics of a generic liquid-cooled channel geometry. Flows circulated through the test section by a gear pump are modulated using a controller to achieve sinusoidal flow pulsations with Womersley numbers of 7.45 and 2.36 and an amplitude ratio of 0.75. It is found that the transient characteristics of the measured velocity profiles are dependent on the speed of oscillation, in accordance with the analytical solution for flow in a rectangular channel. A large velocity overshoot is observed close to the wall at high frequencies, resulting from the interaction of near-wall viscous stresses and inertial effects of the main fluid body. The steep velocity gradients at the wall are indicative of augmented heat transfer, although the local flow reversal may reduce the upstream temperature difference in heat transfer applications. While unsteady effects remain evident at the lower frequency, the annular effect subsides and retreats from the wall. The shear rate at the wall is increased during the accelerating half-cycle and decreased during deceleration compared to steady flow, suggesting that the flow may experience both enhanced and diminished heat transfer during a single period. Hence, the thickness of the hydrodynamic boundary layer is reduced for positively moving flow during one half of the pulsation cycle at the investigated frequencies. It is expected that the size of the thermal boundary layer is similarly reduced during the cycle, leading to intervals of heat transfer enhancement.
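
As a quick check of the regimes discussed above, the sketch below evaluates the Womersley number Wo = L*sqrt(2*pi*f/nu); with an assumed channel half-height of 1 mm and water properties, pulsation frequencies of about 0.9 Hz and 9 Hz give Wo of roughly 2.4 and 7.5, close to the values quoted in the abstract. The channel dimension and fluid properties are assumptions, not taken from the experiment.

```python
# Hedged sketch of the Womersley number, Wo = L * sqrt(2*pi*f / nu), with L a
# characteristic half-dimension of the channel. The channel dimension, pulsation
# frequencies and fluid properties below are illustrative assumptions.
from math import pi, sqrt

def womersley(half_height_m, frequency_hz, kinematic_viscosity_m2s):
    return half_height_m * sqrt(2 * pi * frequency_hz / kinematic_viscosity_m2s)

nu_water = 1.0e-6           # m^2/s, water at ~20 C
L = 1.0e-3                  # m, assumed channel half-height
for f in (0.9, 9.0):        # example pulsation frequencies [Hz]
    print(f"f = {f:4.1f} Hz -> Wo = {womersley(L, f, nu_water):.2f}")
```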

Keywords: Heat transfer enhancement, particle image velocimetry, localized and time-resolved velocity, photonics and electronics cooling, pulsating flow, Richardson’s annular effect.

47 Accumulation of Pollutants, Self-purification and Impact on Peripheral Urban Areas: A Case Study in Shantytowns in Argentina

Authors: N. Porzionato, M. Mantiñan, E. Bussi, S. Grinberg, R. Gutierrez, G. Curutchet

Abstract:

This work sets out to debate the tensions involved in the processes of contamination and self-purification in the urban space, particularly in the streams that run through the Buenos Aires metropolitan area. For much of their course, those streams are piped; their waters do not come into contact with the outdoors until they have reached deeply impoverished urban areas with high levels of environmental contamination. These are peripheral zones that, until thirty years ago, were marshlands and fields; they are now densely populated areas largely lacking in urban infrastructure. The Cárcova neighborhood, where this project is underway, is in the José León Suárez section of General San Martín county, Buenos Aires province. A stretch of the José León Suárez canal crosses the neighborhood. Upstream of the neighborhood, this canal already carries pollutants due to the sewage and industrial waste released into it; further downstream, in the neighborhood, domestic drainage is poured into the stream. In this paper, we formulate a hypothesis diametrically opposed to the one that holds that these neighborhoods are the primary source of contamination, suggesting instead that in the stretch of the canal that runs through the neighborhood the stream's waters are actually cleaned and the sediments accumulate pollutants. Indeed, the stretches of water that run through these neighborhoods act as water processing plants for the metropolis. This project has studied the different organic-load polluting contributions to the water in a certain stretch of the canal, the reduction of that load over the course of the canal, and the incorporation of pollutants into the sediments. We have found that the surface water has a considerable ability to self-purify, mostly due to processes of sedimentation and adsorption. The polluting load accumulates in the sediments, where it stabilizes slowly by means of anaerobic processes. In this study, we also investigated the risks of sediment management and the use of the processes studied here, under controlled conditions, as tools of environmental restoration.

Keywords: Bioremediation, pollutants, sediments, urban streams.

Procedia APA BibTeX Chicago EndNote Harvard JSON MLA RIS XML ISO 690 PDF Downloads 2423
46 Transcriptional Evidence for the Involvement of MyD88 in Flagellin Recognition: Genomic Identification of Rock Bream MyD88 and Comparative Analysis

Authors: N. Umasuthan, S. D. N. K. Bathige, W. S. Thulasitha, I. Whang, J. Lee

Abstract:

MyD88 is an evolutionarily conserved, host-expressed adaptor protein that is essential for proper TLR/IL1R immune-response signaling. A previously identified complete cDNA (1626 bp) of OfMyD88 comprised an ORF of 867 bp encoding a protein of 288 amino acids (32.9 kDa). The gDNA (3761 bp) of OfMyD88 revealed a quinquepartite genome organization composed of 5 exons (with sizes of 310, 132, 178, 92 and 155 bp) separated by 4 introns. All the introns displayed splice signals consistent with the consensus GT/AG rule. A bipartite domain structure, with a death domain (residues 24-103) encoded by the first exon and a TIR domain (residues 151-288) encoded by the last 3 exons, was identified through in silico analysis. Moreover, homology modeling of these two domains revealed a similar quaternary folding nature between the human and rock bream homologs. A comprehensive comparison of vertebrate MyD88 genes showed that they possess a 5-exonic structure. In this structure, the last three exons are strongly conserved, suggesting that a rigid structure has been maintained during vertebrate evolution. A cluster of TATA box-like sequences was found 0.25 kb upstream of the cDNA start position. In addition, the putative 5'-flanking region of OfMyD88 was predicted to contain TFBS implicated in TLR signaling, including copies of NFkB1, APRF/STAT3, Sp1, IRF1 and 2, and Stat1/2. Using qPCR, ubiquitous mRNA expression was detected, including in liver and blood. Furthermore, significantly up-regulated transcriptional expression of OfMyD88 was detected in head kidney (12-24 h; >2-fold), spleen (6 h; 1.5-fold), liver (3 h; 1.9-fold) and intestine (24 h; ~2-fold) following flagellin (Fla) challenge. These data suggest a crucial role for MyD88 in the antibacterial immunity of teleosts.
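For illustration, the sketch below shows how relative transcript levels such as the fold-changes quoted above are commonly derived from qPCR cycle-threshold (Ct) values using the 2^-ΔΔCt method. The abstract does not state its normalisation scheme, so the reference gene and the Ct values used here are hypothetical placeholders, not the study's measurements.

```python
# Illustrative 2^-ddCt fold-change calculation (hypothetical Ct values and
# reference gene; the abstract does not specify the normalisation scheme).
def fold_change(ct_target_treated, ct_ref_treated, ct_target_control, ct_ref_control):
    """Relative expression of the target gene, challenged vs. control (2^-ddCt)."""
    d_ct_treated = ct_target_treated - ct_ref_treated
    d_ct_control = ct_target_control - ct_ref_control
    dd_ct = d_ct_treated - d_ct_control
    return 2.0 ** (-dd_ct)

# Example: OfMyD88 in head kidney after challenge vs. unchallenged control,
# normalised to a hypothetical reference gene (e.g. beta-actin).
print(fold_change(ct_target_treated=22.8, ct_ref_treated=18.0,
                  ct_target_control=24.0, ct_ref_control=18.2))   # = 2.0, i.e. ~2-fold up-regulation
```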

Keywords: MyD88, Innate immunity, Flagellin, Genomic analysis.

Procedia APA BibTeX Chicago EndNote Harvard JSON MLA RIS XML ISO 690 PDF Downloads 1860
45 On the Need to have an Additional Methodology for the Psychological Product Measurement and Evaluation

Authors: Corneliu Sofronie, Roxana Zubcov

Abstract:

Cognitive science emerged about 40 years ago, in response to the challenge of artificial intelligence, as common territory for several scientific disciplines: information technology, mathematics, psychology, neurology, philosophy, sociology, and linguistics. The new science was justified, on the one hand, by the complexity of the problems related to human knowledge and, on the other, by the fact that none of the above-mentioned sciences could explain mental phenomena on its own. Based on data supplied by experimental sciences such as psychology and neurology, cognitive science builds models of how the human mind operates. These models are implemented in computer programs and/or electronic circuits (specific to artificial intelligence), namely cognitive systems, whose competences and performances are compared to human ones, leading to the reinterpretation of psychological and neurological data and to the construction of new models. In these processes, psychology provides the experimental basis, while philosophy and mathematics provide the level of abstraction necessary for mediating between the sciences involved. The general problematic of the cognitive approach yields two important types of approach: the computational one, which starts from the idea that mental phenomena can be reduced to binary (1 and 0) computational operations, and the connectionist one, which considers the products of thinking to be the result of the interaction between all of the component (included) systems. In psychology, measurements in the computational register use classical questionnaires and psychometric tests, generally based on calculation methods. Considering both sides represented in cognitive science, we notice a gap in the possibilities for measuring psychological products from the connectionist perspective, which requires a unitary understanding of the quality-quantity whole. In such an approach, measurement by calculation proves inefficient. Our research, carried out over more than 20 years, leads to the conclusion that measurement by forms properly fits the laws and principles of connectionism.

Keywords: Complementary methodology, connection approach, scale-free networks, quantum psychology.

Procedia APA BibTeX Chicago EndNote Harvard JSON MLA RIS XML ISO 690 PDF Downloads 3520
44 Improvement of the Q-System Using the Rock Engineering System: A Case Study of Water Conveyor Tunnel of Azad Dam

Authors: S. Golmohammadi, M. Noorian Bidgoli

Abstract:

Because the status and mechanical parameters of discontinuities in the rock mass are included in the calculations, various rock engineering classification methods are often used as a starting point for the design of different types of structures. The Q-system is one of the most frequently used methods for stability analysis and for determining support systems of underground structures in rock, including tunnels. In this method, six main parameters of the rock mass are required, namely the Rock Quality Designation (RQD), joint set number (Jn), joint roughness number (Jr), joint alteration number (Ja), joint water parameter (Jw) and Stress Reduction Factor (SRF). In this regard, in order to achieve a reasonable and optimal design, identifying the parameters that govern the stability of the mentioned structures is one of the most important goals and most necessary actions in rock engineering. It is therefore necessary to study the relationships between the parameters of a system, how they interact with each other and, ultimately, how they shape the system as a whole. In this research, an attempt has been made to determine the most effective parameters (key parameters) among the six rock mass parameters of the Q-system using the Rock Engineering System (RES) method, in order to improve the relationships between the parameters in the calculation of the Q value. The RES is, in fact, a method by which the degree of cause and effect of a system's parameters can be determined by constructing an interaction matrix. In this research, the geomechanical data collected from the water conveyor tunnel of Azad Dam were used to build the interaction matrix of the Q-system. For this purpose, instead of using the conventional coding methods, which are always accompanied by defects such as uncertainty, the Q-system interaction matrix is coded using a technique based on statistical analysis of the data and determination of the correlation coefficients between the parameters, so that the effect of each parameter on the system is evaluated with greater certainty. The results of this study show that the resulting interaction matrix provides a reasonable estimate of the effective parameters in the Q-system. Among the six parameters of the Q-system, SRF and Jr exert the maximum and minimum influence on the system, respectively, while RQD and Jw are the parameters most and least influenced by the rest of the system, respectively. Therefore, by developing this method, a more accurate rock mass classification relation can be obtained by weighting the parameters required in the Q-system.
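A minimal sketch of the two calculations discussed above follows: the Q value obtained from its six constituent parameters, and the RES cause (C) and effect (E) sums obtained from a coded interaction matrix. The parameter values and the coded matrix entries are illustrative placeholders, not the Azad Dam data.

```python
import numpy as np

# Illustrative sketch only: placeholder ratings and a hypothetical coded
# interaction matrix, not the geomechanical data from the Azad Dam tunnel.

def q_value(RQD, Jn, Jr, Ja, Jw, SRF):
    """Barton's Q-system rating: Q = (RQD/Jn) * (Jr/Ja) * (Jw/SRF)."""
    return (RQD / Jn) * (Jr / Ja) * (Jw / SRF)

print(q_value(RQD=75, Jn=9, Jr=1.5, Ja=2, Jw=1.0, SRF=5.0))   # = 1.25, "poor" rock class

# Rock Engineering System: for an interaction matrix M, the off-diagonal entry
# M[i, j] codes the influence of parameter i on parameter j.  Row sums give the
# "cause" (C) of each parameter, column sums its "effect" (E); C + E measures
# interactivity and C - E the parameter's dominance within the system.
params = ["RQD", "Jn", "Jr", "Ja", "Jw", "SRF"]
M = np.array([               # hypothetical coding (0 = no influence ... 4 = critical)
    [0, 2, 1, 1, 1, 2],
    [2, 0, 1, 1, 1, 2],
    [1, 1, 0, 2, 1, 3],
    [1, 1, 2, 0, 2, 3],
    [1, 1, 1, 2, 0, 3],
    [2, 2, 2, 2, 2, 0],
])
cause  = M.sum(axis=1)       # influence of each parameter on the rest of the system
effect = M.sum(axis=0)       # influence of the rest of the system on each parameter
for p, c, e in zip(params, cause, effect):
    print(f"{p:>3}: C={c}, E={e}, C+E={c + e}, C-E={c - e}")
```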

Keywords: Q-system, Rock Engineering System, statistical analysis, rock mass, tunnel.

Procedia APA BibTeX Chicago EndNote Harvard JSON MLA RIS XML ISO 690 PDF Downloads 212