Search results for: Facility Location Selection
Experimental Evaluation of Drilling Damage on the Strength of Cores Extracted from RC Buildings
Authors: A. Masi, A. Digrisolo, G. Santarsiero
Abstract:
Concrete strength evaluated from compression tests on cores is affected by several factors that cause differences from the in-situ strength at the location from which the core specimen was extracted. Among these factors is the damage possibly occurring during the drilling phase, which generally leads to an underestimation of the actual in-situ strength. In order to quantify this effect, two large datasets have been examined in this study, including: (i) about 500 core specimens extracted from existing Reinforced Concrete structures, and (ii) about 600 cube specimens taken during the construction of new structures in the framework of routine acceptance control. The two experimental datasets have been compared in terms of compression strength and specific weight values, accounting for the main factors affecting concrete properties, namely the type and amount of cement, aggregate grading, type and maximum size of aggregates, water/cement ratio, placing and curing method, and concrete age. The results show that the magnitude of the strength reduction due to drilling damage is strongly affected by the actual properties of the concrete, being inversely proportional to its strength. Therefore, the application of a single value of the correction coefficient, as generally suggested in the technical literature and in structural codes, appears inappropriate. A set of values of the drilling damage coefficient is suggested as a function of the strength obtained from compressive tests on cores.
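As a rough illustration of how such a strength-dependent coefficient could be applied in practice, the following Python sketch uses purely hypothetical coefficient bands (the paper's calibrated values are not reproduced here):

```python
# Minimal sketch (not the paper's calibrated values): apply a drilling-damage
# correction coefficient that decreases with the measured core strength,
# reflecting the finding that damage is inversely related to strength.
def drilling_damage_coefficient(f_core_mpa):
    """Return an illustrative correction coefficient for a core strength in MPa."""
    if f_core_mpa < 10:
        return 1.20   # weak concrete: larger drilling damage assumed
    elif f_core_mpa < 20:
        return 1.10
    elif f_core_mpa < 30:
        return 1.05
    else:
        return 1.00   # strong concrete: negligible drilling damage assumed

def in_situ_strength(f_core_mpa):
    """Estimate in-situ strength from a core test result."""
    return drilling_damage_coefficient(f_core_mpa) * f_core_mpa

print(in_situ_strength(15.0))  # e.g. 16.5 MPa with the illustrative bands above
```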
Keywords: RC Buildings, Assessment, In-situ concrete strength, Core testing, Drilling damage.
Studying the Structural Behaviour of RC Beams with Circular Openings of Different Sizes and Locations Using FE Method
Authors: Ali Shubbar, Hasanain Alwan, Ee Yu Phur, John McLoughlin, Ameer Al-khaykan
Abstract:
This paper aims to investigate the structural behaviour of RC beams with circular openings of different sizes and locations, modelled using the ABAQUS FEM software. Seven RC beams with dimensions of 1200 mm × 150 mm × 150 mm were tested under three-point loading. Group A consists of three RC beams incorporating circular openings with diameters of 40 mm, 55 mm and 65 mm in the shear zone, while Group B consists of three RC beams incorporating circular openings with the same diameters in the flexural zone. The final RC beam did not have any openings and served as a control beam for comparison. The results show that increasing the diameter of the openings increases the maximum deflection and decreases the ultimate failure load relative to the control beam. In the shear zone, the presence of the openings caused an increase in the maximum deflection of between 4% and 22% and a decrease in the ultimate failure load of between 26% and 36% compared to the control beam. In the flexural zone, the openings caused an increase in the maximum deflection of between 1.5% and 19.7% and a decrease in the ultimate failure load of between 6% and 13% relative to the control beam. In this study, the optimum location for placing circular openings was found to be in the flexural zone of the beam, with a diameter of less than 30% of the depth of the beam.
Keywords: Ultimate failure load, maximum deflection, shear zone, flexural zone.
Energy Efficient Plant Design Approaches: Case Study of the Sample Building of the Energy Efficiency Training Facilities
Authors: Idil Kanter Otcu
Abstract:
Nowadays, due to the growing problems of energy supply and the drastic reduction of natural non-renewable resources, the development of new applications in the energy sector and steps towards greater efficiency in energy consumption are required. Since buildings account for a large share of energy consumption, increasing structural density leads to higher overall consumption. This increase means that energy efficiency approaches to building design, and the integration of new systems using emerging technologies, become necessary in order to curb consumption. Alongside new systems for the productive use of generated energy, buildings that require less energy to operate and make rational use of resources need to be developed. One way of reducing the energy requirements of buildings is through landscape planning, design and application. Requirements such as heating, cooling and lighting can be met with lower energy consumption through planting design, which helps to achieve a more efficient and rational use of resources. Within this context, planting design should consider not only the ecological and aesthetic features of plants but also the spatial organization of the site, taking into account the relationship between the site and its open spaces with respect to climatic elements. In this way, the planting design can serve an additional purpose. In this study, a landscape design which takes into consideration location, local climate morphology and solar angle is illustrated on a sample building project.
Keywords: Energy efficiency, landscape design, plant design, xeriscape landscape.
Ice Load Measurements on Known Structures Using Image Processing Methods
Authors: Azam Fazelpour, Saeed R. Dehghani, Vlastimil Masek, Yuri S. Muzychka
Abstract:
This study employs a method based on image analysis and structure information to detect ice accumulated on known structures. The icing of marine vessels and offshore structures causes significant reductions in their efficiency and creates unsafe working conditions. Image processing methods are used to measure ice loads automatically. Most image processing methods are developed based on captured image analyses. In this method, ice loads on structures are calculated by defining structure coordinates and processing captured images. A pyramidal structure with nine cylindrical bars is designed as the known structure of the experimental setup. Unsymmetrical ice accumulated on the structure in a cold room represents the actual case in the experiments. Camera intrinsic and extrinsic parameters are used to define structure coordinates in the image coordinate system according to the camera location and angle. A thresholding method is applied to the captured images to detect the iced structure in a binary image. The ice thickness of each element is calculated by combining the information from the binary image and the structure coordinates, and the thicknesses of the structure elements are obtained by averaging ice diameters from different camera views. Comparison between ice load measurements using this method and the actual ice loads shows positive correlations within an acceptable range of error. The method can be applied to complex structures by defining structure and camera coordinates.
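The following Python/OpenCV sketch illustrates the thresholding-and-measurement idea on a single bar; the file name, bar span and bare-bar diameter are assumptions, not values from the experimental setup:

```python
import cv2
import numpy as np

# Hypothetical inputs: a grayscale frame of the iced structure and the projected
# image coordinates of one cylindrical bar (obtained from camera calibration).
frame = cv2.imread("iced_structure.png", cv2.IMREAD_GRAYSCALE)
col_start, col_end = 100, 400      # assumed horizontal span of the bar in the image
bare_diameter_px = 12              # assumed bar diameter without ice, in pixels

# Threshold the frame to obtain a binary image of the iced structure.
_, binary = cv2.threshold(frame, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)

# Measure the apparent (iced) diameter as the count of foreground pixels in each
# column crossing the bar (crude: assumes the bar is the only object there),
# then average over the bar's length.
widths = [np.count_nonzero(binary[:, c]) for c in range(col_start, col_end)]
iced_diameter_px = np.mean(widths)

# Ice thickness (in pixels) is half the diameter increase; a calibration factor
# (mm per pixel) from the camera model would convert it to millimetres.
ice_thickness_px = (iced_diameter_px - bare_diameter_px) / 2.0
print(f"Estimated ice thickness: {ice_thickness_px:.1f} px")
```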
Keywords: Camera calibration, Ice detection, ice load measurements, image processing.
The Analysis of Secondary Case Studies as a Starting Point for Grounded Theory Studies: An Example from the Enterprise Software Industry
Authors: Abilio Avila, Orestis Terzidis
Abstract:
A fundamental principle of Grounded Theory (GT) is to prevent the formation of preconceived theories. This implies the need to start a research study with an open mind and to avoid being absorbed by the existing literature. However, starting a new study without an understanding of the research domain and its context can be extremely challenging. This paper presents a research approach that simultaneously supports a researcher in identifying and focusing on critical areas of a research project and prevents the formation of concepts prejudiced by the current body of literature. The approach comprises four stages: selection of secondary case studies, analysis of secondary case studies, development of an initial conceptual framework, and development of an initial interview guide. The analysis of secondary case studies as a starting point for a research project allows a researcher to build a first understanding of a research area based on real-world cases without being influenced by the existing body of theory. It enables a researcher to develop, through a structured course of action, a firm guide that establishes a solid starting point for further investigations. Thus, the described approach may have significant implications for GT researchers who aim to start a study within a given research area.
Keywords: Grounded theory, qualitative research, secondary case studies, secondary data analysis, interview guide.
Fuzzy Logic Approach to Robust Regression Models of Uncertain Medical Categories
Authors: Arkady Bolotin
Abstract:
Dichotomization of the outcome by a single cut-off point is an important part of various medical studies. Usually the relationship between the resulting dichotomized dependent variable and the explanatory variables is analyzed with linear regression, probit regression or logistic regression. However, in many real-life situations, the cut-off point dividing the outcome into two groups is unknown and can be specified only approximately, i.e. it is surrounded by some (small) uncertainty. This means that, in order to have any practical meaning, the regression model must be robust to this uncertainty. In this paper, we show that neither the beta of the linear regression model nor its significance level is robust to small variations in the dichotomization cut-off point. As an alternative, robust approach to the problem of uncertain medical categories, we propose using a linear regression model with a fuzzy membership function as the dependent variable. This fuzzy membership function denotes to what degree the value of the underlying (continuous) outcome falls below or above the dichotomization cut-off point. We demonstrate that the linear regression model with the fuzzy dependent variable can be insensitive to uncertainty in the cut-off point location. The paper presents modeling results from a real study of low hemoglobin levels in infants. We systematically test the robustness of the binomial regression model and of the linear regression model with the fuzzy dependent variable by changing the boundary of the category Anemia, and show that the behavior of the latter model persists over a quite wide interval.
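A minimal Python sketch of the idea, using synthetic data and a logistic ramp as one possible choice of fuzzy membership function, is shown below; it compares how the regression coefficient shifts when the cut-off is perturbed under crisp versus fuzzy coding:

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)

# Synthetic stand-in for the infant haemoglobin study: a continuous outcome
# driven by one explanatory variable plus noise (illustrative data only).
x = rng.normal(size=(500, 1))
hemoglobin = 11.0 + 0.8 * x[:, 0] + rng.normal(scale=1.0, size=500)

cutoff = 10.5          # uncertain dichotomization point for "anaemia"
spread = 1.0           # assumed width of the uncertainty band around the cut-off

def fuzzy_membership(y, c, s):
    # Degree to which the outcome falls below the cut-off: a logistic ramp
    # centred on the cut-off (one possible membership function).
    return 1.0 / (1.0 + np.exp((y - c) / s))

for c in (cutoff - 0.3, cutoff, cutoff + 0.3):       # perturb the cut-off slightly
    crisp = (hemoglobin < c).astype(float)            # classical dichotomization
    fuzzy = fuzzy_membership(hemoglobin, c, spread)   # fuzzy dependent variable
    b_crisp = LinearRegression().fit(x, crisp).coef_[0]
    b_fuzzy = LinearRegression().fit(x, fuzzy).coef_[0]
    print(f"cutoff={c:5.2f}  beta(crisp)={b_crisp:+.3f}  beta(fuzzy)={b_fuzzy:+.3f}")
```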
Keywords: Categorization, Uncertain medical categories, Binomial regression model, Fuzzy dependent variable, Robustness.
Feasibility Study of a Solar Farm Project with an Executive Approach
Authors: Amir Reza Talaghat
Abstract:
Since 2015, a new approach and policy regarding the protection of energy resources and the use of renewable energies has been adopted in Iran, leading to the development of new projects. Investigating the feasibility of these new projects helped to identify five steps for preparing an executive feasibility study: proper site selection, authorizations, design and simulation, economic study, and programming. The results are essential for decision makers and investors who wish to start implementing such projects under reliable conditions. The research is based on the collection and study of the projects' documents, as well as recalculation to check the conformity of the results with GIS data and the technical information of the bidders. This paper describes the results of the research through the five steps of an executive methodology for preparing a feasibility study of a 10 MW solar farm project. The corresponding results, which can also help decision makers start similar projects, are: selecting the best location for the PV plant; establishing reliable and safe conditions for investment and obtaining the required authorizations to start implementing the solar farm project in the concerned region; selecting suitable components to achieve the best possible performance of the plant; assessing the economic profit of the investment; and proper programming to implement the project on time.
Keywords: Solar farm, solar energy, execution of PV power plant, PV power plant, feasibility study.
The Study of Rapeseed Characteristics by Factor Analysis under Normal and Drought Stress Conditions
Authors: Ali Bakhtiari Gharibdosti, Mohammad Hosein Bijeh Keshavarzi, Samira Alijani
Abstract:
To understand the relationships among characteristics and to determine the factors that explain the characteristics under consideration in rapeseed varieties, 10 rapeseed genotypes were grown in a completely randomized design with three replications under drought stress in 2009-2010 at the research field of the College of Agriculture, Islamic Azad University, Karaj branch. In this research, 11 characteristics related to the growth, production and yield stages were considered. The results of the variance analysis showed that there is a significant difference among the characteristics of the rapeseed varieties. Simple correlation coefficients calculated under both normal and drought stress conditions indicate that seed yield per plant and pod number have a positive and significant correlation with seed yield at the 1% probability level, and selection based on these characteristics is effective for improving yield. Under normal and drought stress conditions, the analysis of the main factors showed that, retaining factors with eigenvalues greater than one, five factors were identified under normal conditions, accounting for 82.72% of the total variance, whereas under drought stress four factors were identified, accounting for 76.78% of the total variance. Considering the overall results of this research, the characteristics found effective in the factor analysis, and the different components selected from them, can be used in breeding work to select applicable and tolerant genotypes for drought stress conditions.
Keywords: Correlation, drought stress, factor analysis, rapeseed.
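The factor-retention step (keep factors with eigenvalues greater than one) can be sketched as follows in Python with illustrative stand-in data; this is not the authors' analysis, only an outline of the criterion:

```python
import numpy as np
from sklearn.decomposition import FactorAnalysis

# Illustrative stand-in: a plots-by-traits matrix (rows = observations,
# columns = the 11 measured characteristics); real data would replace this.
rng = np.random.default_rng(1)
traits = rng.normal(size=(30, 11))

# Standardize, then keep as many factors as there are eigenvalues > 1
# of the correlation matrix (the criterion used in the abstract).
z = (traits - traits.mean(axis=0)) / traits.std(axis=0)
eigvals = np.linalg.eigvalsh(np.corrcoef(z, rowvar=False))[::-1]
n_factors = int(np.sum(eigvals > 1.0))

fa = FactorAnalysis(n_components=n_factors).fit(z)
explained = eigvals[:n_factors].sum() / eigvals.sum()   # rough share of variance
print(f"{n_factors} factors retained, ~{100 * explained:.1f}% of total variance")
print("Loadings (factors x traits):\n", np.round(fa.components_, 2))
```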
Automatic Detection of Breast Tumors in Sonoelastographic Images Using DWT
Authors: A. Sindhuja, V. Sadasivam
Abstract:
Breast cancer is the most common malignancy in women and the second leading cause of death for women all over the world. The earlier the cancer is detected, the better the treatment. The diagnosis and treatment of the cancer rely on segmentation of sonoelastographic images. Texture features have not previously been considered for sonoelastographic segmentation. Sonoelastographic images of 15 patients containing both benign and malignant tumors are considered for experimentation. The images are enhanced to remove noise, improve contrast and emphasize the tumor boundary. Each image is then decomposed into sub-bands using single-level Daubechies wavelets varying from one to six coefficients. Grey Level Co-occurrence Matrix (GLCM) and Local Binary Pattern (LBP) features are extracted from each sub-band and then selected by ranking them using the Sequential Floating Forward Selection (SFFS) technique. The resultant images undergo K-Means clustering followed by a few post-processing steps to remove false spots, and the tumor boundary is detected from the segmented image. It is proposed that the Local Binary Pattern computed from the vertical coefficients of the Daubechies wavelet with two coefficients is best suited for segmentation of sonoelastographic breast images among the wavelet members using one to six coefficients for decomposition. The results are also validated with the help of an expert radiologist. The proposed work can be used in the further diagnostic process to decide whether the segmented tumor is benign or malignant.
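A hedged outline of the described pipeline (single-level DWT, LBP on the vertical sub-band, K-Means on patch histograms) is given below; the wavelet name, patch size and other parameters are assumptions rather than the authors' settings:

```python
import numpy as np
import pywt
from skimage.feature import local_binary_pattern
from sklearn.cluster import KMeans

# Sketch of the pipeline (assumed parameters, not the authors' code):
# single-level Daubechies DWT -> LBP texture on the vertical detail sub-band
# -> K-Means clustering of small patches into candidate tumor / background.
image = np.random.rand(256, 256)          # stand-in for an enhanced sonoelastogram

# Single-level 2D DWT; 'db2' stands in for "Daubechies with two coefficients".
cA, (cH, cV, cD) = pywt.dwt2(image, "db2")

# LBP on the vertical detail coefficients, as suggested in the abstract.
lbp = local_binary_pattern(cV, P=8, R=1, method="uniform")

# Describe each 8x8 patch by its LBP histogram, then cluster the patches.
patch, feats = 8, []
for i in range(0, lbp.shape[0] - patch, patch):
    for j in range(0, lbp.shape[1] - patch, patch):
        block = lbp[i:i + patch, j:j + patch]
        hist, _ = np.histogram(block, bins=10, range=(0, 10), density=True)
        feats.append(hist)

labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(np.array(feats))
print("Patches assigned to cluster 1 (candidate tumor region):", int(labels.sum()))
```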
Keywords: Breast Cancer, Segmentation, Sonoelastography, Tumor Detection.
Latent Factors of Severity in Truck-Involved and Non-Truck-Involved Crashes on Freeways
Authors: Shin-Hyung Cho, Dong-Kyu Kim, Seung-Young Kho
Abstract:
Truck-involved crashes have higher severity than non-truck-involved crashes. There have been many studies on crash frequency and on the development of severity models, but those studies only analyzed the relationships between observed variables. To identify why more people are injured or killed when trucks are involved in a crash, the complex causal relationship between crash severity and risk factors must be quantified by adopting latent factors of crashes. The aim of this study was to develop a structural equation model for truck-involved and non-truck-involved crashes, including five latent variables: a crash factor, an environmental factor, a road factor, a driver factor, and a severity factor. To clarify the unique characteristics of truck-involved crashes compared to non-truck-involved crashes, a confirmatory analysis method was used. To develop the model, we extracted data on 10,083 crashes on Korean freeways from 2008 through 2014. The results showed that the most significant variable affecting crash severity is the crash factor, which can be expressed by the location, cause, and type of the crash. For non-truck-involved crashes, the crash and environment factors increase crash severity, whereas the road and driver factors tend to reduce it. For truck-involved crashes, the driver factor has a significant effect on crash severity, although its effect is slightly smaller than that of the crash factor. A multiple group analysis was employed to analyze the differences between the heterogeneous groups of drivers.
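A hypothetical specification of such a five-latent-variable model is sketched below using the Python semopy package (an assumption; the study does not state its SEM software), with invented indicator names:

```python
import pandas as pd
from semopy import Model

# Hypothetical lavaan-style specification mirroring the abstract's five latent
# variables; the observed indicator names (cause, lighting, ...) are assumed.
desc = """
  crash    =~ location + cause + crash_type
  environ  =~ weather + lighting
  road     =~ curvature + surface
  driver   =~ age + violation
  severity =~ injuries + fatalities
  severity ~ crash + environ + road + driver
"""

df = pd.read_csv("freeway_crashes.csv")    # hypothetical indicator columns
model = Model(desc)
model.fit(df)                              # estimate path coefficients
print(model.inspect())                     # coefficients and p-values
```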
Keywords: Crash severity, structural equation modeling, truck-involved crashes, multiple group analysis, crash on freeway.
3D Numerical Studies on Jets Acoustic Characteristics of Chevron Nozzles for Aerospace Applications
Authors: R. Kanmaniraja, R. Freshipali, J. Abdullah, K. Niranjan, K. Balasubramani, V. R. Sanal Kumar
Abstract:
The present environmental issues have made aircraft jet noise reduction a crucial problem in aero-acoustics research. Acoustic studies reveal that the addition of chevrons to the nozzle reduces the sound pressure level reasonably, with an acceptable reduction in performance. In this paper, comprehensive numerical studies on the acoustic characteristics of different types of chevron nozzles have been carried out with non-reacting flows for the shape optimization of chevrons in supersonic nozzles for aerospace applications. The numerical studies have been carried out using a validated steady 3D density-based solver with the k-ε turbulence model. Chevrons with sharp, flat, round and U-type edges are selected for the jet acoustic characterization of supersonic nozzles. We observed that, compared to the base model, the round-shaped chevron nozzle could reduce the acoustic level by 4.13% with a 0.6% thrust loss. We concluded that the prudent selection of the chevron shape will enable an appreciable reduction of aircraft jet noise without compromising overall performance. It is evident from the present numerical simulations that the k-ε model can predict reasonably well the acoustic level of chevron supersonic nozzles for shape optimization.
Keywords: Supersonic nozzle, Chevron, Acoustic level, Shape Optimization of Chevron Nozzles, Jet noise suppression.
Hybrid Weighted Multiple Attribute Decision Making Handover Method for Heterogeneous Networks
Authors: Mohanad Alhabo, Li Zhang, Naveed Nawaz
Abstract:
Small cell deployment in 5G networks is a promising technology to enhance capacity and coverage. However, unplanned deployment may cause high interference levels and a high number of unnecessary handovers, which in turn result in an increase in signalling overhead. To guarantee service continuity, minimize unnecessary handovers and reduce signalling overhead in heterogeneous networks, it is essential to properly model the handover decision problem. In this paper, we model the handover decision problem using a Multiple Attribute Decision Making (MADM) method, specifically the Technique for Order Preference by Similarity to an Ideal Solution (TOPSIS), and propose a hybrid TOPSIS method to control handover in heterogeneous networks. The proposed method adopts a hybrid weighting policy, which is a combination of entropy and standard deviation weighting. A hybrid weighting control parameter is introduced to balance the impact of the standard deviation and entropy weights on the network selection process and the overall performance. Our proposed method shows better performance, in terms of the number of frequent handovers and the mean user throughput, compared to existing methods.
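A compact sketch of the hybrid entropy/standard-deviation TOPSIS step might look as follows in Python; the attribute set, the example cell values and the form of the control parameter are illustrative assumptions:

```python
import numpy as np

def hybrid_topsis(X, benefit, alpha=0.5):
    """Rank alternatives (rows of X) over attributes (columns) with TOPSIS,
    using a hybrid of entropy and standard-deviation objective weights.
    benefit[j] is True for benefit attributes, False for cost attributes;
    alpha is the assumed control parameter balancing the two weight sets."""
    X = np.asarray(X, dtype=float)

    # Objective weights: entropy-based and standard-deviation-based.
    P = X / X.sum(axis=0)
    entropy = -np.sum(P * np.log(P + 1e-12), axis=0) / np.log(len(X))
    w_entropy = (1 - entropy) / (1 - entropy).sum()
    w_std = X.std(axis=0) / X.std(axis=0).sum()
    w = alpha * w_std + (1 - alpha) * w_entropy

    # Weighted, vector-normalized decision matrix.
    V = w * X / np.linalg.norm(X, axis=0)

    # Ideal and anti-ideal solutions, then closeness coefficients.
    ideal = np.where(benefit, V.max(axis=0), V.min(axis=0))
    anti = np.where(benefit, V.min(axis=0), V.max(axis=0))
    d_pos = np.linalg.norm(V - ideal, axis=1)
    d_neg = np.linalg.norm(V - anti, axis=1)
    return d_neg / (d_pos + d_neg)

# Example: three candidate cells scored on SINR (benefit), load and expected
# signalling cost (both cost attributes); higher closeness = preferred target.
cells = [[18.0, 0.4, 2.0],
         [12.0, 0.2, 1.0],
         [15.0, 0.7, 3.0]]
print(hybrid_topsis(cells, benefit=[True, False, False], alpha=0.5))
```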
Keywords: Handover, HetNets, interference, MADM, small cells, TOPSIS, weight.
Duration Patterns of English by Native British Speakers and Mandarin ESL Speakers
Authors: Chen Bingru
Abstract:
This study describes and analyzes the effects of polysyllabic shortening and of word or phrase boundaries on the duration patterns of utterances spoken by Mandarin learners of English, in comparison with native speakers of English. To investigate the relative contribution of these effects, two production experiments were conducted. The study included 11 native British English speakers and 20 Mandarin learners of English, who were asked to produce four sets of tokens consisting of a monosyllabic base form and disyllabic and trisyllabic words derived from the base by the addition of suffixes, as well as a set of short sentences with particular combinations of phrase size, stress pattern, and boundary location. The duration of words and segments was measured, and the results of the data analysis suggest that the amount of polysyllabic shortening and the effect of word or phrase position are likely to contribute to a Chinese accent in Mandarin ESL speakers. This study sheds light on research on duration patterns by demonstrating the effect of duration-related factors on the foreign accent of Mandarin ESL speakers. It can also benefit both L2 learners and language teachers by increasing their sensitivity to the duration differences and difficulties experienced by L2 learners of English. An understanding of the amount of polysyllabic shortening and the effect of position in words and phrases on syllable duration can also help L2 teachers establish priorities for teaching pronunciation to ESL learners.
Keywords: Duration patterns, Chinese accent, Mandarin ESL speakers, polysyllabic shortening.
Logistical Optimization of Nuclear Waste Flows during Decommissioning
Authors: G. Dottavio, M. F. Andrade, F. Renard, V. Cheutet, A.-L. L. S. Vercraene, P. Hoang, S. Briet, R. Dachicourt, Y. Baizet
Abstract:
A large amount of technological equipment and many highly skilled workers have to be mobilized over long periods of time during nuclear decommissioning processes. The related operations generate complex waste flows and high inventory levels, associated with information flows of heterogeneous types. Given that more than 10 decommissioning operations are ongoing in France and about 50 are expected by 2025, a major challenge must be addressed today. The management of decommissioning and dismantling of nuclear installations represents an important part of the nuclear-based energy lifecycle, since it has an environmental impact as well as an important influence on the electricity cost and therefore the price for end-users. Bringing new technologies and new solutions into decommissioning methodologies is thus mandatory to improve the quality, cost and delay efficiency of these operations. The purpose of our project is to improve decommissioning management efficiency by developing a decision-support framework dedicated to planning nuclear facility decommissioning operations and optimizing waste evacuation by means of a logistic approach. The target is to create an easy-to-handle tool capable of (i) predicting waste flows and proposing the best decommissioning logistics scenario, and (ii) managing information during all steps of the process and following the progress: planning, resources, delays, authorizations, saturation zones, waste volume, etc. In this article we present our results from the simulation of nuclear waste flows during the decommissioning process, based on discrete-event simulation supported by the FLEXSIM 3-D software. This approach was successfully tested, and our work confirms its ability to improve this type of industrial process by identifying the critical points of the chain and the corresponding improvement actions. This type of simulation, executed before the start of the operations on the basis of a first design, allows 'what-if' process evaluation and helps to ensure the quality of the process in an uncertain context. Simulating nuclear waste flows before evacuation from the site will help reduce the cost and duration of the decommissioning process by optimizing the planning and the use of resources, transitional storage and expensive radioactive waste containers. Additional benefits are expected for the governance of waste evacuation, since it will enable a shared responsibility for the waste flows.
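As an illustration of the discrete-event idea (here in Python with SimPy rather than the FLEXSIM 3-D model used in the study), a toy waste-evacuation queue could be sketched as follows; all rates and capacities are assumed:

```python
import random
import simpy

# Illustrative sketch: dismantling generates waste packages that queue for a
# limited number of transport containers before being evacuated.
random.seed(0)

def dismantling(env, containers, evacuated):
    for package in range(200):                        # 200 hypothetical packages
        yield env.timeout(random.expovariate(1 / 2))  # ~1 package every 2 hours
        env.process(evacuate(env, package, containers, evacuated))

def evacuate(env, package, containers, evacuated):
    with containers.request() as slot:
        yield slot                                    # wait for a free container
        yield env.timeout(random.uniform(4, 8))       # loading + transport time
        evacuated.append(env.now)

env = simpy.Environment()
containers = simpy.Resource(env, capacity=3)          # assumed container fleet
evacuated = []
env.process(dismantling(env, containers, evacuated))
env.run()
print(f"{len(evacuated)} packages evacuated, last one at t = {evacuated[-1]:.1f} h")
```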
Keywords: Nuclear decommissioning, logistical optimization, decision-support framework, waste management.
A Psychophysiological Evaluation of an Effective Recognition Technique Using Interactive Dynamic Virtual Environments
Authors: Mohammadhossein Moghimi, Robert Stone, Pia Rotshtein
Abstract:
Recording psychological and physiological correlates of human performance within virtual environments and interpreting their impacts on human engagement, ‘immersion’ and related emotional or ‘effective’ states is both academically and technologically challenging. By exposing participants to an effective, real-time (game-like) virtual environment, designed and evaluated in an earlier study, a psychophysiological database containing the EEG, GSR and Heart Rate of 30 male and female gamers, exposed to 10 games, was constructed. Some 174 features were subsequently identified and extracted from a number of windows, with 28 different timing lengths (e.g. 2, 3, 5, etc. seconds). After reducing the number of features to 30, using a feature selection technique, K-Nearest Neighbour (KNN) and Support Vector Machine (SVM) methods were subsequently employed for the classification process. The classifiers categorised the psychophysiological database into four effective clusters (defined based on a 3-dimensional space – valence, arousal and dominance) and eight emotion labels (relaxed, content, happy, excited, angry, afraid, sad, and bored). The KNN and SVM classifiers achieved average cross-validation accuracies of 97.01% (±1.3%) and 92.84% (±3.67%), respectively. However, no significant differences were found in the classification process based on effective clusters or emotion labels.
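A minimal sketch of the KNN/SVM classification stage with 10-fold cross-validation is shown below; the feature matrix is a random stand-in for the actual psychophysiological database:

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Stand-in for the psychophysiological database: 300 windows x 30 selected
# features, labelled with one of four clusters (illustrative data only).
rng = np.random.default_rng(0)
X = rng.normal(size=(300, 30))
y = rng.integers(0, 4, size=300)

knn = make_pipeline(StandardScaler(), KNeighborsClassifier(n_neighbors=5))
svm = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))

for name, clf in [("KNN", knn), ("SVM", svm)]:
    scores = cross_val_score(clf, X, y, cv=10)        # 10-fold cross-validation
    print(f"{name}: {scores.mean():.3f} +/- {scores.std():.3f}")
```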
Keywords: Virtual Reality, effective computing, effective VR, emotion-based effective physiological database.
Evaluation of Handover Latency in Intra-Domain Mobility
Authors: Aisha Hassan Abdalla Hashim, Fauzana Ridzuan, Nazreen Rusli
Abstract:
Mobile IPv6 (MIPv6) describes how a mobile node can change its point of attachment from one access router to another. As the demand for wireless mobile devices increases, many enhancements to macro-mobility (inter-domain) protocols have been proposed, designed and implemented in Mobile IPv6. Hierarchical Mobile IPv6 (HMIPv6) is one of them; it is designed to reduce the amount of signaling required and to improve handover speed for mobile connections. This is achieved by introducing a new network entity called the Mobility Anchor Point (MAP). This report presents a comparative study of the Hierarchical Mobile IPv6 and Mobile IPv6 protocols, with the scope narrowed down to micro-mobility (intra-domain). The architecture and operation of each protocol are studied, and they are evaluated based on a Quality of Service (QoS) parameter: handover latency. The simulation was carried out using Network Simulator-2, and the outcome is discussed. The results show that HMIPv6 performs best under intra-domain mobility compared to MIPv6, which suffers from large handover latency. As an enhancement, we propose locating the MAP in the middle of the domain with respect to all Access Routers. This gives approximately the same, and possibly shorter, distance between the MAP and the Mobile Node (MN) regardless of the new location of the MN, which reduces the delay. As future work, a performance analysis will be carried out for the proposed HMIPv6 and compared to HMIPv6.
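A toy latency model of the proposed MAP placement, with assumed topology and per-hop delay, illustrates why a central MAP bounds the worst-case MN-MAP distance:

```python
# Toy model: handover latency grows with the hop distance between the Mobile
# Node's access router and the MAP, so placing the MAP centrally bounds the
# worst-case distance (topology and per-hop delay are assumed values).
access_routers = list(range(10))          # routers laid out on a line, 0..9
per_hop_delay_ms = 5.0                    # assumed per-hop signalling delay

def worst_case_latency(map_position):
    return max(abs(ar - map_position) for ar in access_routers) * per_hop_delay_ms

edge_map = worst_case_latency(0)                              # MAP at the domain edge
central_map = worst_case_latency(len(access_routers) // 2)    # proposed central MAP
print(f"edge MAP: {edge_map:.0f} ms, central MAP: {central_map:.0f} ms")
```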
Keywords: Intra-domain mobility, HMIPv6, Handover Latency, proposed HMIPv6.
Classifying of Maize Inbred Lines into Heterotic Groups using Diallel Analysis
Authors: Mozhgan Ziaie Bidhendi, Rajab Choukan, Farokh Darvish, Khodadad Mostafavi, Eslam Majidi
Abstract:
The selection of parents and breeding strategies for successful maize hybrid production is facilitated by heterotic grouping of parental lines and determination of their combining abilities. Fourteen maize inbred lines used in maize breeding programs in Iran were crossed in a diallel mating design. The 91 F1 hybrids and the 14 parental lines were studied for two years at four locations in Iran to investigate the combining ability of the genotypes for grain yield and to determine heterotic patterns among germplasm sources, using both Griffing's method and the biplot approach for diallel analysis. The graphical representation offered by the biplot analysis allowed a rapid and effective overview of general combining ability (GCA) and specific combining ability (SCA) effects of the inbred lines, their performance in crosses, as well as grouping patterns of similar genotypes. GCA and SCA effects were significant for grain yield (GY). Based on significant positive GCA effects, the lines derived from LSC could be used as parents in crosses to increase GY. The maximum best-parent heterosis values and the highest SCA effects resulted from the crosses B73 × MO17 and A679 × MO17 for GY. The best heterotic pattern was LSC × RYD, which would be potentially useful in maize breeding programs to obtain high-yielding hybrids in the same climate of Iran.
Keywords: Biplot, diallel, Griffing, heterotic pattern.
Theoretical and Experimental Analysis of Hard Material Machining
Authors: Rajaram Kr. Gupta, Bhupendra Kumar, T. V. K. Gupta, D. S. Ramteke
Abstract:
Machining of hard materials is a recent technology for the direct production of work-pieces. The primary challenge in machining these materials is the selection of cutting tool inserts, which facilitates an extended tool life and high-precision machining of the component. These materials are widely used for making precision parts for the aerospace industry. Nickel-based alloys are typically used in extreme-environment applications where a combination of strength, corrosion resistance and oxidation resistance is required. The present paper reports the theoretical and experimental investigations carried out to understand the influence of machining parameters on the response parameters. Considering the basic machining parameters (speed, feed and depth of cut), a study has been conducted to observe their influence on material removal rate, surface roughness, cutting forces and the corresponding tool wear. Experiments are designed and conducted with the help of the Central Composite Rotatable Design technique. The results reveal that, for the given range of process parameters, higher depths of cut are favorable for material removal rate and low feed rates are favorable for cutting forces, while low feed rates and high rotational speeds are suitable for a better finish and higher tool life.
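The response-surface fit behind a Central Composite Rotatable Design can be sketched as a second-order least-squares model; the factor levels and roughness values below are placeholders, not the reported measurements:

```python
import numpy as np

# Hedged sketch: fit a second-order response-surface model (the usual model
# behind a Central Composite Rotatable Design) for surface roughness as a
# function of speed, feed and depth of cut; the data below are placeholders.
rng = np.random.default_rng(0)
speed, feed, doc = rng.uniform(-1, 1, (3, 20))          # coded factor levels
roughness = 1.5 - 0.4 * feed + 0.2 * speed ** 2 + rng.normal(0, 0.05, 20)

# Design matrix with intercept, linear, interaction and quadratic terms.
X = np.column_stack([np.ones(20), speed, feed, doc,
                     speed * feed, speed * doc, feed * doc,
                     speed ** 2, feed ** 2, doc ** 2])
coef, *_ = np.linalg.lstsq(X, roughness, rcond=None)
terms = ["1", "v", "f", "d", "vf", "vd", "fd", "v^2", "f^2", "d^2"]
print(dict(zip(terms, np.round(coef, 3))))
```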
Keywords: Speed, feed, depth of cut, roughness, cutting force, flank wear.
Selection of Best Band Combination for Soil Salinity Studies using ETM+ Satellite Images (A Case Study: Nyshaboor Region, Iran)
Authors: S. H. Sanaeinejad, A. Astaraei, P. Mirhoseini Mousavi, M. Ghaemi
Abstract:
One of the main environmental problems affecting extensive areas in the world is soil salinity. Traditional data collection methods are neither sufficient for addressing this important environmental problem nor accurate enough for soil studies, and remote sensing data can overcome most of these limitations. Although satellite images are commonly used for such studies, there is still a need to find the best calibration between the data and the real situation in each specified area. The Neyshaboor area, in north-east Iran, was selected as the field study of this research. Landsat satellite images of this area were used in order to prepare suitable training samples for processing and classifying the images. 300 locations were selected randomly in the area to collect soil samples, and 273 of them were finally retained for further laboratory work and image processing analysis. The electrical conductivity of all samples was measured. Six reflective bands of ETM+ satellite images taken of the study area in 2002 were used for soil salinity classification. The classification was carried out using common algorithms based on the best band composition. The results showed that reflective bands 7, 3, 4 and 1 are the best band composition for preparing the color composite images. We also found that hybrid classification is a suitable method for identifying and delineating the different salinity classes in the area.
Keywords: Soil salinity, Remote sensing, Image processing, ETM+, Nyshaboor
High Specific Speed in Circulating Water Pump Can Cause Cavitation, Noise and Vibration
Authors: Chandra Gupt Porwal
Abstract:
Excessive vibration means increased wear, increased repair effort, poor product selection and quality, and high energy consumption. It may sometimes be caused by cavitation or suction/discharge recirculation, which can occur only when the net positive suction head available (NPSHA) drops below the net positive suction head required (NPSHR). Cavitation can cause axial surging and, if it is excessive, will frequently damage mechanical seals, bearings and possibly other pump components, and shorten the life of the impeller. This paper explains Suction Energy (SE), Specific Speed (Ns), Suction Specific Speed (Nss), NPSHA, NPSHR and their significance, the possible reasons for cavitation and internal recirculation, their diagnostics, and remedial measures to arrest and prevent cavitation. A case study is presented by the author highlighting that the root cause of unwanted noise and vibration was cavitation, caused by high specific speeds or inadequate net positive suction head available, which resulted in damage to the material surfaces of the impeller and suction bells and degradation of machine performance, capacity and efficiency. The author strongly recommends revisiting the technical specifications of CW pumps to provide sufficient NPSH margin ratios (>1.5) for future projects, and limiting Nss to 8500-9000 for cavitation-free operation.
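The quantities discussed above can be checked with a few lines of Python; the pump figures used here are assumed, not those of the case study:

```python
import math

def suction_specific_speed(rpm, flow_gpm, npshr_ft):
    """Nss in US units: N * sqrt(Q) / NPSHR^0.75 (flow per impeller eye, at BEP)."""
    return rpm * math.sqrt(flow_gpm) / npshr_ft ** 0.75

def npsh_margin_ratio(npsha_ft, npshr_ft):
    return npsha_ft / npshr_ft

# Illustrative circulating-water pump figures (assumed, not from the case study).
nss = suction_specific_speed(rpm=420, flow_gpm=50000, npshr_ft=20)
margin = npsh_margin_ratio(npsha_ft=26, npshr_ft=20)
print(f"Nss = {nss:.0f}, NPSH margin ratio = {margin:.2f}")
print("Within recommended limits" if nss <= 9000 and margin >= 1.5
      else "Risk of cavitation / recirculation")
```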
Keywords: Best efficiency point (BEP), Net positive suction head NPSHA, NPSHR, Specific Speed NS, Suction Specific Speed Nss.
Experimental and Numerical Simulation of Fire in a Scaled Underground Station
Authors: Nuri Yucel, Muhammed Ilter Berberoglu, Salih Karaaslan, Nureddin Dinler
Abstract:
The objective of this study is to investigate fire behavior, experimentally and numerically, in a scaled version of an underground station, and to examine the effect of ventilation velocity on the fire. Fire experiments are simulated by burning 10 ml of isopropyl alcohol fuel in a fire pool with dimensions 5 cm x 10 cm x 4 mm at the center of a 1/100 scaled underground station model. The commercial CFD program FLUENT was used in the numerical simulations, with the k-ω SST turbulence model for the air flow and the non-premixed combustion model for the combustion. This study showed that, when the ventilation velocity is increased from 1 m/s to 3 m/s, the maximum temperature in the station is found to be lower for the ventilation velocity of 1 m/s. The reason for this experimental result lies in the relative dominance of the oxygen supply effect over the cooling effect. Without the piston effect, the maximum temperature occurs above the fuel pool; however, when the ventilation velocity is increased, the flame is tilted in the direction of ventilation and the location of the maximum temperature moves along the flow direction. The velocities measured experimentally at different locations in the station are well matched by the CFD simulation results. The prediction of the general flow pattern is in satisfactory agreement with the smoke visualization tests, and the backlayering in velocity is well predicted by the CFD simulation. However, all over the station, the CFD simulations predicted higher temperatures compared to the experimental measurements.
Keywords: Fire, underground station, flame propagation, CFD simulation, k-ω SST turbulence model, non-premixed combustion model.
Instant Location Detection of Objects Moving at High Speed in C-OTDR Monitoring Systems
Authors: Andrey V. Timofeev
Abstract:
A practical and efficient approach is suggested to estimate the instantaneous location of high-speed objects in C-OTDR monitoring systems. For super-dynamic objects (trains, cars), it is difficult to obtain an adequate estimate of the instantaneous object localization because of the estimation lag. In other words, reliable estimation of the coordinates of a monitored object requires taking some time to collect observation data by means of the C-OTDR system, and only when the required sample volume has been collected can the final decision be issued. This is contrary to the requirements of many real applications; for example, in rail traffic management systems we need data on the localization of dynamic objects in real time. The way to solve this problem is to use a set of statistically independent parameters of the C-OTDR signals to obtain the most reliable solution in real time. We call parameters of this type 'signaling parameters' (SP). Several SPs carry information about the instantaneous localization of dynamic objects for each of the C-OTDR channels. The problem is that some of these parameters are very sensitive to the dynamics of seismoacoustic emission sources but are unstable, while an SP that is very stable tends, as a rule, to be insensitive. This report describes a method for co-processing the SPs which is designed to obtain the most effective estimates of dynamic object localization in the C-OTDR monitoring system framework.
Keywords: C-OTDR-system, co-processing of signaling parameters, high-speed objects localization, multichannel monitoring systems.
Strategic Thinking to Change Behavior and Improve Sanitation in Jodipan and Kesatrian, Malang, East Java, Indonesia
Authors: Prasanti Widyasih Sarli, Prayatni Soewondo
Abstract:
Greater access to sanitation in developing countries is urgent. However, even though sanitation is crucial, the overall budget for sanitation is limited. With this budget limitation, it is important to (1) allocate resources strategically to maximize impact and (2) take into account communal agency as a potential source of sanitation improvements. The Jodipan and Kesatrian Project in Malang, Indonesia is an interesting alternative for solving the sanitation problem, in which resources were allocated strategically and communal agency was also observed. Although the project's initial goal was only to improve the visual situation in the slums, it became a new tourist destination, and the economic benefit that came with it also changed the behavior of the residents and the government towards sanitation. It also grew from including only the Kesatrian Village to expanding to the Jodipan Village in the course of less than a year. To investigate the success of this project, a descriptive model is used in this paper, and data are drawn from intensive interviews with the initiators of the project, residents affected by the project, and government officials. In this research it is argued that three points mark the success of the project: (1) the strategic initial impact due to the choice of location, (2) the influx of tourists that triggered behavioral change among residents, and (3) the direct economic impact which ensured its sustainability and growth by gaining the support and attention of government officials for more public spending in the area on slum development and sanitation improvement.
Keywords: Behavior change, sanitation, slum, strategic thinking.
A Text Mining Technique Using Association Rules Extraction
Authors: Hany Mahgoub, Dietmar Rösner, Nabil Ismail, Fawzy Torkey
Abstract:
This paper describes a text mining technique for automatically extracting association rules from collections of textual documents. The technique, called Extracting Association Rules from Text (EART), relies on keyword features to discover association rules among the keywords labeling the documents. The EART system ignores the order in which the words occur, focusing instead on the words and their statistical distributions in documents. The main contributions of the technique are that it integrates XML technology with an Information Retrieval scheme (TF-IDF) for keyword/feature selection, which automatically selects the most discriminative keywords for use in association rule generation, and uses a data mining technique for association rule discovery. It consists of three phases: a text preprocessing phase (transformation, filtration, stemming and indexing of the documents), an Association Rule Mining (ARM) phase (applying our designed algorithm for Generating Association Rules based on a Weighting scheme, GARW), and a visualization phase (visualization of results). Experiments were carried out on web-page news documents related to the outbreak of the bird flu disease. The extracted association rules contain important features and describe the informative news included in the document collection. The performance of the EART system was compared with that of a system using the Apriori algorithm in terms of execution time and the evaluation of the extracted association rules.
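A simplified sketch of the keyword-selection and rule-generation idea is given below in Python; it uses TF-IDF for feature selection and plain support/confidence rules on toy documents, standing in for (not reproducing) the weighted GARW algorithm:

```python
import itertools
from sklearn.feature_extraction.text import TfidfVectorizer

docs = ["bird flu outbreak spreads in asia",
        "health officials confirm new bird flu cases",
        "vaccine research for avian influenza accelerates"]   # toy documents

# Keyword/feature selection with TF-IDF: keep the most discriminative terms.
tfidf = TfidfVectorizer(max_features=10, stop_words="english")
weights = tfidf.fit_transform(docs).toarray()
keywords = tfidf.get_feature_names_out()

# A keyword "labels" a document if its TF-IDF weight is non-zero
# (a simplifying assumption made for this sketch).
labels = weights > 0

# Generate simple one-to-one rules A -> B with support and confidence;
# the paper's GARW algorithm additionally exploits the TF-IDF weights.
min_support, min_confidence = 0.5, 0.8
for a, b in itertools.permutations(range(len(keywords)), 2):
    support_a = labels[:, a].mean()
    support_ab = (labels[:, a] & labels[:, b]).mean()
    if support_a == 0 or support_ab < min_support:
        continue
    confidence = support_ab / support_a
    if confidence >= min_confidence:
        print(f"{keywords[a]} -> {keywords[b]}  "
              f"(support={support_ab:.2f}, confidence={confidence:.2f})")
```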
Keywords: Text mining, data mining, association rule mining
Systematic Mapping Study of Digitization and Analysis of Manufacturing Data
Authors: R. Clancy, M. Ahern, D. O’Sullivan, K. Bruton
Abstract:
The manufacturing industry is currently undergoing a digital transformation as part of the mega-trend Industry 4.0. As part of this phase of the industrial revolution, traditional manufacturing processes are being combined with digital technologies to achieve smarter and more efficient production. To successfully digitally transform a manufacturing facility, the processes must first be digitized. This is the conversion of information from an analogue format to a digital format. The objective of this study was to explore the research area of digitizing manufacturing data as part of the worldwide paradigm, Industry 4.0. The formal methodology of a systematic mapping study was utilized to capture a representative sample of the research area and assess its current state. Specific research questions were defined to assess the key benefits and limitations associated with the digitization of manufacturing data. Research papers were classified according to the type of research and type of contribution to the research area. Upon analyzing 54 papers identified in this area, it was noted that 23 of the papers originated in Germany. This is an unsurprising finding as Industry 4.0 is originally a German strategy with supporting strong policy instruments being utilized in Germany to support its implementation. It was also found that the Fraunhofer Institute for Mechatronic Systems Design, in collaboration with the University of Paderborn in Germany, was the most frequent contributing Institution of the research papers with three papers published. The literature suggested future research directions and highlighted one specific gap in the area. There exists an unresolved gap between the data science experts and the manufacturing process experts in the industry. The data analytics expertise is not useful unless the manufacturing process information is utilized. A legitimate understanding of the data is crucial to perform accurate analytics and gain true, valuable insights into the manufacturing process. There lies a gap between the manufacturing operations and the information technology/data analytics departments within enterprises, which was borne out by the results of many of the case studies reviewed as part of this work. To test the concept of this gap existing, the researcher initiated an industrial case study in which they embedded themselves between the subject matter expert of the manufacturing process and the data scientist. Of the papers resulting from the systematic mapping study, 12 of the papers contributed a framework, another 12 of the papers were based on a case study, and 11 of the papers focused on theory. However, there were only three papers that contributed a methodology. This provides further evidence for the need for an industry-focused methodology for digitizing and analyzing manufacturing data, which will be developed in future research.
Keywords: Analytics, digitization, industry 4.0, manufacturing.
Changing the Way South Africa Think about Parking Provision at Tertiary Institutions
Authors: M. C. Venter, G. Hitge, S. C. Krygsman, J. Thiart
Abstract:
For decades, South Africa has been planning transportation systems from a supply-side, rather than a demand-side, perspective. In terms of parking, this relates to the minimum parking provision that is enforced by city officials. Newer insight is starting to indicate that South Africa needs to re-think this philosophy in light of a new policy environment that desires a different outcome. Urban policies have shifted from reliance on the private car for access to employing a wide range of alternative modes. Car-dominated travel is influenced by various parameters, of which the availability and location of parking play a significant role. The question is therefore: what is the right strategy to achieve the desired transport outcomes for South Africa? This paper assesses the issue with regard to parking provision, specifically at a tertiary institution. A parking audit was conducted at the Stellenbosch campus of Stellenbosch University, monitoring occupancy at all 60 parking areas, every hour during business hours, over a five-day period. The data from this survey were compared with the prescribed number of parking bays according to the Stellenbosch Municipality zoning scheme (requiring a minimum of 0.4 bays per student). The analysis shows that, with a provision of 0.09 bays per student, the maximum total daily occupation of all the parking areas did not exceed an 80% occupation rate. It is concluded that the prevailing parking standards are not supportive of the new urban and transport policy environment and are extremely conservative from a practical demand point of view.
Keywords: Parking provision, parking requirements, travel behaviour, travel demand management.
From Electroencephalogram to Epileptic Seizures Detection by Using Artificial Neural Networks
Authors: Gaetano Zazzaro, Angelo Martone, Roberto V. Montaquila, Luigi Pavone
Abstract:
Seizures are the main factor that affects the quality of life of epileptic patients. The diagnosis of epilepsy, and hence the identification of the epileptogenic zone, is commonly made using continuous Electroencephalogram (EEG) signal monitoring. Seizure identification on EEG signals is performed manually by epileptologists, and this process is usually very long and error prone. The aim of this paper is to describe an automated method able to detect seizures in EEG signals, using the knowledge discovery in databases process and data mining methods and algorithms, which can support physicians during the seizure detection process. Our detection method is based on an Artificial Neural Network classifier, trained by applying the multilayer perceptron algorithm and by using a software application called Training Builder, which has been developed for the massive extraction of features from EEG signals. This tool covers all the data preparation steps, ranging from signal processing to data analysis techniques, including the sliding window paradigm, dimensionality reduction algorithms, information theory, and feature selection measures. The final model shows excellent performance, reaching an accuracy of over 99% during tests on data of a single patient retrieved from a publicly available EEG dataset.
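A minimal sketch of the sliding-window feature extraction and multilayer perceptron training, on synthetic single-channel data, is shown below; window length, features and labels are assumptions rather than the Training Builder configuration:

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Sketch of the pipeline (not the Training Builder tool itself): slide a window
# over a single-channel EEG trace, extract a few per-window features, and train
# a multilayer perceptron on seizure / non-seizure labels.
fs, win = 256, 512                      # 256 Hz sampling, 2 s windows (assumed)
rng = np.random.default_rng(0)
eeg = rng.normal(size=fs * 600)         # stand-in for 10 minutes of EEG
y = rng.integers(0, 2, size=len(eeg) // win)   # toy per-window labels

def window_features(x):
    # A few simple amplitude/variability descriptors per window.
    return [x.mean(), x.std(), np.abs(np.diff(x)).mean(), x.max() - x.min()]

X = np.array([window_features(eeg[i * win:(i + 1) * win])
              for i in range(len(eeg) // win)])

mlp = make_pipeline(StandardScaler(),
                    MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=500,
                                  random_state=0))
print("CV accuracy:", cross_val_score(mlp, X, y, cv=5).mean())
```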
Keywords: Artificial Neural Network, Data Mining, Electroencephalogram, Epilepsy, Feature Extraction, Seizure Detection, Signal Processing.
Selecting Negative Examples for Protein-Protein Interaction
Authors: Mohammad Shoyaib, M. Abdullah-Al-Wadud, Oksam Chae
Abstract:
Proteomics is one of the largest areas of research in bioinformatics and medical science. An ambitious goal of proteomics is to elucidate the structure, interactions and functions of all proteins within cells and organisms. Predicting Protein-Protein Interaction (PPI) is one of the crucial and decisive problems in current research. Genomic data offer a great opportunity, and at the same time many challenges, for the identification of these interactions, and many methods have already been proposed in this regard. In the case of in-silico identification, most methods require both positive and negative examples of protein interaction, and the quality of these examples is crucial for the final prediction accuracy. Positive examples are relatively easy to obtain from well-known databases, but the generation of negative examples is not a trivial task. Current PPI identification methods generate negative examples based on assumptions which are likely to affect their prediction accuracy. Hence, if more reliable negative examples are used, PPI prediction methods may achieve even higher accuracy. Focusing on this issue, a graph-based negative example generation method is proposed which is simple and more accurate than the existing approaches. An interaction graph of the protein sequences is created, and the basic assumption is that the longer the shortest path between two protein sequences in the interaction graph, the less likely they are to interact. A well-established PPI detection algorithm is employed with our negative examples, and in most cases it increases the accuracy by more than 10% in comparison with the negative pair selection method used in that work.
Keywords: Interaction graph, negative training data, protein-protein interaction, support vector machine.
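The shortest-path selection idea can be sketched in a few lines with NetworkX; the toy interaction pairs and the distance threshold are illustrative assumptions:

```python
import itertools
import networkx as nx

# Sketch of the proposed idea: build an interaction graph from known positive
# pairs, then take as negative examples those protein pairs whose shortest path
# is long (or that are disconnected), since they are least likely to interact.
positives = [("P1", "P2"), ("P2", "P3"), ("P3", "P4"), ("P5", "P6")]
G = nx.Graph(positives)

min_distance = 3        # assumed threshold on shortest-path length
negatives = []
for a, b in itertools.combinations(G.nodes, 2):
    if G.has_edge(a, b):
        continue                          # known positive pair, skip
    try:
        d = nx.shortest_path_length(G, a, b)
    except nx.NetworkXNoPath:
        d = float("inf")                  # different components: strongest candidates
    if d >= min_distance:
        negatives.append((a, b, d))

print(negatives)   # e.g. ('P1', 'P4', 3) plus all cross-component pairs
```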
Dynamic Features Selection for Heart Disease Classification
Authors: Walid MOUDANI
Abstract:
The healthcare environment is generally perceived as being information rich yet knowledge poor, and there is a lack of effective analysis tools to discover hidden relationships and trends in the data. In fact, valuable knowledge can be discovered by applying data mining techniques in the healthcare system. This study presents a proficient methodology for the extraction of significant patterns from Coronary Heart Disease warehouses for heart attack prediction, which unfortunately continues to be a leading cause of mortality in the whole world. For this purpose, we propose to dynamically enumerate the optimal subsets of the reduced features of high interest by using the rough sets technique combined with dynamic programming, and to validate the classification using a Random Forest (RF) decision tree to identify the risky heart disease cases. This work is based on a large amount of data collected from several clinical institutions, based on the medical profiles of patients. Moreover, the experts' knowledge in this field has been taken into consideration in order to define the disease and its risk factors, and to establish significant knowledge relationships among the medical factors. A computer-aided system is developed for this purpose based on a population of 525 adults. The performance of the proposed model is analyzed and evaluated against a set of benchmark techniques applied to this classification problem.
Keywords: Multi-classifier decision tree, features reduction, dynamic programming, rough sets.
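A simplified sketch of the reduce-then-validate idea is given below; a greedy subset search stands in for the rough-set/dynamic-programming enumeration, and synthetic data stand in for the clinical records:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

# Simplified sketch: greedily enumerate a small feature subset of high interest
# (standing in for the rough-set / dynamic-programming reduction), then
# validate the reduced model with a Random Forest, as in the abstract.
X, y = make_classification(n_samples=525, n_features=20, n_informative=6,
                           random_state=0)        # stand-in for the 525 patients

rf = RandomForestClassifier(n_estimators=100, random_state=0)
selected, remaining = [], list(range(X.shape[1]))
for _ in range(6):                                 # keep the 6 best features
    scores = {f: cross_val_score(rf, X[:, selected + [f]], y, cv=5).mean()
              for f in remaining}
    best = max(scores, key=scores.get)
    selected.append(best)
    remaining.remove(best)

print("Selected features:", selected)
print("Accuracy with reduced set:",
      cross_val_score(rf, X[:, selected], y, cv=5).mean())
```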
Portfolio Management for Construction Company during Covid-19 Using AHP Technique
Authors: Sareh Rajabi, Salwa Bheiry
Abstract:
In general, Covid-19 has caused extensive financial and non-financial damage to the economy and the community. The level and severity of Covid-19 as a pandemic varies across regions and across different types of projects, and the virus has recently emerged as one of the most imperative risk management factors worldwide. Therefore, as part of portfolio management assessment, it is essential to evaluate the severity of such a risk on projects and programs at the portfolio management level in order to avoid a risky portfolio. Covid-19 hit South America, parts of Europe and the Middle East particularly hard, and the pandemic affected the whole world through lockdowns, interruptions in supply chain management, health and safety requirements, transportation and commercial impacts. This research therefore proposes the Analytical Hierarchy Process (AHP) to analyze and assess a pandemic case like Covid-19 and its impacts on construction projects. The AHP technique uses four sub-criteria: health and safety, commercial risk, completion risk and contractual risk to evaluate each project and program. The result provides decision makers with information on which projects carry higher or lower risk under a Covid-19 or pandemic scenario, so that they can select the most feasible solution, based on effectively weighted criteria, for project selection within their portfolio to match the organization's strategies.
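The AHP weighting step can be sketched as follows; the pairwise judgements and project scores are placeholders, not values from the study:

```python
import numpy as np

# Sketch of the AHP step (judgement values are placeholders): pairwise
# comparison of the four sub-criteria, principal-eigenvector weights and a
# consistency check, then a weighted risk score per candidate project.
criteria = ["health & safety", "commercial", "completion", "contractual"]
A = np.array([[1,   3,   2,   4],
              [1/3, 1,   1/2, 2],
              [1/2, 2,   1,   3],
              [1/4, 1/2, 1/3, 1]], dtype=float)     # assumed judgements

eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)
weights = np.abs(eigvecs[:, k].real)
weights /= weights.sum()

# Consistency ratio (random index RI = 0.90 for a 4x4 matrix).
CI = (eigvals.real[k] - len(A)) / (len(A) - 1)
print(dict(zip(criteria, np.round(weights, 3))), f"CR = {CI / 0.90:.3f}")

# Scoring two hypothetical projects on each sub-criterion (0-1, higher = riskier).
projects = {"Project A": [0.8, 0.4, 0.6, 0.3], "Project B": [0.3, 0.6, 0.2, 0.4]}
for name, scores in projects.items():
    print(name, "weighted Covid-19 risk:", round(float(np.dot(weights, scores)), 3))
```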
Keywords: Portfolio management, risk management, COVID-19, analytical hierarchy process technique.