Search results for: Graph drawing.


66 Flood Predicting in Karkheh River Basin Using Stochastic ARIMA Model

Authors: Karim Hamidi Machekposhti, Hossein Sedghi, Abdolrasoul Telvari, Hossein Babazadeh

Abstract:

Floods have huge environmental and economic impacts, so flood prediction receives considerable attention. This study analysed the annual maximum streamflow (discharge) (AMS or AMD) of the Karkheh River in the Karkheh River Basin for flood prediction using an ARIMA model. For this purpose, we used the Box-Jenkins approach, which comprises a four-stage method: model identification, parameter estimation, diagnostic checking and forecasting (prediction). The main tools used in ARIMA modelling were the SAS and SPSS software packages. Model identification was done by visual inspection of the ACF and PACF. SAS computed the model parameters using the ML, CLS and ULS methods. The diagnostic checking tests, the AIC criterion and the RACF and RPACF graphs, were used to verify the selected model. In this study, the best ARIMA model for the Annual Maximum Discharge (AMD) time series was ARIMA(4,1,1), with an AIC value of 88.87. The RACF and RPACF showed that the residuals were independent. The model was then used to forecast AMD for 10 future years, demonstrating its ability to predict floods of the river under study in the Karkheh River Basin. Model accuracy was checked by comparing the predicted and observed series using the coefficient of determination (R2).
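For readers who want to reproduce the Box-Jenkins workflow, the sketch below uses Python's statsmodels rather than the SAS/SPSS tools the authors used; the synthetic AMD series stands in for the observed Karkheh data, and the order (4,1,1) is the one reported in the abstract.

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.arima.model import ARIMA
from statsmodels.graphics.tsaplots import plot_acf, plot_pacf

# Synthetic stand-in for the observed annual maximum discharge (AMD) series.
rng = np.random.default_rng(0)
years = pd.RangeIndex(1958, 2008, name="year")
amd = pd.Series(800 + 0.1 * rng.normal(0, 120, len(years)).cumsum()
                + rng.normal(0, 150, len(years)), index=years)

# 1) Identification: inspect ACF/PACF of the differenced series.
plot_acf(amd.diff().dropna())
plot_pacf(amd.diff().dropna())

# 2) Estimation: fit the ARIMA(4,1,1) model reported in the abstract.
fit = ARIMA(amd, order=(4, 1, 1)).fit()
print("AIC:", round(fit.aic, 2))

# 3) Diagnostic checking: residuals should behave like white noise.
print(fit.summary())

# 4) Forecasting: predict the next 10 annual maxima.
print(fit.forecast(steps=10))
```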

Keywords: Time series modelling, stochastic processes, ARIMA model, Karkheh River.

65 A Systems Approach to Gene Ranking from DNA Microarray Data of Cervical Cancer

Authors: Frank Emmert Streib, Matthias Dehmer, Jing Liu, Max Mühlhauser

Abstract:

In this paper we present a method for gene ranking from DNA microarray data. More precisely, we calculate correlation networks, which are unweighted and undirected graphs, from microarray data of cervical cancer, where each network represents a tissue of a certain tumor stage and each node in the network represents a gene. From these networks we extract one tree for each gene by a local decomposition of the correlation network. The interpretation of a tree is that the n-th level of the tree contains the n-nearest neighbor genes, measured by the Dijkstra distance, and, hence, the tree gives the local embedding of a gene within the correlation network. For the obtained trees we measure the pairwise similarity between trees rooted at the same gene from normal to cancerous tissues. This evaluates the modification of the tree topology due to progression of the tumor. Finally, we rank the obtained similarity values from all tissue comparisons and select the top ranked genes. For these genes the local neighborhood in the correlation networks changes most between normal and cancerous tissues. As a result we find that the top ranked genes are candidates suspected to be involved in tumor growth; hence, our method captures essential information from the underlying DNA microarray data of cervical cancer.
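A simplified sketch of the idea (not the authors' exact procedure): build an unweighted correlation network for each tissue stage, take each gene's k-level shortest-path neighbourhood, and rank genes by how much that neighbourhood changes. The correlation threshold and the Jaccard-style comparison are illustrative assumptions.

```python
import numpy as np
import networkx as nx

def correlation_network(expr, threshold=0.8):
    """expr: genes x samples matrix -> unweighted, undirected correlation graph."""
    corr = np.corrcoef(expr)
    g = nx.Graph()
    g.add_nodes_from(range(expr.shape[0]))
    rows, cols = np.where(np.abs(corr) >= threshold)
    g.add_edges_from((i, j) for i, j in zip(rows, cols) if i < j)
    return g

def local_neighbourhood(g, gene, levels=3):
    """Genes within 'levels' hops of 'gene' (shortest-path / Dijkstra distance)."""
    dist = nx.single_source_shortest_path_length(g, gene, cutoff=levels)
    return set(dist) - {gene}

def rank_genes(expr_normal, expr_tumor, levels=3):
    g_n = correlation_network(expr_normal)
    g_t = correlation_network(expr_tumor)
    scores = {}
    for gene in g_n.nodes:
        a = local_neighbourhood(g_n, gene, levels)
        b = local_neighbourhood(g_t, gene, levels)
        union = a | b
        scores[gene] = 1.0 - (len(a & b) / len(union) if union else 1.0)  # dissimilarity
    return sorted(scores, key=scores.get, reverse=True)  # most-changed genes first

rng = np.random.default_rng(0)
print(rank_genes(rng.normal(size=(50, 20)), rng.normal(size=(50, 20)))[:10])
```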

Keywords: Graph similarity, DNA microarray data, cancer.

64 Multi-agent On-line Monitor for the Safety of Critical Systems

Authors: Amer A. Dheedan

Abstract:

Operational safety of critical systems, such as nuclear power plants, industrial chemical processes and means of transportation, is a major concern for system engineers and operators. One means of assuring it is on-line safety monitors that deliver three safety tasks: fault detection and diagnosis, alarm annunciation and fault controlling. While current monitors deliver these tasks, both benefits and limitations of their approaches have been highlighted. Drawing on those benefits, this paper develops a distributed monitor based on semi-independent agents, i.e., a multi-agent system, and on monitoring knowledge derived from a safety assessment model of the monitored system. Agents are deployed hierarchically and provided with knowledge portions and collaboration protocols to reason about and integrate the operational conditions of the components of the monitored system. The monitor aims to address limitations arising from the large-scale, complicated behaviour and distributed nature of monitored systems and to deliver the aforementioned three monitoring tasks effectively.

Keywords: Alarm annunciation, fault controlling, fault detection and diagnosis

63 An Optimization of the New Die Design of Sheet Hydroforming by Taguchi Method

Authors: M. Hosseinzadeh, S. A. Zamani, A. Taheri

Abstract:

During the last few years, several sheet hydroforming processes have been introduced. Despite the advantages of these methods, they have some limitations. The two main processes are standard hydroforming and hydromechanical deep drawing. A new sheet hydroforming die set was proposed that has the advantages of both processes and eliminates their limitations. In this method, a polyurethane plate is used as part of the die set to control the blank holder force. This paper outlines the Taguchi optimization methodology, which is applied to optimize the effective parameters in forming cylindrical cups with the new sheet hydroforming die set. The process parameters evaluated in this research are polyurethane hardness, polyurethane thickness, forming pressure path and polyurethane hole diameter. A design of experiments based upon Taguchi's L9 orthogonal array was used, and analysis of variance (ANOVA) was employed to analyze the effect of these parameters on the forming pressure. The analysis of the results showed that the optimal combination for low forming pressure is harder polyurethane, a bigger polyurethane hole diameter and thinner polyurethane. Finally, a confirmation test was conducted with the optimal combination of parameters, showing that the Taguchi method is suitable for this optimization problem.
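The sketch below illustrates the Taguchi analysis step on an L9(3^4) array with a smaller-is-better signal-to-noise ratio; the factor levels and forming-pressure responses are placeholders, not the authors' measurements.

```python
import numpy as np

# L9(3^4) orthogonal array: each row assigns a level (0, 1, 2) to each factor.
L9 = np.array([[0, 0, 0, 0], [0, 1, 1, 1], [0, 2, 2, 2],
               [1, 0, 1, 2], [1, 1, 2, 0], [1, 2, 0, 1],
               [2, 0, 2, 1], [2, 1, 0, 2], [2, 2, 1, 0]])
factors = ["PU hardness", "PU thickness", "pressure path", "PU hole diameter"]
pressure = np.array([210, 195, 188, 205, 190, 185, 200, 187, 180.0])  # dummy responses

sn = -10.0 * np.log10(pressure ** 2)      # smaller-is-better signal-to-noise ratio

for col, name in enumerate(factors):
    means = [sn[L9[:, col] == level].mean() for level in range(3)]
    best = int(np.argmax(means))          # highest S/N <=> lowest forming pressure
    print(f"{name}: mean S/N per level {np.round(means, 2)} -> choose level {best}")
```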

Keywords: Sheet Hydroforming, Optimization, Taguchi Method

62 Gas Condensing Unit with Inner Heat Exchanger

Authors: Dagnija Blumberga, Toms Prodanuks, Ivars Veidenbergs, Andra Blumberga

Abstract:

Gas condensing units with inner tube heat exchangers represent third-generation technology and differ from second-generation heat and mass transfer units, which are filled with a passive filling material layer. The former improve heat and mass transfer by increasing the cooled contact surface between the gas and the condensate drops and film formed in the inner tube heat exchanger. This paper presents a selection of significant factors which influence the heat and mass transfer. Experimental planning is based on the research and analysis of three main independent variables: water velocity, gas velocity and spraying density. In the empirical mathematical models, the heat transfer coefficient is the dependent parameter, which depends on two independent variables: water and gas velocity. The empirical model is validated using experimental data from two independent gas condensing units in Lithuania and Russia. Experimental data are processed using a heat transfer criterion, the Kirpichov number. The results allow a graphical nomogram to be drawn for the calculation of heat and mass transfer conditions in the innovative and energy-efficient gas cooling unit.

Keywords: Gas condensing unit, filling, inner heat exchanger, package, spraying, tubes.

61 Application of “Streamlined” Material Accounting to Estimate Environmental Impact

Authors: Paul Osmond

Abstract:

This paper reports a new application of material accounting techniques to characterise and quantify material stocks and flows at the “neighbourhood” scale. The study area is the main campus of the University of New South Wales in Sydney, Australia. The system boundary is defined by the urban structural unit (USU), a typological construct devised to facilitate assessment of the metabolism of urban systems. A streamlined material flow analysis (MFA) was applied to quantify the stocks and flows of key construction materials within the campus USU over time, drawing on empirical data from a major campus development project. The results are reviewed to assess the efficacy of the method in supporting urban environmental evaluation and design practice, for example to facilitate estimation of significant impacts such as greenhouse gas emissions. It is concluded that linking a service (in this case, teaching students) enabled by a given product (university buildings) to the amount of materials used in creating that product offers a potential way to reduce the environmental impact of that service, through more efficient use of materials.

Keywords: Construction materials, material flow analysis, urban metabolism, urban structural unit.

60 An Adaptive Dimensionality Reduction Approach for Hyperspectral Imagery Semantic Interpretation

Authors: Akrem Sellami, Imed Riadh Farah, Basel Solaiman

Abstract:

With the development of HyperSpectral Imagery (HSI) technology, the spectral resolution of HSI has become denser, resulting in a large number of spectral bands, high correlation between neighboring bands, and high data redundancy. Semantic interpretation is therefore a challenging task for HSI analysis due to the high dimensionality and the high correlation of the different spectral bands. This work presents a dimensionality reduction approach that overcomes these issues and improves the semantic interpretation of HSI. First, in order to preserve the spatial information, the Tensor Locality Preserving Projection (TLPP) is applied to transform the original HSI. In the second step, knowledge is extracted based on the adjacency graph to describe the different pixels. Based on the transformation matrix obtained with TLPP, a weighted matrix is constructed to rank the different spectral bands according to their contribution scores. Thus, the relevant bands are adaptively selected based on the weighted matrix. The performance of the presented approach has been validated through several experiments, and the obtained results demonstrate its efficiency compared to various existing dimensionality reduction techniques. According to the experimental results, we conclude that this approach can adaptively select the relevant spectral bands, improving the semantic interpretation of HSI.
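As a rough illustration of graph-based band ranking, the sketch below scores each band with a Laplacian-score-style criterion on a pixel adjacency graph; this is a simplified, non-tensor stand-in for the authors' TLPP-derived weighted matrix, and the data cube is random.

```python
import numpy as np
from sklearn.neighbors import kneighbors_graph

def rank_bands(cube, k=10):
    """cube: (rows, cols, bands) hyperspectral image -> band indices, best first."""
    n_bands = cube.shape[-1]
    X = cube.reshape(-1, n_bands).astype(float)            # pixels x bands
    W = kneighbors_graph(X, k, mode="connectivity", include_self=False)
    W = 0.5 * (W + W.T)                                     # symmetrise the adjacency graph
    d = np.asarray(W.sum(axis=1)).ravel()                   # node degrees
    scores = []
    for band in range(n_bands):
        f = X[:, band] - X[:, band].mean()
        smoothness = f @ (d * f) - f @ (W @ f)              # f^T L f with L = D - W
        scores.append(smoothness / (f @ (d * f) + 1e-12))   # Laplacian-score-style criterion
    return np.argsort(scores)                               # low score = locality preserving

cube = np.random.default_rng(0).normal(size=(20, 20, 30))   # random stand-in for an HSI cube
print("top 5 bands:", rank_bands(cube)[:5])
```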

Keywords: Band selection, dimensionality reduction, feature extraction, hyperspectral imagery, semantic interpretation.

56 Incorporating Semantic Similarity Measure in Genetic Algorithm: An Approach for Searching the Gene Ontology Terms

Authors: Razib M. Othman, Safaai Deris, Rosli M. Illias, Hany T. Alashwal, Rohayanti Hassan, Farhan Mohamed

Abstract:

The most important property of the Gene Ontology is its terms. These controlled vocabularies are defined to provide consistent descriptions of gene products that are shareable and computationally accessible by humans, software agents, or other machine-readable metadata. Each term is associated with information such as a definition, synonyms, database references, amino acid sequences, and relationships to other terms. This information has made the Gene Ontology broadly applied in microarray and proteomic analysis. However, the process of searching the terms is still carried out using the traditional approach based on keyword matching. The weaknesses of this approach are that it ignores semantic relationships between terms and depends heavily on a specialist to find similar terms. Therefore, this study combines a semantic similarity measure and a genetic algorithm to perform a better retrieval process for searching semantically similar terms. The semantic similarity measure is used to compute the similarity between two terms. The genetic algorithm is then employed to perform batch retrievals and to handle the large search space of the Gene Ontology graph. Computational results are presented to show the effectiveness of the proposed algorithm.
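A toy sketch of the combination: a genetic algorithm searches for a small batch of terms whose semantic similarity to a query term is maximal. The miniature is-a hierarchy, the edge-distance similarity, and the GA settings are illustrative assumptions, not the authors' measure or the real Gene Ontology.

```python
import random
import networkx as nx

# Tiny is-a hierarchy standing in for a Gene Ontology subgraph.
EDGES = [("t2", "t1"), ("t3", "t1"), ("t4", "t2"), ("t5", "t2"),
         ("t6", "t3"), ("t7", "t3"), ("t8", "t4"), ("t9", "t6")]
G = nx.Graph(EDGES)
TERMS = sorted(G.nodes)

def similarity(a, b):
    """Edge-distance-based similarity in (0, 1]."""
    return 1.0 / (1.0 + nx.shortest_path_length(G, a, b))

def fitness(batch, query):
    return sum(similarity(t, query) for t in batch)

def genetic_search(query, batch_size=3, pop_size=20, generations=40):
    pop = [random.sample(TERMS, batch_size) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda ind: fitness(ind, query), reverse=True)
        parents = pop[: pop_size // 2]                       # elitist selection
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            child = list(dict.fromkeys(a[: batch_size // 2] + b))[:batch_size]  # crossover
            if random.random() < 0.2:                        # mutation
                child[random.randrange(batch_size)] = random.choice(TERMS)
            children.append(child)
        pop = parents + children
    return max(pop, key=lambda ind: fitness(ind, query))

random.seed(1)
print("terms retrieved for query t8:", genetic_search("t8"))
```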

Keywords: Gene Ontology, Semantic similarity measure, Genetic algorithm, Ontology search

58 Numerical Solution of Manning's Equation in Rectangular Channels

Authors: Abdulrahman Abdulrahman

Abstract:

When the Manning equation is used, a unique value of normal depth in uniform flow exists for a given channel geometry, discharge, roughness, and slope. Depending on the value of the normal depth relative to the critical depth, the flow type (supercritical or subcritical) for a given set of channel conditions is determined, whether or not the flow is uniform. There is no general closed-form solution of Manning's equation for the flow depth at a given flow rate, because the cross-sectional area and the hydraulic radius form a complicated function of depth. The familiar approaches to finding the normal depth of a rectangular channel involve 1) a trial-and-error solution; 2) constructing a non-dimensional graph; or 3) preparing tables of non-dimensional parameters. In this paper, the author derives a semi-analytical solution to Manning's equation for the flow depth at a given flow rate in a rectangular open channel. The solution was derived by expressing Manning's equation in non-dimensional form and then expanding this form using a Maclaurin series. To simplify the solution, terms containing powers up to 4 were retained. The resulting equation is a quartic in standard form, whose solution was obtained by resolving it into two quadratic factors. The proposed solution of Manning's equation is valid over a large range of parameters, and its maximum error is within -1.586%.
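As a quick numerical cross-check of the normal-depth problem, the sketch below solves Manning's equation for a rectangular channel by Newton iteration rather than by the quartic factorisation derived in the paper; the channel data are illustrative.

```python
def manning_normal_depth(Q, b, n, S, y0=1.0, tol=1e-10):
    """Solve Q = (1/n) * A * R**(2/3) * sqrt(S) with A = b*y and R = A / (b + 2*y)."""
    target = n * Q / S ** 0.5
    def f(y):
        A, P = b * y, b + 2.0 * y
        return A ** (5.0 / 3.0) / P ** (2.0 / 3.0) - target
    y = y0
    for _ in range(100):
        h = 1e-6 * max(y, 1.0)
        step = f(y) / ((f(y + h) - f(y - h)) / (2.0 * h))   # Newton with numeric slope
        y = max(y - step, 1e-6)                             # keep the depth positive
        if abs(step) < tol:
            break
    return y

# Example: Q = 20 m^3/s, width b = 5 m, n = 0.013, slope S = 0.001.
print(f"normal depth ~ {manning_normal_depth(20.0, 5.0, 0.013, 0.001):.4f} m")
```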

Keywords: Channel design, civil engineering, hydraulic engineering, open channel flow, Manning's equation, normal depth, uniform flow.

57 On the Difference between Cultural and Religious Identities: A Case Study of Christianity and Islam in Some African and Asian Countries

Authors: Mputu Ngandu Simon

Abstract:

Culture and religion are two of the most significant markers of an individual or group's identity. Religion finds its expression in a given culture and culture is the costume in which a religion is dressed. In other words, there is a crucial relationship between religion and culture which should not be ignored. On the one hand, religion influences the way in which a culture is consumed. A person's consumption of a certain cultural practice is influenced by his/her religious identity. On the other hand, the cultural identity plays an important role on how a religion is practiced by its adherents. Some cultural practices become more credible when interpreted in religious terms just as religious doctrines and dogmas need cultural interpretation to be understood by a given people, in a given context. This relationship goes so deep that sometimes the boundaries between culture and religion become blurred and people end up mixing religion and culture. In some cases, the two are considered to be one and the same thing. However, despite this apparent sameness, religion and culture are two distinct aspects of identity and they should always be considered as such. One results from knowledge while the other has beliefs as its foundation. This paper explores the difference between cultural and religious identities by drawing from existing literature on this topic as a whole, before applying that knowledge to two specific case studies: Christianity among San people of Botswana, Namibia, Angola, Zambia, Lesotho, Zimbabwe, and South Africa, and Islam in Somalia, Kenya, Ethiopia, Djibouti and Iran.

Keywords: Belief, identity, knowledge, culture, religion.

56 Thermal Analysis of Extrusion Process in Plastic Making

Authors: S. K. Fasogbon, T. M. Oladosu, O. S. Osasuyi

Abstract:

Plastic extrusion has been an important plastic production process since the 19th century. In plastic extrusion, wide variation in temperature along the extrudate usually leads to scrap formation on the side of finished products. To avoid this, the temperature distribution along the extrudate during extrusion must be well understood. This work developed an analytical model that predicts the temperature distribution over the billet (the polymer melt) along the extrudate during the extrusion process, with the limitation that the polymer in question does not cover biopolymers such as DNA. The model was solved and simulated. Results for two different plastic materials (polyvinylchloride and polycarbonate) were generated using a self-developed MATLAB code and commercial software (ANSYS), and the two were compared. It was observed that heat is transferred from the billet at the die entry down to the die exit. The plots indicate a natural exponential decay of temperature with time and along the die length: with a surrounding temperature of 298 K, the temperature was 413 K for polyvinylchloride and 474 K for polycarbonate at the entry, and 299.3 K and 328.8 K, respectively, at the exit. The extrusion model was validated by comparing the MATLAB simulation with the commercial ANSYS simulation, and the results agree favourably. This work concludes that the developed mathematical model and the self-generated MATLAB code are reliable tools for predicting the temperature distribution along the extrudate in the plastic extrusion process.
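A small illustrative sketch of the exponential decay described above: the decay constant is back-fitted from the reported entry and exit temperatures and an assumed die length, so it is not the authors' analytical model.

```python
import math

def temperature_profile(T_entry, T_exit, T_amb, die_length, x):
    """T(x) = T_amb + (T_entry - T_amb) * exp(-k * x), with k fitted so T(L) = T_exit."""
    k = -math.log((T_exit - T_amb) / (T_entry - T_amb)) / die_length
    return T_amb + (T_entry - T_amb) * math.exp(-k * x)

L = 0.5  # assumed die length in metres (illustrative)
for name, T_in, T_out in [("polyvinylchloride", 413.0, 299.3), ("polycarbonate", 474.0, 328.8)]:
    mid = temperature_profile(T_in, T_out, 298.0, L, L / 2.0)
    print(f"{name}: mid-die temperature ~ {mid:.1f} K")
```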

Keywords: ANSYS, extrusion process, MATLAB, plastic making, thermal analysis.

55 Aggregation Scheduling Algorithms in Wireless Sensor Networks

Authors: Min Kyung An

Abstract:

In Wireless Sensor Networks, which consist of tiny wireless sensor nodes with limited battery power, one of the most fundamental applications is data aggregation, which collects nearby environmental readings and aggregates the data to a designated destination called a sink node. Important issues concerning data aggregation are time efficiency and energy consumption, due to the nodes' limited energy, and therefore the related problem, named Minimum Latency Aggregation Scheduling (MLAS), has been the focus of many researchers. Its objective is to compute a minimum latency schedule, that is, a schedule with the minimum number of timeslots, such that the sink node can receive the aggregated data from all the other nodes without any collision or interference. For this problem, two interference models, the graph model and the more realistic physical interference model known as Signal-to-Interference-plus-Noise Ratio (SINR), have been adopted with different power models (uniform power, and non-uniform power with or without power control) and different antenna models (omni-directional and directional antennas). In this survey article, as the problem has been proven NP-hard, we present and compare several state-of-the-art approximation algorithms in the various models, using latency as the performance measure.
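To make the scheduling problem concrete, the sketch below runs a simple greedy heuristic under the graph (protocol) interference model: nodes transmit to their BFS parent, leaves first, and each timeslot packs as many non-conflicting links as possible. It is an illustration only, not one of the approximation algorithms surveyed.

```python
import networkx as nx

def greedy_aggregation_schedule(G, sink):
    tree = nx.bfs_tree(G, sink)                       # edges point parent -> child
    parent = {c: p for p, c in tree.edges()}
    pending = set(G.nodes) - {sink}
    children_left = {v: sum(1 for c in parent if parent[c] == v) for v in G}
    schedule = []
    while pending:
        slot, receivers = [], set()
        for u in sorted(pending):
            if children_left[u]:                      # must wait for its own children
                continue
            p = parent[u]
            conflict = p in receivers or any(
                G.has_edge(u, r) or G.has_edge(s, p) for s, r in slot)
            if not conflict:                          # graph-model interference check
                slot.append((u, p))
                receivers.add(p)
        for u, p in slot:
            pending.discard(u)
            children_left[p] -= 1
        schedule.append(slot)
    return schedule

G = nx.random_geometric_graph(30, 0.35, seed=2)
G = G.subgraph(max(nx.connected_components(G), key=len)).copy()
sched = greedy_aggregation_schedule(G, sink=next(iter(G.nodes)))
print(f"{G.number_of_nodes()} nodes aggregated in {len(sched)} timeslots")
```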

Keywords: Data aggregation, convergecast, gathering, approximation, interference, omni-directional, directional.

54 Characterization of Ajebo Kaolinite Clay for Production of Natural Pozzolan

Authors: Gbenga M. Ayininuola, Olasunkanmi A. Adekitan

Abstract:

Calcined kaolinite clay (CKC) is a pozzolanic material that is currently drawing research attention. This work investigates the conditions for the best performance of a CKC from a kaolinite clay source in Ajebo, Abeokuta (southwest Nigeria) known for its commercial availability. Samples from this source were subjected to X-ray diffractometry (XRD) and differential scanning calorimetry (DSC). XRD shows that kaolinite, the mineral responsible for the pozzolanic behavior of CKC, is the main mineral in the clay source. DSC indicates that the transformation from the clay to CKC occurs between 550 and 750 °C. Using this temperature range, clay samples were milled and different CKC samples were produced in an electric muffle furnace at 550, 600, 650, 700, 750 and 800 °C for 1 hour each; this was repeated for 2 hours. The degree of de-hydroxylation (dtg) and the strength activity index (SAI) were determined for each CKC sample. The dtg and SAI tests were repeated two more times for each sample and averages were taken. Results showed that peak dtg occurred for the 750 °C/1 hour calcining combination (94.27%), whereas marginal differences were recorded at some lower temperatures (90.97% for 650 °C/2 hours; 91.05% for 700 °C/1 hour; and 92.77% for 700 °C/2 hours). Optimum SAI was recorded at 700 °C for 1 hour (99.05%). Rating SAI as a better parameter than dtg, the 700 °C/1 hour combination was adopted as the best calcining condition. The paper recommends this clay source for pozzolan production using the calcining conditions established in this work.

Keywords: Calcined kaolinite clay, calcination, optimum-calcining conditions, pozzolanity.

53 Design of an Ensemble Learning Behavior Anomaly Detection Framework

Authors: Abdoulaye Diop, Nahid Emad, Thierry Winter, Mohamed Hilia

Abstract:

Data asset protection is a crucial issue in the cybersecurity field. Companies use logical access control tools to safeguard their information assets and protect them against external threats, but they lack solutions to counter insider threats. Nowadays, insider threats are the most significant concern of security analysts. They are mainly individuals with legitimate access to companies' information systems who use their rights with malicious intent. In several fields, behavior anomaly detection is the method used by cyber specialists to counter the threats of malicious user activities effectively. In this paper, we present a step toward the construction of a user and entity behavior analysis framework by proposing a behavior anomaly detection model. This model combines machine learning classification techniques and graph-based methods, relying on linear algebra and parallel computing techniques. We show the utility of an ensemble learning approach in this context. We present test results of several detection methods on a representative access control dataset. Some of the explored classifiers give results with up to 99% accuracy.
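A minimal ensemble-learning sketch in scikit-learn: three base classifiers combined by soft voting on a synthetic, imbalanced "access log" dataset. The features and classifier choices are illustrative and do not reproduce the authors' framework or dataset.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

# Synthetic user-behaviour features; label 1 = anomalous (insider-like) activity.
X, y = make_classification(n_samples=5000, n_features=20, weights=[0.95, 0.05],
                           random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

ensemble = VotingClassifier(
    estimators=[("lr", LogisticRegression(max_iter=1000)),
                ("rf", RandomForestClassifier(n_estimators=200, random_state=0)),
                ("knn", KNeighborsClassifier())],
    voting="soft")                      # average the predicted probabilities
ensemble.fit(X_tr, y_tr)
print(classification_report(y_te, ensemble.predict(X_te), digits=3))
```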

Keywords: Cybersecurity, data protection, access control, insider threat, user behavior analysis, ensemble learning, high performance computing.

52 Seamless Flow of Voluminous Data in High Speed Network without Congestion Using Feedback Mechanism

Authors: T. Sheela, Dr. J. Raja

Abstract:

Continuously growing needs of Internet applications that transmit massive amounts of data have led to the emergence of high speed networks. Data transfer must take place without congestion, and hence feedback parameters must be sent from the receiver to the sender so as to restrict the sending rate. Although TCP tries to avoid congestion by restricting the sending rate and window size, it never informs the sender of the capacity of data that can be sent, and it halves the window size at the time of congestion, resulting in decreased throughput, low utilization of the bandwidth and maximum delay. In this paper, the XCP protocol is used, and feedback parameters are calculated based on arrival rate, service rate, traffic rate and queue size; the receiver then informs the sender about the throughput, the capacity of data that can be sent, and the window size adjustment. This avoids drastic decreases in window size and allows a better increase in sending rate, so that data flow continuously without congestion. As a result, there is a maximum increase in throughput, high utilization of the bandwidth and minimum delay. The results of the proposed work are presented as graphs of throughput, delay and window size. Thus, in this paper, the XCP protocol is illustrated and the various parameters are thoroughly analyzed and presented.
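For intuition, the sketch below computes an XCP-style aggregate feedback at the router, in the spirit of the computation described above; the constants are those of the original XCP proposal (Katabi et al.) and the traffic figures are invented, so the paper's exact parameter set may differ.

```python
def xcp_aggregate_feedback(capacity_bps, input_rate_bps, queue_bytes,
                           avg_rtt_s, alpha=0.4, beta=0.226):
    """Bytes of extra traffic the router invites (positive) or sheds (negative)
    over the next control interval: alpha * rtt * spare - beta * queue."""
    spare_bytes_per_s = (capacity_bps - input_rate_bps) / 8.0   # unused bandwidth
    return alpha * avg_rtt_s * spare_bytes_per_s - beta * queue_bytes

# Under-utilised link: senders are told to speed up.
print(xcp_aggregate_feedback(100e6, 60e6, queue_bytes=0, avg_rtt_s=0.08))
# Congested link with a standing queue: senders are told to slow down.
print(xcp_aggregate_feedback(100e6, 99e6, queue_bytes=500_000, avg_rtt_s=0.08))
```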

Keywords: Bandwidth-Delay Product, Congestion Control, Congestion Window, TCP/IP

51 Investigating the Regulation System of the Synchronous Motor Excitation Mode Serving as a Reactive Power Source

Authors: Baghdasaryan Marinka, Ulikyan Azatuhi

Abstract:

Efficient use of the compensation abilities of the synchronous motors in electrical drives used in production processes can substantially improve the technical and economic indices of the process. Reducing the flows of reactive electrical energy through the compensation of reactive power allows the power load losses in the electrical networks to be significantly reduced. Analysis of the scientific works devoted to regulating the excitation of synchronous motors substantiates the need for a comprehensive investigation and estimation of the excitation mode. By means of the obtained transfer functions, the transient processes of the excitation mode were studied in the Simulink environment of the MATLAB software package. The obtained Nyquist plot and transient response justified the need to develop a Proportional-Integral-Derivative (PID) regulator. The transient processes of the system with the PID regulator were investigated, and the amplitude-phase characteristics of the system were estimated. Analysis of the obtained results shows that the regulation indices of the developed system are improved. The developed system can be successfully applied to regulate the excitation voltage of synchronous motors of different power ratings operating under a changing load, ensuring a value of the power coefficient close to 1.
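A minimal discrete PID loop on a first-order stand-in for the excitation circuit is sketched below; the plant constants and PID gains are placeholders, not the tuned values obtained in the MATLAB/Simulink investigation.

```python
def simulate_pid(kp=2.0, ki=4.0, kd=0.05, dt=0.001, t_end=2.0,
                 plant_gain=1.5, plant_tau=0.2, setpoint=1.0):
    """Discrete PID driving a first-order plant dy/dt = (K*u - y) / tau."""
    y, integral, prev_err, history = 0.0, 0.0, setpoint, []
    for _ in range(int(t_end / dt)):
        err = setpoint - y
        integral += err * dt
        derivative = (err - prev_err) / dt
        u = kp * err + ki * integral + kd * derivative   # PID control law
        prev_err = err
        y += dt * (plant_gain * u - y) / plant_tau       # integrate the plant one step
        history.append(y)
    return history

response = simulate_pid()
print(f"excitation output after 2 s: {response[-1]:.3f} (setpoint 1.0)")
```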

Keywords: Transient process, synchronous motor, excitation mode, regulator, reactive power.

50 ParkedGuard: An Efficient and Accurate Parked Domain Detection System Using Graphical Locality Analysis and Coarse-To-Fine Strategy

Authors: Chia-Min Lai, Wan-Ching Lin, Hahn-Ming Lee, Ching-Hao Mao

Abstract:

As the world wide Internet keeps developing non-stop, making a profit by lending registered domain names has emerged as a new business in recent years. Unfortunately, the larger the market for domain lending services becomes, the greater the risk that malicious behaviors or malware hide behind parked domains. Moreover, previous work on differentiating parked domains suffers from two main defects: 1) too much data-collection effort and CPU latency needed for feature engineering, and 2) ineffectiveness when detecting parked domains containing external links that are usually abused by hackers, e.g., for drive-by download attacks. Aiming to alleviate these defects without sacrificing practical usability, this paper proposes ParkedGuard, an efficient and accurate parked domain detector. Several scripting behavioral features were analyzed, and those with particular statistical significance are adopted in ParkedGuard to make feature engineering much more cost-efficient. On the other hand, finding memberships between external links and parked domains was modeled as a graph mining problem, and a coarse-to-fine strategy was designed by leveraging graphical locality, such that ParkedGuard outperforms the state-of-the-art in terms of both recall and precision.

Keywords: Coarse-to-fine strategy, domain parking service, graphical locality analysis, parked domain.

49 Numerical Simulation of Unsteady MHD Flow and Heat Transfer of a Second Grade Fluid with Viscous Dissipation and Joule Heating Using Meshfree Approach

Authors: R. Bhargava, Sonam Singh

Abstract:

In the present study, a numerical analysis is carried out to investigate unsteady MHD (magneto-hydrodynamic) flow and heat transfer of a non-Newtonian second grade viscoelastic fluid over an oscillatory stretching sheet. The flow is induced by an infinite elastic sheet which is stretched oscillatorily (back and forth) in its own plane. The effects of viscous dissipation and Joule heating are taken into account. The non-linear differential equations governing the problem are transformed into a system of non-dimensional differential equations using similarity transformations. The element-free Galerkin method (EFGM), a newly developed meshfree numerical technique, is employed to solve the coupled non-linear differential equations. Results illustrating the effect of various parameters, such as the viscoelastic parameter, Hartmann number, relative frequency amplitude of the oscillatory sheet to the stretching rate, and Eckert number, on the velocity and temperature fields are reported in terms of graphs and tables. The present model finds application in polymer extrusion, drawing of plastic films and wires, and glass, fiber and paper production.

Keywords: EFGM, MHD, Oscillatory stretching sheet, Unsteady, Viscoelastic

48 Metallurgical Analysis of Surface Defect in Telescopic Front Fork

Authors: Souvik Das, Janak Lal, Arthita Dey, Goutam Mukhopadhyay, Sandip Bhattacharya

Abstract:

The Telescopic Front Fork (TFF) used in two-wheelers, mainly motorcycles, is made from high strength steel and is manufactured by a high frequency induction welding process, wherein hot rolled and pickled coils are used as input raw material for rolling of hollow tubes, followed by heat treatment, surface treatment, cold drawing, tempering, etc. The final application demands superior quality TFF tubes with respect to surface finish and dimensional tolerances. This paper presents the investigation of two different types of fork failure during operation. The investigation consists of visual inspection, chemical analysis, characterization of microstructure, and energy dispersive spectroscopy. Two failed tube samples were comprehensively investigated. For Sample #1, the results revealed a pre-existing crack, known as a hook crack, which led to the cracking of the tube; metallographic examination showed that during field operation the pre-existing hook crack surfaced, leading to a crack in the pipe. For Sample #2, the presence of internal oxidation with decarburised grains inside the material indicates that the defect originated at the slab stage.

Keywords: Telescopic front fork, induction welding, hook crack, internal oxidation.

47 Interest Rate Fluctuation Effect on Commercial Bank’s Fixed Fund Deposit in Nigeria

Authors: Okolo Chimaobi Valentine

Abstract:

Commercial banks in Nigeria adopted many strategies to attract fresh deposits, including the use of high deposit rates. However, the pricing of banking services moved in favor of the banks at the expense of customers, who consequently sought other investment alternatives rather than saving their money in the bank. Both deposit and lending rates were greatly influenced by the Central Bank of Nigeria (CBN) decision on the interest rate. Therefore, commercial banks' efforts to attract deposits by manipulating their rates were greatly limited; otherwise, the banks would pay out more than they earned. The study examines the relationship between interest rates and the fixed fund deposits of commercial banks, and how the policy-controlled interest rate affected commercial banks' fixed fund deposits. The researcher employed the ordinary least squares technique, using multiple linear regression, unrestricted vector auto-regression, a correlation matrix test, Granger causality and impulse response graphs in the analysis. Commercial banks' interest rates affected their fixed fund deposits significantly, while the policy-controlled interest rate did not transmit significantly through the commercial banks' interest rates to affect fixed fund deposits. While commercial banks seek creative ways to expand their fixed fund deposits, policy authorities in Nigeria should better coordinate interest rate fluctuations and induce competition in the entire financial sector.

Keywords: Commercial bank, fixed fund deposit, fluctuation effects, interest rate.

46 A Green Design for Assembly Model for Integrated Design Evaluation and Assembly and Disassembly Sequence Planning

Authors: Yuan-Jye Tseng, Fang-Yu Yu, Feng-Yi Huang

Abstract:

A green design for assembly model is presented to integrate design evaluation and assembly and disassembly sequence planning by evaluating the three activities in one integrated model. For an assembled product, an assembly sequence planning model is required for assembling the product at the start of the product life cycle. A disassembly sequence planning model is needed for disassembling the product at the end. In a green product life cycle, it is important to plan how a product can be disassembled, reused, or recycled, before the product is actually assembled and produced. Given a product requirement, there may be several design alternative cases to design the same product. In the different design cases, the assembly and disassembly sequences for producing the product can be different. In this research, a new model is presented to concurrently evaluate the design and plan the assembly and disassembly sequences. First, the components are represented by using graph based models. Next, a particle swarm optimization (PSO) method with a new encoding scheme is developed. In the new PSO encoding scheme, a particle is represented by a position matrix defining an assembly sequence and a disassembly sequence. The assembly and disassembly sequences can be simultaneously planned with an objective of minimizing the total of assembly costs and disassembly costs. The test results show that the presented method is feasible and efficient for solving the integrated design evaluation and assembly and disassembly sequence planning problem. An example product is implemented and illustrated in this paper.
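A compact PSO sketch for sequence planning is given below. It uses a random-key encoding (continuous particle positions decoded into a permutation) rather than the paper's position-matrix encoding, and a toy transition-cost matrix in place of real assembly and disassembly costs.

```python
import numpy as np

rng = np.random.default_rng(0)
n_parts, n_particles, iters = 8, 30, 200
cost_matrix = rng.uniform(1, 10, size=(n_parts, n_parts))   # toy transition costs

def sequence_cost(keys):
    order = np.argsort(keys)                                 # decode keys -> permutation
    assembly = sum(cost_matrix[a, b] for a, b in zip(order, order[1:]))
    disassembly = sum(cost_matrix[b, a] for a, b in zip(order, order[1:]))
    return assembly + disassembly                            # objective: total of both costs

pos = rng.uniform(size=(n_particles, n_parts))
vel = np.zeros_like(pos)
pbest, pbest_val = pos.copy(), np.array([sequence_cost(p) for p in pos])
gbest = pbest[pbest_val.argmin()].copy()

for _ in range(iters):
    r1, r2 = rng.uniform(size=pos.shape), rng.uniform(size=pos.shape)
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos += vel
    vals = np.array([sequence_cost(p) for p in pos])
    improved = vals < pbest_val
    pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
    gbest = pbest[pbest_val.argmin()].copy()

print("best sequence:", np.argsort(gbest), "cost:", round(pbest_val.min(), 2))
```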

Keywords: Green design, assembly and disassembly sequence planning, green design for assembly, particle swarm optimization.

45 Analysis of the Learners’ Responses of the Adjusted Rorschach Comprehensive System: Critical Psychological Perspective

Authors: Mokgadi Moletsane-Kekae, Robert Kananga Mukuna

Abstract:

The study focused on the analysis of learners' responses to the Adjusted Rorschach Comprehensive System (ARCS). The objective of this study is to analyse the participants' response rate on the ARCS from a critical psychology perspective. The use of critical psychology theory in this study was crucial because it responds to the current inadequacy of Western theory and practice in the field of psychology. The study adopted a qualitative approach and a case study design, grounded in the interpretivist paradigm. The sample comprised six learners (three boys and three girls, aged 14 years) from a historically disadvantaged school in the Western Cape, South Africa. The ARCS administration procedure, biographical information, semi-structured interviews, and observation were used to collect data. Data were analysed using a thematic framework. The study found that the factors that increased response rates during the administration of the ARCS were language, seating arrangement, drawing, viewing, and describing. The study recommends that psychological test designers take into consideration the philosophy or worldviews of the local people for whom a test is designed, to minimize low response rates.

Keywords: Adjusted Rorschach comprehensive system, critical psychology, learners, responses.

44 The Wider Benefits of Negotiations: Austrian Perspective on Educational Leadership as a ‘Power Game’ for Trade Unions

Authors: Rudolf Egger

Abstract:

This paper explores the relationships between the basic learning processes of leading trade union workers and their methods for coping with the changes in the life-courses of societies today. It discusses the fragile discourse on lifelong learning in trade unions and the "production of self-techniques" used to come to grips with the new economic forms. On the basis of an empirical project, different socialization processes of leading trade union workers are analysed to discover the consequences of the lifelong learning discourse. The results show which competences they need to develop for the "wider benefits of negotiations". The main challenge remains to make visible how deeply intertwined trade union learning and education are with development in an ongoing, dynamic economic process, rather than a quick-fix injection of skills and information. A complex relationship exists between the three 'partners': work, learning and the forming of society. The author suggests that contemporary trade unions could be trendsetters who set their own learning agendas by drawing less on formal education and more on informal and non-formal learning contexts, in parallel with a growing political and scientific consciousness of the need to arrive at new educational/vocational policies and practices.

Keywords: Lifelong learning, Trade unions, Non-formal learning, Educational/vocational policies.

43 Application of Cite Space Software in Visual Analysis of Land Use Coupling Research Progress

Authors: Jing Zhou, Weiqun Su, Naying Luo, Min Shang, Li Wu

Abstract:

The coupling of land use systems in geographical research is mainly the coupling of pattern and process, which is essentially human-land coupling, an important part of the research on and discussion of the human-land relationship. Based on the Web of Science database, paper titles, authors, keywords, and references from 1997-2020 related to land use coupling were used as data sources to explore the research progress of land use coupling. The Cite Space bibliometric tool was used for co-occurrence analysis of issuing country, issuing institution, co-cited author, disciplinary institution, and keywords. The results are as follows: (1) From 1997 to 2020, the United States, China, and Germany rank at the top, with more than 250 published papers. Although China ranks second in the number of published papers in the foreign literature, it has lower centrality and less influence. (2) The top 10 institutions (universities) by number of published papers (more than 300 articles) are mainly from the United States and China, and the University of Chinese Academy of Sciences has the highest output of papers. At the same time, multi-institutional cooperation has increased in the field of land use coupling research. (3) From 1997 to 2020, land sensitivity research and the impact of climate change on land use patterns were the main directions of land use coupling research. In the past five years, however, scholars have mainly focused on methods for studying land use coupling and on the coupling relationship between ecological and environmental factors and land use.

Keywords: Land use coupling, cite space, knowledge graph, visual analysis, research progress.

42 Identifying Dry Years by Using the Dependable Rainfall Index and Its Effects on the Olive Crop in Roudbar, Gilan, South Western of Caspian Sea

Authors: Bahman Ramezani Gourabi

Abstract:

Drought is one of the most important natural disasters; it can occur in regions with completely different climates and, in addition to causing deaths, it results in many economic losses and social consequences. For this reason, it is important to study the effects and losses caused by drought, which include limitation or shortage of agricultural and drinking water resources, decreased rainfall and increased evapotranspiration, limited plant growth and decreased agricultural production (especially dry-farming), lower levels of surface and ground waters, increased migration, etc. In this study, data from the statistical period 1988-2007 for six stations in Roudbar town were used for statistical analysis and for identifying humid and dry years. The dependable rainfall index (DRI) was the main method used in this research. Results showed that during the said statistical period, and in particular during the years 1996-1998 and 2007, more than half of the stations faced drought. By drawing diagrams and comparing the available data with those of dry and humid years, it was found that drought affected agricultural products such as olive: during the 1996 drought, the olive groves of Roudbar suffered the greatest damage, with about 70% of the crop lost.
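A short sketch of how dry years can be flagged with a dependable rainfall threshold: here the commonly used 75% dependable rainfall (the amount equalled or exceeded in about 75% of years) is taken as the cut-off, and the rainfall series is invented; the paper's exact DRI formulation may differ.

```python
import numpy as np

rng = np.random.default_rng(42)
years = np.arange(1988, 2008)
annual_rain = rng.normal(600, 150, size=years.size).clip(min=100)  # mm, dummy data

dri_75 = np.percentile(annual_rain, 25)      # value exceeded in ~75% of years
dry_years = years[annual_rain < dri_75]
print(f"75% dependable rainfall: {dri_75:.0f} mm")
print("dry years:", dry_years.tolist())
```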

Keywords: Dependable rainfall, drought, annual rainfall, Roudbar, olive, Gilan.

41 Enlightening Malaysia's Energy Policies and Strategies for Modernization and Sustainable Development

Authors: Hussain Ali Bekhet, Nor Salwati Othman

Abstract:

Malaysia has achieved remarkable economic growth since 1957, moving toward modernization from a predominantly agricultural base to manufacturing and, now, modern services. The development policies (i.e., the New Economic Policy [1970–1990], the National Development Policy [1990–2000], and Vision 2020) have been recognized as the most important drivers of this transformation. The transformation of the economic structure has been accompanied by rapid gross domestic product (GDP) growth, urbanization growth, and greater demand for energy, mainly from fossil fuel resources, which in turn increases CO2 emissions. Malaysia faces a great challenge in bringing down CO2 emissions without compromising economic development. Solid policies and a strategy to reduce dependence on fossil fuel resources and reduce CO2 emissions are needed in order to achieve sustainable development. This study provides an overview of the Malaysian economic, energy, and environmental situation, and explores the existing policies and strategies related to energy and the environment. Its significance is to give a clear picture of what types of policies and strategies Malaysia has in hand. In the future, this examination should be extended by drawing a comparison with other developed countries and highlighting several options for sustainable development.

Keywords: Energy policies, energy efficiency, renewable energy, green building, Malaysia, sustainable development.

40 Applying the Extreme-Based Teaching Model in Post-Secondary Online Classroom Setting: A Field Experiment

Authors: Leon Pan

Abstract:

The first programming course within post-secondary education has long been recognized as a challenging endeavor for both educators and students alike. Historically, these courses have exhibited high failure rates and a notable number of dropouts. Instructors often lament students' lack of effort on their coursework, and students often express frustration that the teaching methods employed are not effective. Drawing inspiration from the successful principles of Extreme Programming, this study introduces an approach—the Extremes-based teaching model—aimed at enhancing the teaching of introductory programming courses. To empirically determine the effectiveness of the model, a comparison was made between a section taught using the extreme-based model and another utilizing traditional teaching methods. Notably, the extreme-based teaching class required students to work collaboratively on projects, while also demanding continuous assessment and performance enhancement within groups. This paper details the application of the extreme-based model within the post-secondary online classroom context and presents the compelling results that emphasize its effectiveness in advancing the teaching and learning experiences. The extreme-based model led to a significant increase of 13.46 points in the weighted total average and a commendable 10% reduction in the failure rate.

Keywords: Extreme-based teaching model, innovative pedagogical methods, project-based learning, team-based learning.

39 Woman, House, Identity: The Study of the Role of House in Constructing the Contemporary Dong Minority Woman’s Identity

Authors: Sze Wai Veera Fung, Peter W. Ferretto

Abstract:

Similar to most ethnic groups in China, men of the Dong minority hold the primary position in policymaking, moral authority, social values, and the control of the property. As the spatial embodiment of the patriarchal ideals, the house plays a significant role in producing and reproducing the distinctive gender status within the Dong society. Nevertheless, Dong women do not see their home as a cage of confinement, nor do they see themselves as a victim of oppression. For these women with reference to their productive identity, a house is a dwelling place with manifold meanings, including a proof of identity, an economic instrument, and a public resource operating on the community level. This paper examines the role of the house as a central site for identity construction and maintenance for the southern dialect Dong minority women in Hunan, China. Drawing on recent interviews with the Dong women, this study argues that women as productive individuals have a strong influence on the form of their house and the immediate environment, regardless of the male-dominated social construct of the Dong society. The aim of this study is not to produce a definitive relationship between women, house, and identity. Rather, it seeks to offer an alternative lens into the complexity and diversity of gender dynamics operating in and beyond the boundary of the house in the context of contemporary rural China.

Keywords: Conception of home, Dong minority, house, rural China, woman’s identity.

38 Supply Chain Resilience Triangle: The Study and Development of a Framework

Authors: M. Bevilacqua, F. E. Ciarapica, G. Marcucci

Abstract:

Supply Chain Resilience has been broadly studied during the last decade, with research focusing on many aspects of supply chain performance. Consequently, different definitions of Supply Chain Resilience have been developed by the research community, drawing inspiration also from other fields of study such as ecology, sociology, psychology, and economics. The definitions developed so far in the extant literature are therefore very heterogeneous, and many authors have pointed out a lack of consensus in this field of analysis. The aim of this research is to find common points between these definitions through the development of a framework of study: the Resilience Triangle. The Resilience Triangle is a tool developed in the field of civil engineering to model the loss of resilience of a given structure during and after a disruption such as an earthquake. The Resilience Triangle is a simple yet powerful tool: in our opinion, it can summarize all the features that authors have captured in their Supply Chain Resilience definitions over the years. This research recapitulates these heterogeneous definitions within the framework. After collecting a number of Supply Chain Resilience definitions from the extant literature, the methodological approach provides a taxonomy step for collecting and analyzing all the data gathered. The next step compares the data obtained with the plotting of a disruption profile, in order to contextualize the Resilience Triangle in the supply chain context. The tool and the results developed in this research lay the foundation for future Supply Chain Resilience modeling and measurement work.
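A small worked example of the resilience triangle idea: the loss of resilience is the area between full performance and the performance curve, from the moment of disruption until full recovery. The performance values below are illustrative.

```python
import numpy as np

time_days = np.array([0, 1, 2, 5, 10, 20, 30], dtype=float)
performance = np.array([100, 100, 40, 55, 75, 95, 100], dtype=float)  # % of normal

# Resilience loss = integral of (100 - Q(t)) dt over the disruption horizon,
# evaluated here with the trapezoidal rule.
gap = 100.0 - performance
loss = float(np.sum(0.5 * (gap[:-1] + gap[1:]) * np.diff(time_days)))
print(f"resilience loss ~ {loss:.1f} %-days")
```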

Keywords: Supply chain resilience, resilience definition, supply chain resilience triangle.

37 Automatic Detection of Defects in Ornamental Limestone Using Wavelets

Authors: Maria C. Proença, Marco Aniceto, Pedro N. Santos, José C. Freitas

Abstract:

A methodology based on wavelets is proposed for the automatic location and delimitation of defects in limestone plates. Natural defects include dark colored spots, crystal zones trapped in the stone, areas of abnormal contrast colors, cracks or fracture lines, and fossil patterns. Although some of these may or may not be considered defects according to the intended use of the plate, the goal is to pair each stone with a map of defects that can be overlaid on a computer display. These layers of defects constitute a database that will allow the preliminary selection of matching tiles of a particular variety, with specific dimensions, for a requirement of N square meters, to be done on a desktop computer rather than by a two-hour search in the storage park, with human operators manipulating stone plates as large as 3 m x 2 m and weighing about one ton. Accident risks and work times are reduced, with a consequent increase in productivity. The basis of the algorithm is a wavelet decomposition executed on two instances of the original image, to detect both hypotheses: dark and clear defects. The existence and/or size of these defects is the gauge used to classify the quality grade of the stone products. The tuning of parameters possible within the wavelet framework corresponds to different levels of accuracy in the drawing of the contours and in the selection of the defect size, which allows the map of defects to be used to cut a selected stone into tiles with minimum waste, according to the dimensions of the defects allowed.
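A minimal sketch of the wavelet step: one level of 2-D decomposition, thresholding of the detail-coefficient energy, and an upsampled mask that flags defect contours. The synthetic image, wavelet, and threshold are illustrative; the paper tunes these to trace both dark and clear defects.

```python
import numpy as np
import pywt

rng = np.random.default_rng(0)
plate = rng.normal(0.6, 0.02, size=(256, 256))   # fairly uniform stone surface
plate[100:110, 40:60] = 0.2                      # dark spot (simulated defect)
plate[200:204, 120:200] = 0.9                    # clear, fracture-like line

cA, (cH, cV, cD) = pywt.dwt2(plate, "haar")      # one level of 2-D decomposition
detail = np.sqrt(cH**2 + cV**2 + cD**2)          # local detail energy
mask_small = detail > detail.mean() + 4 * detail.std()
mask = np.repeat(np.repeat(mask_small, 2, axis=0), 2, axis=1)  # back to image size

print("flagged pixels:", int(mask.sum()), "of", mask.size)
```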

Keywords: Automatic detection, wavelets, defects, fracture lines.
