Search results for: Construction process
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 6319

3649 Hash Based Block Matching for Digital Evidence Image Files from Forensic Software Tools

Authors: M. Kaya, M. Eris

Abstract:

Internet use, intelligent communication tools, and social media have all become an integral part of our daily lives as a result of rapid developments in information technology. However, this widespread use also increases crime committed in the digital environment. Therefore, digital forensics, which deals with the various crimes committed in digital environments, has become an important research topic. It is within the scope of digital forensics to investigate digital evidence such as computers, cell phones, hard disks, DVDs, etc. and to report whether it contains any crime-related elements. Many software and hardware tools have been developed for use in the digital evidence acquisition process. Today, the most widely used digital evidence investigation tools are based on the principle of finding all the data in the digital evidence that match specified criteria and presenting them to the investigator (e.g. text files, files starting with the letter A, etc.). Digital forensics experts then carry out data analysis to figure out whether these data are related to a potential crime. Examination of a 1 TB hard disk may take hours or even days, depending on the expertise and experience of the examiner. Moreover, because the outcome depends on the examiner's experience, results may vary between cases and relevant items may be overlooked. In this study, a hash-based matching and digital evidence evaluation method is proposed, which aims to automatically classify evidence containing criminal elements, thereby shortening the digital evidence examination process and preventing human error.
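
As a minimal sketch of the hash-list matching idea described above (the block size, file names, and flagging rule below are illustrative assumptions, not the authors' implementation), evidence can be read in fixed-size blocks whose hashes are checked against a precomputed list of hashes of known criminal content:

import hashlib

BLOCK_SIZE = 4096  # assumed block size; the paper does not fix one

def block_hashes(path, block_size=BLOCK_SIZE):
    """Yield SHA-256 digests of fixed-size blocks read from an evidence image file."""
    with open(path, "rb") as f:
        while True:
            block = f.read(block_size)
            if not block:
                break
            yield hashlib.sha256(block).hexdigest()

def classify_evidence(image_path, known_hashes, threshold=1):
    """Flag the evidence file if at least `threshold` blocks match the known-hash list."""
    matches = sum(1 for h in block_hashes(image_path) if h in known_hashes)
    return {"matches": matches, "suspicious": matches >= threshold}

# Hypothetical usage: hash list built from previously identified criminal material.
# known = set(line.strip() for line in open("known_block_hashes.txt"))
# print(classify_evidence("evidence.dd", known))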

Keywords: Block matching, digital evidence, hash list.

3648 Implementation of a Paraconsistent-Fuzzy Digital PID Controller in a Level Control Process

Authors: H. M. Côrtes, J. I. Da Silva Filho, M. F. Blos, B. S. Zanon

Abstract:

In modern society, rising quality requirements in industrial production demand new control and machine-automation techniques. In this context, this work presents the implementation of a Paraconsistent-Fuzzy Digital PID controller. The controller is based on the treatment of inconsistencies in both Paraconsistent Logic and Fuzzy Logic. Paraconsistent analysis is performed on the signals applied to the system inputs using concepts from the Paraconsistent Annotated Logic with annotation of two values (PAL2v). The signals resulting from the paraconsistent analysis are two values defined as Dc (Degree of Certainty) and Dct (Degree of Contradiction), which are treated according to Fuzzy Logic theory, and the resulting output of the logic actions is a single value, called the crisp value, which is used to control the dynamic system. The application of the proposed model is demonstrated through an example. Initially, the Paraconsistent-Fuzzy Digital PID controller was built and tested in an isolated MATLAB environment and then compared to the equivalent Digital PID function of this software for a standard step excitation. After this step, a level control plant was modeled to execute the controller function on a physical model, making the tests closer to actual operating conditions. For this, the control parameters (proportional, integral and derivative) were determined for the configuration of the conventional Digital PID controller and of the Paraconsistent-Fuzzy Digital PID, and the control loops were assembled in MATLAB with the respective transfer function of the plant. Finally, the results of the comparison of the level control process between the Paraconsistent-Fuzzy Digital PID controller and the conventional Digital PID controller are presented.
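
For reference, a minimal sketch of the conventional discrete (digital) PID law that serves as the comparison baseline above; the paraconsistent-fuzzy treatment of Dc and Dct is not reproduced, and the gains and plant model below are assumed for illustration only:

class DigitalPID:
    """Conventional discrete PID controller (positional form), shown only as the
    baseline against which the Paraconsistent-Fuzzy controller is compared."""
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, setpoint, measurement):
        error = setpoint - measurement
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

# Illustrative first-order level-plant simulation (gains and dynamics are assumptions).
pid = DigitalPID(kp=2.0, ki=0.5, kd=0.1, dt=0.1)
level = 0.0
for _ in range(200):
    u = pid.update(setpoint=1.0, measurement=level)
    level += 0.1 * (0.5 * u - 0.2 * level)  # simple assumed tank dynamics
print(round(level, 3))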

Keywords: Fuzzy logic, paraconsistent annotated logic, level control, digital PID.

3647 Removal of Malachite Green from Aqueous Solution Using Hydrilla verticillata: Optimization, Equilibrium and Kinetic Studies

Authors: R. Rajeshkannan, M. Rajasimman, N. Rajamohan

Abstract:

In this study, the sorption of Malachite green (MG) on Hydrilla verticillata biomass, a submerged aquatic plant, was investigated in a batch system. The effects of operating parameters such as temperature, adsorbent dosage, contact time, adsorbent size, and agitation speed on the sorption of Malachite green were analyzed using response surface methodology (RSM). According to the ANOVA results, the proposed quadratic model for the central composite design (CCD) fitted the experimental data well enough to be used to navigate the design space. The optimum sorption conditions were determined as temperature 43.5 °C, adsorbent dosage 0.26 g, contact time 200 min, adsorbent size 0.205 mm (65 mesh), and agitation speed 230 rpm. The Langmuir and Freundlich isotherm models were applied to the equilibrium data. The maximum monolayer coverage capacity of Hydrilla verticillata biomass for MG was found to be 91.97 mg/g at an initial pH of 8.0, indicating that this is the optimum initial pH for sorption. The external and intra-particle diffusion models were also applied to the sorption data of Hydrilla verticillata biomass with MG, and it was found that both external and intra-particle diffusion contribute to the actual sorption process. The pseudo-second-order kinetic model described the MG sorption process with a good fit.
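
For reference, the standard forms of the models named above, written in conventional notation (q_e, C_e, q_m, K_L, K_F, n, k_2, q_t); the fitted parameter values are the authors' and are not repeated here:

% Langmuir isotherm, Freundlich isotherm, and pseudo-second-order kinetics
% (standard textbook forms; notation is conventional, not taken from the paper):
\begin{align}
  q_e &= \frac{q_m K_L C_e}{1 + K_L C_e} && \text{(Langmuir)}\\
  q_e &= K_F C_e^{1/n} && \text{(Freundlich)}\\
  \frac{t}{q_t} &= \frac{1}{k_2 q_e^{2}} + \frac{t}{q_e} && \text{(pseudo-second-order, linearised)}
\end{align}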

Keywords: Response surface methodology, Hydrilla verticillata, malachite green, adsorption, central composite design

3646 An Adaptive Memetic Algorithm With Dynamic Population Management for Designing HIV Multidrug Therapies

Authors: Hassan Zarei, Ali Vahidian Kamyad, Sohrab Effati

Abstract:

In this paper, a mathematical model of human immunodeficiency virus (HIV) is utilized and an optimization problem is proposed, with the final goal of implementing an optimal 900-day structured treatment interruption (STI) protocol. Two types of drugs commonly used in highly active antiretroviral therapy (HAART), reverse transcriptase inhibitors (RTI) and protease inhibitors (PI), are considered. In order to solve this optimization problem, an adaptive memetic algorithm with population management (AMAPM) is proposed. The AMAPM uses a distance measure to control the diversity of the population in genotype space and thus prevent stagnation and premature convergence. Moreover, the AMAPM uses a diversity parameter in phenotype space to dynamically set the population size and the number of crossovers during the search process. Three crossover operators simultaneously diversify the population, and the progress of each crossover operator is used to set the number of times it is applied per generation. In order to escape local optima and introduce new search directions toward the global optimum, two local searchers assist the evolutionary process. In contrast to traditional memetic algorithms, the activation of these local searchers is not random and depends on the diversity parameters in both genotype space and phenotype space. The capability of the AMAPM in finding optimal solutions is demonstrated by comparison with three popular metaheuristics.
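
A minimal sketch of the diversity-driven population management idea described above, assuming STI schedules encoded as 900-day drug on/off vectors and a Hamming distance in genotype space; the sizing rule and thresholds are illustrative assumptions, not the AMAPM as specified by the authors:

import random

def hamming(a, b):
    """Genotype distance between two STI schedules encoded as 0/1 drug-on/off days."""
    return sum(x != y for x, y in zip(a, b))

def mean_pairwise_diversity(pop):
    """Average pairwise Hamming distance, used here as the genotype-space diversity measure."""
    n = len(pop)
    total = sum(hamming(pop[i], pop[j]) for i in range(n) for j in range(i + 1, n))
    return total / (n * (n - 1) / 2)

def adapt_population_size(current_size, diversity, low=100.0, high=400.0,
                          min_size=20, max_size=100):
    """Grow the population when diversity collapses, shrink it when diversity is ample
    (illustrative rule; the paper's AMAPM uses its own phenotype-space parameter)."""
    if diversity < low:
        return min(max_size, current_size + 10)
    if diversity > high:
        return max(min_size, current_size - 10)
    return current_size

pop = [[random.randint(0, 1) for _ in range(900)] for _ in range(30)]
print(adapt_population_size(len(pop), mean_pairwise_diversity(pop)))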

Keywords: HIV therapy design, memetic algorithms, adaptive algorithms, nonlinear integer programming.

3645 Index t-SNE: Tracking Dynamics of High-Dimensional Datasets with Coherent Embeddings

Authors: G. Candel, D. Naccache

Abstract:

t-SNE is an embedding method that the data science community has widely used. It helps with two main tasks: displaying results by coloring items according to their class or feature value, and, for forensic purposes, giving a first overview of the dataset distribution. Two interesting characteristics of t-SNE are its structure preservation property and its answer to the crowding problem, where all neighbors in high-dimensional space cannot be represented correctly in low-dimensional space. t-SNE preserves the local neighborhood, and similar items are nicely spaced by adjusting to the local density. These two characteristics produce a meaningful representation, where the cluster area is proportional to its size in number of items, and relationships between clusters are materialized by closeness in the embedding. The algorithm is non-parametric: the transformation from high- to low-dimensional space is described but not learned, so two initializations of the algorithm lead to two different embeddings. In a forensic approach, analysts would like to compare two or more datasets using their embeddings. A naive approach would be to embed all datasets together; however, this is costly, as the complexity of t-SNE is quadratic, and would be infeasible for too many datasets. Another approach would be to learn a parametric model over an embedding built with a subset of the data. While this approach is highly scalable, points could be mapped to exactly the same position, making them indistinguishable, and this type of model would be unable to adapt to new outliers or to concept drift. This paper presents a methodology to reuse an embedding to create a new one in which cluster positions are preserved. The optimization process minimizes two costs, one relative to the embedding shape and the second relative to the match with the support embedding. The embedding-with-support process can be repeated more than once, using the newly obtained embedding as the next support. The successive embeddings can be used to study the impact of one variable on the dataset distribution or to monitor changes over time. The method has the same complexity as t-SNE per embedding, and memory requirements are only doubled. For a dataset of n elements sorted and split into k subsets, the total embedding complexity is reduced from O(n²) to O(n²/k), and the memory requirement from n² to 2(n/k)², which enables computation on recent laptops. The method showed promising results on a real-world dataset, allowing the birth, evolution, and death of clusters to be observed. The proposed approach facilitates identifying significant trends and changes, which supports monitoring the dynamics of high-dimensional datasets.
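
The complexity and memory figures quoted above follow directly from splitting the n points into k subsets and embedding each subset against the previous (support) embedding:

% Per subset the quadratic cost applies only to n/k points, so over k subsets:
\[
  k \cdot O\!\left(\left(\tfrac{n}{k}\right)^{2}\right)
  = O\!\left(\tfrac{n^{2}}{k}\right)
  \quad\text{versus}\quad O(n^{2}) \text{ for embedding all datasets jointly,}
\]
\[
  \text{memory: } 2\left(\tfrac{n}{k}\right)^{2} \text{ (current subset plus its support) instead of } n^{2}.
\]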

Keywords: Concept drift, data visualization, dimension reduction, embedding, monitoring, reusability, t-SNE, unsupervised learning.

3644 The Influence of the Geogrid Layers on the Bearing Capacity of Layered Soils

Authors: S. A. Naeini, H. R. Rahmani, M. Hossein Zade

Abstract:

Many classical bearing capacity theories assume that the natural soil layers are homogeneous when determining the bearing capacity of the soil. In many practical projects, however, multi-layer soils are encountered. Geosynthetics have been extensively used as reinforcement materials in the construction of various structures. In this paper, numerical analysis of the Plate Load Test (PLT) using ABAQUS software was carried out for double-layered soils with different thicknesses of sandy and gravelly layers reinforced with geogrid. The PLT is one of the common field methods for determining parameters such as soil bearing capacity, evaluating compressibility, and determining the subgrade reaction modulus. In effect, the influence of the geogrid layers on the bearing capacity of the layered soils is investigated, and the most appropriate spacing and number of reinforcement layers are determined. Results show that using three layers of geogrid with a spacing of 0.3 times the width of the loading plate is the most effective configuration for the bearing capacity of double-layer (sand and gravel) soils. Also, the significant increase in bearing capacity between unreinforced soil and soil reinforced with three layers of geogrid occurs when the upper (gravel) layer thickness is equal to the loading plate width.

Keywords: Bearing capacity, reinforcement, geogrid, plate load test, layered soils.

3643 Studying the Value-Added Chain for the Fish Distribution Process at Quang Binh Fishing Port in Vietnam

Authors: Van Chung Nguyen

Abstract:

The purpose of this research is to study the current status of the value chain for fish distribution at Quang Binh Fishing Port, based on 360 research samples in which the research subjects are fishermen, traders, retailers, and businesses. The research applies the value chain theoretical framework of Kaplinsky and Morris to quantify and describe the market channels and actors participating in the value chain and to analyze the value-adding process of these actors along each market channel. The analysis results show that fishermen catch fish directly with high economic efficiency, but processing enterprises and, especially, retailers are the agents that obtain the higher added value. The role of processing enterprises is not very clear due to outdated processing technology; in contrast, retailers capture the highest added value. This shows that the added value of the fish supply chain at Quang Binh fishing port is still limited, leading to low output quality. Therefore, the selling price of fish on the market remains high relative to the abundant fish resources, leading to low consumption and limited exports owing to the quality of the processing enterprises. This reduces demand and fishing capacity, and productivity is lower than its potential. To improve the fish value chain at fishing ports, it is necessary to focus on improving product quality, strengthening linkages between actors, building brands and consumer markets, and, at the same time, improving the capacity of export processing enterprises.

Keywords: Quang Binh fishing port, value chain, fish market, distribution channel.

3642 Developing a Model for the Relation between Heritage and Place Identity

Authors: A. Arjomand Kermani, N. Charbgoo, M. Alalhesabi

Abstract:

In a situation of greatly accelerating change and the need for new development in cities on the one hand, and conservation and regeneration approaches on the other, place identity and its relation to the heritage context have taken on new importance. This relation is generally a mutual and complex one. The significant point is that the process of identifying something as heritage, rather than merely a historical phenomenon, brings that which may be inherited into the realm of identity. In planning and urban design, as well as in the environmental psychology and phenomenology domains, place identity and its attributes and components have been studied and discussed. However, the relation between the physical environment (especially heritage) and identity has been neglected in the planning literature. This article aims to review the knowledge in this field and develop a model of the influence and relation of these two major concepts (heritage and identity). To build this conceptual model, we draw on the available literature on place identity and the heritage environment in environmental psychology as well as planning, using a descriptive-analytical methodology, to understand how they can inform planning strategies and governance policies. A cross-disciplinary analysis is essential to understand the nature of place identity and heritage context and to develop a more holistic model of their relationship that can be employed in the planning process and decision making. Moreover, this broader and more holistic perspective would enable both social scientists and planners to learn from one another's expertise for a fuller understanding of community dynamics. The result indicates that a combination of these perspectives can provide a richer understanding, not only of how planning impacts our experience of place, but also of how place identity can impact community planning and development.

Keywords: Heritage, inter-disciplinary study, place identity, planning.

3641 Identification of Micromechanical Fracture Model for Predicting Fracture Performance of Steel Wires for Civil Engineering Applications

Authors: Kazeem K. Adewole, Julia M. Race, Steve J. Bull

Abstract:

The fracture performance of steel wires for civil engineering applications remains a major concern in civil engineering construction and in the maintenance of wire-reinforced structures. The need to employ approaches that simulate the micromechanical material processes which characterize fracture in civil structures has been emphasized recently in the literature. However, choosing from the numerous micromechanics-based fracture models and identifying their applicability and reliability remains an issue that still needs to be addressed in greater depth. Laboratory tensile testing and finite element tensile testing simulations with the shear, ductile and Gurson-Tvergaard-Needleman micromechanics-based models conducted in this work reveal that the shear fracture model is an appropriate fracture model to predict the fracture performance of steel wires used for civil engineering applications. The need to consider the capability of a micromechanics-based fracture model to predict the “cup and cone” fracture exhibited by the wire when choosing the appropriate fracture model is demonstrated.
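
For reference, the commonly cited form of the Gurson-Tvergaard-Needleman yield surface named above, in standard notation; the parameter values used in the paper are not reproduced here:

% GTN yield surface with equivalent stress sigma_eq, mean stress sigma_m, matrix flow
% stress sigma_y, effective void volume fraction f*, and fitting parameters q1, q2, q3:
\[
  \Phi = \left(\frac{\sigma_{eq}}{\sigma_y}\right)^{2}
       + 2 q_1 f^{*} \cosh\!\left(\frac{3 q_2 \sigma_m}{2 \sigma_y}\right)
       - \left(1 + q_3 {f^{*}}^{2}\right) = 0
\]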

Keywords: Fracture performance, FE simulation, Shear fracture model, Ductile fracture model, Gurson-Tvergaard-Needleman fracture model, Wires.

3640 A New High Speed Neural Model for Fast Character Recognition Using Cross Correlation and Matrix Decomposition

Authors: Hazem M. El-Bakry

Abstract:

Neural processors have shown good results for detecting a certain character in a given input matrix. In this paper, a new idea to speed up the operation of neural processors for character detection is presented. Such processors are designed based on cross correlation in the frequency domain between the input matrix and the weights of the neural networks. This approach is developed to reduce the computation steps required by these faster neural networks for the searching process. The principle of the divide and conquer strategy is applied through image decomposition. Each image is divided into small sub-images, and then each one is tested separately by using a single faster neural processor. Furthermore, faster character detection is obtained by using parallel processing techniques to test the resulting sub-images at the same time using the same number of faster neural networks. In contrast to using only faster neural processors, the speed-up ratio increases with the size of the input image when using faster neural processors and image decomposition. Moreover, the problem of local sub-image normalization in the frequency domain is solved. The effect of image normalization on the speed-up ratio of character detection is discussed. Simulation results show that local sub-image normalization through weight normalization is faster than sub-image normalization in the spatial domain. The overall speed-up ratio of the detection process is increased, as the normalization of weights is done offline.
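
A minimal sketch of frequency-domain cross correlation combined with image decomposition, as described above; the sub-image split, detection threshold, and sequential (rather than parallel) testing below are illustrative assumptions:

import numpy as np

def freq_domain_cross_correlation(image, kernel):
    """Cross-correlate `image` with `kernel` via the frequency domain:
    multiply the image spectrum by the conjugate spectrum of the zero-padded kernel."""
    padded = np.zeros_like(image, dtype=float)
    kh, kw = kernel.shape
    padded[:kh, :kw] = kernel
    return np.real(np.fft.ifft2(np.fft.fft2(image) * np.conj(np.fft.fft2(padded))))

def detect_in_subimages(image, kernel, splits=2, threshold=0.9):
    """Divide-and-conquer step: split the image into sub-images and test each one
    separately (sequentially here; the paper proposes testing them in parallel)."""
    hits = []
    for block_rows in np.array_split(image, splits, axis=0):
        for block in np.array_split(block_rows, splits, axis=1):
            score = freq_domain_cross_correlation(block, kernel).max()
            hits.append(bool(score >= threshold))
    return hits

# Illustrative call with random data standing in for a character template.
img = np.random.rand(64, 64)
tpl = np.random.rand(8, 8)
print(detect_in_subimages(img, tpl))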

Keywords: Fast Character Detection, Neural Processors, Cross Correlation, Image Normalization, Parallel Processing.

3639 Estimation of Asphalt Pavement Surfaces Using Image Analysis Technique

Authors: Mohammad A. Khasawneh

Abstract:

Asphalt concrete pavements gradually lose their skid resistance, causing safety problems, especially under wet conditions and at high driving speeds. In order to reproduce the actual field polishing and wearing process of asphalt pavement surfaces in a laboratory setting, several laboratory-scale accelerated polishing devices were developed by different agencies. To mimic the actual process, friction and texture measuring devices are needed to quantify surface deterioration at different polishing intervals that reflect different stages of the pavement life. The test could still be considered lengthy and, to some extent, labor-intensive. Therefore, there is a need to come up with another method that can assist in investigating the bituminous pavement surface characteristics in a practical and time-efficient test procedure.

The purpose of this paper is to utilize a well-developed image analysis technique to characterize asphalt pavement surfaces without the need to use conventional friction and texture measuring devices in an attempt to shorten and simplify the polishing procedure in the lab.

Promising findings showed the possibility of using image analysis in lieu of the labor-intensive and inherently variable friction and texture measurements. It was found that the exposed aggregate surface area of asphalt specimens made from limestone and gravel aggregates produced solid evidence of the validity of this method in describing asphalt pavement surfaces. Image analysis results correlated well with the British Pendulum Number (BPN), Polish Value (PV) and Mean Texture Depth (MTD) values.
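
A minimal sketch of the kind of measurement implied above: thresholding a grayscale surface image and reporting the exposed-aggregate area fraction. The global threshold rule is an assumption and does not reproduce the authors' calibrated procedure:

import numpy as np

def exposed_aggregate_fraction(gray_image, threshold=None):
    """Estimate the exposed aggregate surface area fraction from a grayscale image.
    Pixels brighter than the threshold are counted as exposed aggregate
    (simple global threshold; the paper's calibrated procedure may differ)."""
    img = np.asarray(gray_image, dtype=float)
    if threshold is None:
        threshold = img.mean()  # assumed default: mean intensity
    aggregate_pixels = (img > threshold).sum()
    return aggregate_pixels / img.size

# Illustrative call with synthetic data standing in for a specimen photograph.
surface = np.random.rand(480, 640)
print(f"exposed aggregate fraction: {exposed_aggregate_fraction(surface):.2f}")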

Keywords: Friction, Image Analysis, Polishing, Statistical Analysis, Texture.

3638 Packet Forwarding with Multiprotocol Label Switching

Authors: R. N. Pise, S. A. Kulkarni, R. V. Pawar

Abstract:

MultiProtocol Label Switching (MPLS) is an emerging technology that aims to address many of the existing issues associated with packet forwarding in today's internetworking environment. It provides a method of forwarding packets at a high rate of speed by combining the speed and performance of Layer 2 with the scalability and IP intelligence of Layer 3. In a traditional IP (Internet Protocol) routing network, a router analyzes the destination IP address contained in the packet header. The router independently determines the next hop for the packet using the destination IP address and the interior gateway protocol. This process is repeated at each hop to deliver the packet to its final destination. In contrast, in the MPLS forwarding paradigm, routers on the edge of the network (label edge routers) attach labels to packets based on the Forwarding Equivalence Class (FEC). Packets are then forwarded through the MPLS domain, based on their associated FECs, by swapping the labels at routers in the core of the network called label switch routers. Simply swapping the label instead of referencing the IP header of the packet in the routing table at each hop provides a more efficient manner of forwarding packets, which in turn allows traffic to be forwarded at tremendous speeds and gives granular control over the path taken by a packet. This paper deals with the MPLS forwarding mechanism, the implementation of the MPLS datapath, and test results showing the performance comparison of MPLS and IP routing. The discussion focuses primarily on MPLS IP packet networks – by far the most common application of MPLS today.
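
A minimal sketch of the core label-swap step described above, using an LFIB-like table keyed by the incoming label; the labels, interfaces, and FEC bindings below are invented for illustration:

# Next Hop Label Forwarding Entry (NHLFE)-style table keyed by incoming label.
# Labels, interfaces and the FEC comments are invented for illustration only.
LFIB = {
    100: {"out_label": 200, "out_interface": "eth1"},   # FEC: 10.0.0.0/8
    101: {"out_label": 300, "out_interface": "eth2"},   # FEC: 192.168.0.0/16
}

def forward_mpls(packet):
    """Core LSR behaviour: look up the incoming label, swap it, and forward.
    No IP header inspection or routing-table lookup is needed at this hop."""
    entry = LFIB.get(packet["label"])
    if entry is None:
        raise ValueError("unknown label; packet would be dropped")
    packet["label"] = entry["out_label"]
    packet["out_interface"] = entry["out_interface"]
    return packet

print(forward_mpls({"label": 100, "payload": b"IP packet bytes"}))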

Keywords: Forwarding equivalence class, incoming label map, label, next hop label forwarding entry.

3637 Evaluation of the Quality of Education Offered to Students with Special Needs in Public Schools in the City of Bauru, Brazil

Authors: V. L. M. F. Capellini, A. P. P. M. Maturana, N. C. M. Brondino, M. B. C. L. B. M. Peixoto, A. J. Broughton

Abstract:

A paradigm shift is a process. The process of implementing inclusive education, a system constructed to support all learners, requires planning, identification, experimentation, and evaluation. In this vein, the purpose of the present study was to evaluate the capacity of one Brazilian state school system to provide special education students with a quality inclusive education. This study originated at the behest of concerned families of students with special needs who filed complaints with the Municipality of Bauru, São Paulo. These families claimed that 1) children with learning differences and educational needs had not been identified for services, and 2) those who had been identified had not received sufficient specialized educational assistance (SEA) in schools across the City of Bauru. Hence, the Office of Civil Rights for the state of São Paulo (Ministério Público de São Paulo) summoned the local higher education institution, UNESP, to design a research study to investigate these allegations. In this exploratory study, descriptive data were gathered from all elementary and middle schools, including 58 state schools and 17 city schools, for a total of 75 schools overall. Data collection consisted of each school's annual strategic action plan, along with surveys and interviews with all school stakeholders to determine their perceptions of the inclusive education available to students with Special Education Needs (SEN). The data were collected as one of four stages in a larger study, which also included field observations of a focal student's experience and a continuing education course for all teachers and administrators in both state and city schools. For the purposes of this study, the researchers were interested in understanding the perceptions of school staff, parents, and students across all schools. Therefore, documents and surveys from the 75 schools were analyzed for adherence to federal legislation guaranteeing students with SEN the right to special education assistance within the regular school setting. Results show that while some schools recognized the legal rights of SEN students to receive special education, plans to actually deliver services were absent. In conclusion, the results of this study revealed that both school staff and families have insufficient planning and accessibility resources, and that the schools have inadequate infrastructure for full-time support of SEN students, i.e., structures and systems to support the identification of SEN and the delivery of services within the schools of Bauru, SP. Having identified the areas of need, the city is now prepared to take the next steps in the process toward preparing all schools to be inclusive.

Keywords: Inclusive education, special education, special needs.

3636 Ghost Frequency Noise Reduction through Displacement Deviation Analysis

Authors: Paua Ketan, Bhagate Rajkumar, Adiga Ganesh, M. Kiran

Abstract:

Low gear noise is an important sound quality feature in modern passenger cars. Annoying gear noise from the gearbox is influenced by the gear design, the gearbox shaft layout, manufacturing deviations in the components, assembly errors and the mounting arrangement of the complete gearbox. Geometrical deviations in the form of profile and lead errors are often present on the flanks of the inspected gears. Ghost frequencies of a gear are very challenging to identify in the standard gear measurement and analysis process due to the small wavelengths involved. In this paper, gear whine noise occurring at non-integral multiples of the gear mesh frequency of a passenger car gearbox is investigated, and the root cause is identified using the displacement deviation analysis (DDA) method. The DDA method is applied to identify ghost frequency excitations on the flanks of gears arising from generation grinding. The frequency identified through DDA correlated with the frequency of vibration and noise measured on the end-of-line machine as well as at vehicle level. With the application of the DDA method along with standard lead profile measurement, gears with ghost frequency geometry deviations were identified on the production line to eliminate defective parts and thereby eliminate ghost frequency noise from the vehicle. Further, displacement deviation analysis can be used in conjunction with manufacturing process simulation to arrive at suitable countermeasures for arresting the ghost frequency.

Keywords: Displacement deviation analysis, gear whine, ghost frequency, sound quality.

3635 Evaluating Efficiency of Nina Distribution Company Using Window Data Envelopment Analysis and Malmquist Index

Authors: Hossein Taherian Far, Ali Bazaee

Abstract:

Achieving continuous, sustained economic growth and the economic development that follows is a target for all countries that seek it. In this regard, the distribution industry plays an important role in the growth and development of any nation, so estimating the efficiency and productivity of this industry and identifying the factors influencing it is very necessary. The objective of the present study is to measure the efficiency and productivity of seven branches of Nina Distribution Company using window data envelopment analysis and the Malmquist productivity index from spring 2013 to summer 2015. In this study, fixed assets, payroll personnel, operating costs and the duration of collection of receivables were selected as inputs, and net sales, gross profit and the percentage of coverage to customers were selected as outputs. Window data envelopment analysis was then performed, and productivity change was measured using the Malmquist index. The results indicate that the average technical efficiency in the window Data Envelopment Analysis (DEA) model follows a fluctuating but sustainable trend, whereas the average management efficiency in the window DEA model shows negative growth (a decline) of about 13%. The mean scale efficiency in all windows shows growth of 18% compared with the first window, except in the second window, where it is 8%. On the other hand, the mean change in total factor productivity in all branches of the industry shows an average negative growth (decrease) of 12%, which is the result of a negative change in technology.
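
For reference, the standard (output-oriented) Malmquist productivity index between periods t and t+1, in conventional distance-function notation; it is not reproduced from the company data:

% Malmquist productivity index written with distance functions D^t and D^{t+1}
% evaluated at the input-output pairs (x, y) of periods t and t+1:
\[
  M\left(x^{t+1}, y^{t+1}, x^{t}, y^{t}\right)
  = \left[
      \frac{D^{t}\!\left(x^{t+1}, y^{t+1}\right)}{D^{t}\!\left(x^{t}, y^{t}\right)}
      \cdot
      \frac{D^{t+1}\!\left(x^{t+1}, y^{t+1}\right)}{D^{t+1}\!\left(x^{t}, y^{t}\right)}
    \right]^{1/2}
\]
% Values above 1 indicate productivity growth; values below 1 indicate decline.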

Keywords: Nina Distribution Company branches, window data envelopment analysis, Malmquist productivity index.

3634 Waste-Based Surface Modification to Enhance Corrosion Resistance of Aluminium Bronze Alloy

Authors: Wilson Handoko, Farshid Pahlevani, Isha Singla, Himanish Kumar, Veena Sahajwalla

Abstract:

Aluminium bronze alloys are well known for their superior abrasion resistance, tensile strength and non-magnetic properties, due to the co-presence of iron (Fe) and aluminium (Al) as alloying elements, and have been commonly used in many industrial applications. However, continuous exposure to the marine environment increases the risk of failure of Al bronze alloy parts. Although a higher level of corrosion resistance can be achieved by modifying the elemental composition, this comes at a price through a more complex manufacturing process and increases the risk of reducing the ductility of the Al bronze alloy. In this research, the use of ironmaking slag and waste plastic as the input source for surface modification of an Al bronze alloy was implemented. Microstructural analysis was conducted using polarised light microscopy and scanning electron microscopy (SEM) equipped with energy dispersive spectroscopy (EDS). An electrochemical corrosion test was carried out using the Tafel polarisation method, and the protection efficiency relative to the base material was calculated. Results indicate that the uniform modified surface, which is the result of a selective diffusion process, enhances the corrosion resistance properties by up to 12.67%. This approach opens a new opportunity for various industrial utilisations at commercial scale by minimising the dependency on natural resources, transforming waste sources into protective coatings in environmentally friendly and cost-effective ways.
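
A small worked sketch of how protection efficiency is commonly computed from Tafel-extrapolated corrosion current densities; the current values below are invented to illustrate the formula, and the 12.67% figure above comes from the authors' measurements:

def protection_efficiency(i_corr_base, i_corr_modified):
    """Protection efficiency (%) from Tafel-extrapolated corrosion current densities:
    PE = (i_base - i_modified) / i_base * 100. A commonly used definition; the paper's
    exact evaluation procedure is not reproduced here."""
    return (i_corr_base - i_corr_modified) / i_corr_base * 100.0

# Invented example values (A/cm^2), for illustration only.
print(f"PE = {protection_efficiency(8.0e-6, 7.0e-6):.2f} %")  # -> PE = 12.50 %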

Keywords: Aluminium bronze, waste-based surface modification, Tafel polarisation, corrosion resistance.

3633 Early Registration: Criterion to Improve Communication Inter-Agents in Mobile-IP Protocol

Authors: Hossam el-ddin Mostafa, Pavel Čičak

Abstract:

In IETF RFC 2002, Mobile-IP was developed to enable laptops to maintain Internet connectivity while moving between subnets. However, packet loss arises when switching subnets because network connectivity is lost while the mobile host registers with the foreign agent, and this incurs large end-to-end packet delays. The criterion for initiating a simple and fast full-duplex connection between the home agent and the foreign agent, in order to reduce the roaming duration, is a very important issue and is the focus of the work in this paper. State-transition Petri nets modeling the scenario-based CIA (communication inter-agents) procedure, as an extension to the basic Mobile-IP registration process, were designed and manipulated to describe the system in discrete events. A heuristic configuration file for the registration parameters was created during a practical setup session on the Cisco Router-1760 platform using IOS 12.3(15)T and TFTP server software. Finally, stand-alone performance simulations in Simulink (MATLAB), within each subnet and also between subnets, are illustrated, reporting better end-to-end packet delays. The results verified the effectiveness of our Mathcad analytical manipulation and experimental implementation, showing lower end-to-end packet delay values for Mobile-IP using the CIA procedure-based early registration. Furthermore, packet flow between subnets improved, reducing losses between subnets.

Keywords: Cisco configuration, handoff, Mobile-IP, packet delay, Petri nets, registration process, Simulink

3632 CPT Pore Water Pressure Correlations with PDA to Identify Pile Drivability Problem

Authors: Fauzi Jarushi, Paul Cosentino, Edward Kalajian, Hadeel Dekhn

Abstract:

At certain depths during large-diameter displacement pile driving, rebound well over 0.25 inches was experienced, followed by a small permanent set during each hammer blow. High pile rebound (HPR) soils may stop the pile driving and result in a limited pile capacity. In some cases, rebound leads to pile damage, delaying the construction project and requiring redesign of the foundations. HPR was evaluated at seven Florida sites during the driving of square precast, prestressed concrete piles into saturated, fine silty to clayey sands and sandy clays. Pile Driving Analyzer (PDA) deflection-versus-time data recorded during installation were used to develop correlations between cone penetrometer (CPT) pore-water pressures, pile displacements and rebound. At five sites where piles experienced excessive HPR with minimal set, the pore pressure yielded very high positive values of greater than 20 tsf. However, at the site where the pile rebounded and then achieved an acceptable permanent set, the measured pore pressure ranged between 5 and 20 tsf. The pore pressure exhibited values of less than 5 tsf at the site where no rebound was noticed. In summary, direct correlations between CPTu pore pressure and rebound were produced, allowing identification of soils that produce HPR.

Keywords: CPTu, pore water pressure, pile rebound.

3631 Safety Culture Implementation Based on Occupational Health and Safety Assessment

Authors: Nyambayar Davaadorj, Ichiro Koshijima

Abstract:

Safety, or the state of being safe, can be described as a condition of being not dangerous or not harmful. It is necessary for an individual to avoid dangerous situations every day. Also, an organization is subject to legal requirements for the health and safety of persons inside and around the immediate workplace, or who are exposed to the workplace activities. Although it might be difficult to maintain a situation where complete safety is ensured, efforts must nonetheless be made to consider ways of removing any potential danger within an organization. In order to ensure a safe working environment, the capability of responding (i.e., resilience) to signals (i.e., information concerning events that could pose future problems and that must be taken into account) that occur in and around corporations is necessary. The ability to evaluate this essential point is thus one way in which safety and security can be managed. This study focuses on OHSAS 18001, an internationally applied standard for the construction and operation of occupational health and safety management systems, by using the IDEF0 function modeling method and the Resilience Matrix originally developed by Bracco. Further, this study discusses a method for evaluating the manner in which the Occupational Health and Safety Assessment Series (OHSAS) functions systematically within corporations. Based on the findings, this study clarifies the potential structural objections for corporations when implementing and operating the OHSAS standard.

Keywords: OHSAS18001, IDEF0, safety culture, resilience engineering.

3630 Gender Differences in Negotiation: Considering the Usual Driving Forces?

Authors: Claude Alavoine, Ferkan Kaplanseren

Abstract:

Negotiation is a specific form of interaction based on communication, into which the parties enter deliberately, each with clear but different interests or goals and a mutual dependency towards a decision due to be taken at the end of the confrontation. Consequently, negotiation is a complex activity involving many different disciplines, from the strategic aspects and the decision-making process to the evaluation of alternatives or outcomes and the exchange of information. While gender differences can be considered one of the most researched topics within negotiation studies, empirical work and theory present much conflicting evidence and many conflicting results about the role of gender in the process or the outcome. Furthermore, little interest has been shown in gender differences in the definition of what negotiation is, its essence or its fundamental elements. Yet, as differences exist in practice, it might be essential to study whether the starting point of these discrepancies does not come from different considerations about what negotiation is and what will encourage the participants in their strategic decisions. Some recent and promising experiments made with diverse groups show that male and female participants in a common, shared situation barely consider in the same way the concepts of power, trust or stakes, which are largely considered the usual driving forces of any negotiation. Furthermore, results from Human Resource self-assessment tests display and confirm considerable differences between individuals regarding essential behavioral dimensions such as the capacity to improvise and to achieve, the aptitude to conciliate or to compete, and the orientation towards power and group domination, which are also part of negotiation skills. Our intention in this paper is to confront these dimensions with negotiation's usual driving forces in order to build up new paths for further research.

Keywords: Gender, negotiation, personality, power, stakes, trust.

3629 Automated Textile Defect Recognition System Using Computer Vision and Artificial Neural Networks

Authors: Atiqul Islam, Shamim Akhter, Tumnun E. Mursalin

Abstract:

Least Developed Countries (LDCs) like Bangladesh, where 25% of revenue is earned from textile exports, require the production of less defective textiles in order to minimize production cost and time. The inspection processes in these industries are mostly manual and time-consuming. Reducing errors in identifying fabric defects requires a more automated and accurate inspection process. Considering this gap, this research implements a Textile Defect Recognizer which uses computer vision methodology in combination with multi-layer neural networks to identify four classes of textile defects. The recognizer, suitable for LDCs, identifies fabric defects at an economical cost and provides a less error-prone inspection system in real time. In order to generate the input set for the neural network, the recognizer first captures digital fabric images with an image acquisition device and converts the RGB images into binary images by a restoration process and local threshold techniques. The features of the processed image (the area of the faulty portion, the number of objects in the image, and the sharpness factor of the image) are then fed as the input layer to the neural network, which uses the back-propagation algorithm to compute the weight factors and generates the desired defect classifications as output.
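
A minimal sketch of the three-feature extraction and back-propagation classifier described above; the global threshold, sharpness definition, network size, and synthetic training data are illustrative assumptions, not the authors' exact pipeline:

import numpy as np
from scipy import ndimage
from sklearn.neural_network import MLPClassifier

def extract_features(gray_image, threshold=0.5):
    """Build the three-feature input described above from a grayscale fabric image:
    faulty area, number of objects, and a simple sharpness factor.
    The threshold and the sharpness definition are illustrative assumptions."""
    binary = np.asarray(gray_image) > threshold          # local thresholding simplified to global
    faulty_area = binary.sum() / binary.size             # area of the faulty portion
    _, num_objects = ndimage.label(binary)               # number of connected objects
    gy, gx = np.gradient(np.asarray(gray_image, dtype=float))
    sharpness = float(np.mean(np.hypot(gx, gy)))         # assumed sharpness factor
    return [faulty_area, num_objects, sharpness]

# Back-propagation classifier for four defect classes (synthetic data for illustration).
rng = np.random.default_rng(0)
X = [extract_features(rng.random((32, 32))) for _ in range(40)]
y = rng.integers(0, 4, size=40)                          # four defect classes
clf = MLPClassifier(hidden_layer_sizes=(8,), max_iter=500, random_state=0).fit(X, y)
print(clf.predict([extract_features(rng.random((32, 32)))]))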

Keywords: Computer vision, image acquisition device, machine vision, multi-layer neural networks.

3628 A Study of the Planning and Designing of the Built Environment under the Green Transit-Oriented Development

Authors: Wann-Ming Wey

Abstract:

In recent years, the problems of global climate change and natural disasters have drawn public concern and attention to environmental sustainability issues. Aside from the environmental planning efforts made for the human environment, Transit-Oriented Development (TOD) has been widely used as one of the future solutions for sustainable city development. In order to be more consistent with sustainable urban development, the planning of the built environment based on the concept of Green TOD, which combines TOD and Green Urbanism, is adopted here. Urban development under Green TOD encompasses design oriented toward environmental protection, maximizing resources and the efficiency of energy use, using technology to construct green buildings and protected areas, and linking natural ecosystems and communities. Green TOD is intended not only to provide a solution to urban traffic problems, but also to direct more sustainable and greener considerations for future urban development planning and design. In this study, we use both the TOD and Green Urbanism concepts to proceed with the study of built environment planning and design. The Fuzzy Delphi Technique (FDT) is utilized to screen suitable criteria for Green TOD. Furthermore, the Fuzzy Analytic Network Process (FANP) and Quality Function Deployment (QFD) were then developed to evaluate the criteria and prioritize the alternatives. The study results can be regarded as future guidelines for built environment planning and design under Green TOD development in Taiwan.

Keywords: Green transit-oriented development, built environment, fuzzy Delphi technique, quality function deployment, fuzzy analytic network process.

3627 Development of a Standardization Methodology Assessing the Comfort Performance for Hanok

Authors: Mi-Hyang Lee, Seung-Hoon Han

Abstract:

Korean traditional residences have been built for several thousand years with deep design considerations reflecting social, cultural, and environmental values, but their meaning is vanishing because of today's different lifestyles. It is necessary, therefore, to grasp the meaning of the Korean traditional building called Hanok and to help Korean people understand its real advantages. The purpose of this study is to propose a standardization methodology for evaluating the comfort features of Korean traditional houses. This paper also seeks to build an official standard evaluation system and to integrate the aesthetic and psychological values induced by Hanok. Its comfort performance values can be divided into two large categories, physical and psychological, and fourteen methods have been defined as Korean Standards (KS). For this research, field survey data from representative Hanok types were collected for each method. This study also contains a qualitative in-depth analysis of the Hanok comfort index by professionals using the Analytic Hierarchy Process (AHP) and examines the effect of the methods. As a result, this paper could define which methods provide trustworthy outcomes and how to evaluate their strengths in terms of the spatial comfort of Hanok, using the suggested procedures for the spatial configuration of the traditional dwellings. This study finally proposes an integrated development of a standardization methodology assessing the comfort performance of Korean traditional residences, and it is expected that it can be used to evaluate the conditions of residents and of interior environments, especially in dwellings structured with wood materials like Hanok.

Keywords: Hanok, comfort performance, human condition, analytical hierarchy process.

3626 Systems Engineering Management Using Transdisciplinary Quality System Development Lifecycle Model

Authors: Mohamed Asaad Abdelrazek, Amir Taher El-Sheikh, M. Zayan, A.M. Elhady

Abstract:

The successful realization of complex systems depends not only on the technology issues and the process for implementing them, but on the management issues as well. Managing the systems development lifecycle requires technical management, and systems engineering management is that technical management. Systems engineering management is accomplished by incorporating many activities; the three major activities are development phasing, the systems engineering process and lifecycle integration. Systems engineering management activities are performed across the system development lifecycle. Due to the ever-increasing complexity of systems, as well as the difficulty of managing and tracking the development activities, new ways to achieve systems engineering management activities are required. This paper presents a systematic approach used as a design management tool applied across systems engineering management roles. In this approach, the Transdisciplinary System Development Lifecycle (TSDL) model has been modified and integrated with Quality Function Deployment (QFD). Hereinafter, the systematic approach is named the Transdisciplinary Quality System Development Lifecycle (TQSDL) model. The QFD translates the voice of the customer (VOC) into measurable technical characteristics. The modified TSDL model is based on Axiomatic Design developed by Suh, which is applicable to all designs: products, processes, systems and organizations. The TQSDL model aims to provide a robust structure and systematic thinking to support the implementation of systems engineering management roles. This approach ensures that the customer requirements are fulfilled and that all systems engineering management roles and activities are satisfied.

Keywords: Axiomatic design, quality function deployment, systems engineering management, system development lifecycle.

3625 Multi Response Optimization in Drilling Al6063/SiC/15% Metal Matrix Composite

Authors: Hari Singh, Abhishek Kamboj, Sudhir Kumar

Abstract:

This investigation proposes a grey-based Taguchi method to solve multi-response problems. The grey-based Taguchi method is based on Taguchi's design of experiments and adopts grey relational analysis (GRA) to transform multi-response problems into single-response problems. In this investigation, an attempt has been made to optimize the drilling process parameters, considering weighted output response characteristics, using grey relational analysis. The output response characteristics considered are surface roughness, burr height and hole diameter error, under the experimental conditions of cutting speed, feed rate, step angle, and cutting environment. The drilling experiments were conducted using an L27 orthogonal array. A combination of orthogonal array, design of experiments and grey relational analysis was used to ascertain the best possible drilling process parameters that give minimum surface roughness, burr height and hole diameter error. The results reveal that the combination of the Taguchi design of experiments and grey relational analysis improves the surface quality of the drilled hole.
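
A minimal sketch of the grey relational analysis step that converts the three responses into a single grade; the response values and equal weights below are invented, and smaller-is-better normalization is assumed for all three responses (surface roughness, burr height, hole diameter error):

import numpy as np

def grey_relational_grades(responses, weights=None, zeta=0.5):
    """Convert a (runs x responses) matrix of smaller-is-better responses into
    grey relational grades: normalize, compute the deviation from the ideal (1.0),
    form grey relational coefficients, then take the weighted average per run."""
    r = np.asarray(responses, dtype=float)
    norm = (r.max(axis=0) - r) / (r.max(axis=0) - r.min(axis=0))   # smaller-is-better
    dev = 1.0 - norm                                                # deviation sequence
    coeff = (dev.min() + zeta * dev.max()) / (dev + zeta * dev.max())
    w = np.full(r.shape[1], 1.0 / r.shape[1]) if weights is None else np.asarray(weights)
    return coeff @ w

# Invented example: 4 runs x (surface roughness, burr height, hole diameter error).
runs = [[2.1, 0.30, 0.05],
        [1.8, 0.25, 0.08],
        [2.5, 0.40, 0.03],
        [1.6, 0.35, 0.06]]
grades = grey_relational_grades(runs)
print("best run:", int(np.argmax(grades)) + 1, grades.round(3))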

Keywords: Metal matrix composite, drilling, optimization, step drill, surface roughness, burr height, hole diameter error.

3624 Generating State-Based Testing Models for Object-Oriented Framework Interface Classes

Authors: Jehad Al Dallal, Paul Sorenson

Abstract:

An application framework provides a reusable design and implementation for a family of software systems. Application developers extend the framework to build their particular applications using hooks. Hooks are the places identified to show how to use and customize the framework. Hooks define the Framework Interface Classes (FICs) and the specifications of their methods. As part of the development life cycle, the implementations of the FICs must be tested. Building a testing model to express the behavior of a class is an essential step in the generation of class-based test cases. The testing model has to be consistent with the specifications provided for the hooks. State-based models consisting of states and transitions are testing models well suited to object-oriented software. Typically, hand-construction of a state-based model of a class's behavior is expensive, error-prone, and may result in a model that is inconsistent with the specifications of the class methods, which misleads verification results. In this paper, a technique is introduced to automatically synthesize a state-based testing model for FICs using the specifications provided for the hooks. A tool that supports the proposed technique is introduced.

Keywords: Framework interface classes, hooks, state-based testing, testing model.

3623 Cantilever Shoring Piles with Prestressing Strands: An Experimental Approach

Authors: Hani Mekdash, Lina Jaber, Yehia Temsah

Abstract:

Underground space is becoming a necessity nowadays, especially in highly congested urban areas. Retaining underground excavations using shoring systems is essential in order to protect adjoining structures from potential damage or collapse. Reinforced Concrete Piles (RCP) supported by multiple rows of tie-back anchors are a commonly used type of shoring system in deep excavations. However, executing anchors can sometimes be challenging because they might illegally trespass on neighboring properties or be obstructed by infrastructure and other underground facilities. A technique is proposed in this paper that involves the addition of eccentric high-strength steel strands to the RCP section through ducts, without providing the pile with lateral supports. The strands are then vertically stressed externally on the pile cap using a hydraulic jack, creating a compressive strengthening force in the concrete section. An experimental study of the behavior of the shoring wall formed by the prestressed piles during the execution of an open excavation in an urban area (Beirut city) is presented, followed by numerical analysis using finite element software. Based on the experimental results, this technique is proven to be cost-effective and to provide flexible and sustainable construction of shoring works.

Keywords: Excavation, inclinometer, prestressing, shoring system.

3622 Mixtures of Monotone Networks for Prediction

Authors: Marina Velikova, Hennie Daniels, Ad Feelders

Abstract:

In many data mining applications, it is known a priori that the target function should satisfy certain constraints imposed by, for example, economic theory or a human decision maker. In this paper we consider partially monotone prediction problems, where the target variable depends monotonically on some of the input variables but not on all of them. We propose a novel method to construct prediction models in which monotone dependences with respect to some of the input variables are preserved by construction. Our method belongs to the class of mixture models. The basic idea is to convolute monotone neural networks with weight (kernel) functions to make predictions. Using simulation and real case studies, we demonstrate the application of our method. To obtain a sound assessment of the performance of our approach, we use standard neural networks with weight decay and partially monotone linear models as benchmark methods for comparison. The results show that our approach outperforms partially monotone linear models in terms of accuracy. Furthermore, the incorporation of partial monotonicity constraints not only leads to models that are in accordance with the decision maker's expertise, but also considerably reduces the model variance in comparison with standard neural networks with weight decay.
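
A minimal sketch of one standard way to obtain a monotone network building block, by constraining weights to be non-negative and using an increasing activation; the paper's mixture construction (convolution with kernel weight functions) and its handling of partial monotonicity are not reproduced here:

import numpy as np

def monotone_mlp_predict(x, W1, b1, W2, b2):
    """One-hidden-layer network that is non-decreasing in every input, because all
    weights are forced non-negative and the activation is increasing.
    (Only the monotone building block is sketched, not the full mixture model.)"""
    W1, W2 = np.abs(W1), np.abs(W2)          # enforce non-negative weights
    hidden = np.tanh(x @ W1 + b1)            # increasing activation
    return hidden @ W2 + b2

rng = np.random.default_rng(1)
W1, b1 = rng.normal(size=(3, 5)), rng.normal(size=5)
W2, b2 = rng.normal(size=(5, 1)), rng.normal(size=1)
lo = monotone_mlp_predict(np.array([0.0, 0.0, 0.0]), W1, b1, W2, b2)
hi = monotone_mlp_predict(np.array([1.0, 0.0, 0.0]), W1, b1, W2, b2)
print(lo.item(), hi.item())                  # hi >= lo by construction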

Keywords: mixture models, monotone neural networks, partially monotone models, partially monotone problems.

3621 Catalytic Decomposition of Potassium Monopersulfate: The Kinetics

Authors: Olga Gimeno, Javier Rivas, Maria Carbajo, Teresa Borralho

Abstract:

Potassium monopersulfate has been decomposed in aqueous solution in the presence of Co(II). The process has been simulated by means of a mechanism based on elementary reactions. Rate constants have been taken from literature reports or, alternatively, assimilated to those of analogous reactions occurring in Fenton's chemistry. Several operating conditions have been successfully applied.

Keywords: Monopersulfate, Oxone®, Sulfate radicals, Water treatment

3620 An Exploratory Case Study of the Interference of Erotic Transference in the Longevity of Psychoanalytic Treatment

Authors: M. Javid, R. Hassan, J. DeSilva

Abstract:

In this exploratory case study, a 37-year-old male patient who previously terminated treatment after four months of therapy with a different therapist begins anew with a 38-year-old female therapist and undergoes a similar cycle of premature termination, with added discourse caused by erotic transference. Process notes and records of the therapy indicate that, during the short course of treatment, the patient explored his difficulties navigating personal relationships, both current and past, and his difficulties coping with hypochondriasis. The therapist becomes tasked not only with navigating the patient's inner conflict but also with how she relates to the patient in the countertransference process while maintaining professional boundaries. This includes empathizing with the patient while also experiencing discomfort in the erotic transference from a professional standpoint. When the patient terminates once more, the therapist reflects on the possible reasons for termination. These include the patient's difficulty tolerating interpretations, which caused him to blame himself for past events. These interpretations were also very frequent, contributing to the emotional burden the patient experienced. The therapist reflected on the use of interpretation versus exploration of the patient's feelings, and on how exploring his feelings, including his feelings towards her, would have allowed an opportunity to explore more deeply the emotions that troubled him. This includes exploring the patient's anger and fear, which stem from unresolved conflicts in his childhood. Moreover, the erotic transference served as an enactment of previous experiences in which the patient feared losing what he loved, leading him to opt for premature termination rather than lose his ability to control the relationship and experience loss.

Keywords: Countertransference, erotic transference, premature termination, therapist-client boundaries, transference.
