Search results for: Multi Objective Programming.
Paper Count: 3817

2347 Numerical Study of Fatigue Crack Growth at a Web Stiffener of Ship Structural Details

Authors: Wentao He, Jingxi Liu, De Xie

Abstract:

It is necessary to manage fatigue crack growth (FCG) once cracks are detected during in-service inspections. In this paper, a simulation program (FCG-System) is developed using the commercial software ABAQUS and its object-oriented programming interface to simulate the fatigue crack path and to compute the corresponding fatigue life. In order to apply FCG-System to large-scale marine structures, the substructure modeling technique is integrated into the system, taking into account structural details and load shedding during crack growth. Based on the nodal forces and nodal displacements obtained from finite element analysis, a formula for shell elements to compute stress intensity factors is proposed based on the virtual crack closure technique. Cracks initiating from the intersection of the flange and the end of the web stiffener are investigated for fatigue crack paths and growth lives under water pressure loading and axial force loading, separately. It is found that the FCG-System developed by the authors can be an efficient tool for performing fatigue crack growth analysis on marine structures.
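
As a rough illustration of the virtual crack closure technique mentioned above (a minimal sketch under simplifying assumptions, not the shell-element formula proposed in the paper), the mode-I energy release rate can be estimated from the nodal force at the crack tip and the relative opening displacement behind it, and then converted to a stress intensity factor:

```python
import math

def vcct_mode_I_sif(f_tip, delta_v, da, thickness, E, plane_stress=True):
    """Estimate the mode-I stress intensity factor with a two-node VCCT scheme.

    f_tip     : nodal force normal to the crack plane at the crack tip [N]
    delta_v   : relative opening displacement of the node pair behind the tip [m]
    da        : length of the crack-tip element (virtual crack extension) [m]
    thickness : plate/shell thickness [m]
    E         : Young's modulus [Pa]
    """
    # Energy release rate: work needed to close the crack over the area da * thickness
    G_I = f_tip * delta_v / (2.0 * da * thickness)
    # Convert G to K (plane stress: E' = E; plane strain: E' = E / (1 - nu^2), nu = 0.3 assumed)
    E_eff = E if plane_stress else E / (1.0 - 0.3 ** 2)
    return math.sqrt(G_I * E_eff)

# Example with hypothetical values for a crack-tip node pair of a shell model
K_I = vcct_mode_I_sif(f_tip=1.2e4, delta_v=2.0e-5, da=5e-3, thickness=12e-3, E=206e9)
print(f"K_I = {K_I / 1e6:.1f} MPa*sqrt(m)")
```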

Keywords: Crack path, Fatigue crack, Fatigue life, FCG-System, Virtual crack closure technique.

2346 Utilizing Virtual Worlds in Education: The Implications for Practice

Authors: Teresa Coffman, Mary Beth Klinger

Abstract:

Multi-User Virtual Worlds (MUVEs) are becoming a valuable educational tool. Learning in these worlds centers on discovery and active experiences that both engage students and motivate them to explore new concepts. As educators, we need to explore these environments to determine how they can most effectively be used in our instructional practices. This paper examines the current application of virtual worlds to identify meaningful educational strategies that are being used to engage students and enhance teaching and learning.

Keywords: Virtual Environments, MUVEs, Constructivist, Distance Learning, Learner Centered.

2345 Simulation of Reactive Distillation: Comparison of Equilibrium and Nonequilibrium Stage Models

Authors: Asfaw Gezae Daful

Abstract:

In the present study, two distinctly different approaches are followed for modeling a reactive distillation column: the equilibrium stage model and the nonequilibrium stage model. These models are simulated with a computer code developed in the present study using MATLAB programming. In the equilibrium stage model, the vapor and liquid phases are assumed to be in equilibrium and allowance is made for finite reaction rates, whereas in the nonequilibrium stage model simultaneous mass transfer and reaction rates are considered. The simulated model results are validated against experimental data reported in the literature. The simulated results of the equilibrium and nonequilibrium models are compared for concentration, temperature, and reaction rate profiles in a reactive distillation column for methyl tert-butyl ether (MTBE) production. Both models show similar trends for the concentration, temperature, and reaction rate profiles, but the nonequilibrium model predictions are higher and closer to the experimental values reported in the literature.

Keywords: Reactive Distillation, Equilibrium model, Nonequilibrium model, Methyl Tert-Butyl Ether

2344 Tagged Grid Matching Based Object Detection in Wavelet Neural Network

Authors: R. Arulmurugan, P. Sengottuvelan

Abstract:

Object detection using a Wavelet Neural Network (WNN) plays a major role in image processing analysis. The existing cluster-based algorithm for co-saliency object detection operates on multiple images, but its co-saliency detection results do not handle multi-scale image objects well in a WNN. The existing Super-Resolution (SR) scheme for landmark images identifies the corresponding regions in the images and reduces the mismatching rate, but its structure-aware matching criterion does not detect multiple regions in SR images and fails to improve the object detection rate. To detect objects in high-resolution remote sensing images, a Tagged Grid Matching (TGM) technique is proposed in this paper. The TGM technique consists of three main components: object determination, object searching, and object verification in the WNN. First, object determination specifies the position and size of objects in the current image; specifying position and size on a hierarchical grid makes it easy to determine multiple objects. Second, object searching is carried out using cross-point searching; the cross points of the objects are selected to speed up the search and reduce the detection time. The final component, object verification, detects the dissimilarity of objects in the current frame: the verification process matches the search-result grid points against the stored grid points using the Gabor wavelet transform to detect the objects easily. The implementation of the TGM technique offers a significant improvement in multi-object detection rate, processing time, precision, and detection accuracy.

Keywords: Object Detection, Cross-point Searching, Wavelet Neural Network, Object Determination, Gabor Wavelet Transform, Tagged Grid Matching.

2343 Numerical Modeling of Determination of in situ Rock Mass Deformation Modulus Using the Plate Load Test

Authors: A. Khodabakhshi, A. Mortazavi

Abstract:

Accurate determination of the rock mass deformation modulus, an important design parameter, is one of the most controversial issues in many engineering projects. A 3D numerical model of the standard plate load test (PLT) using the FLAC3D code was carried out to investigate the mechanism governing the test process. Five objectives were the focus of this study. The first goal was to employ 3D modeling in the interpretation of PLTs conducted at the Bazoft dam site, Iran. The second objective was to investigate the effect of the displacement-measurement depth below the loading plates on the calculated moduli; the magnitude of the rock mass deformation modulus calculated from a PLT depends on the anchor depth, and in practice this may be a source of error in selecting a realistic deformation modulus for the rock mass. The third goal was to investigate the effect of the loading plate diameter on the calculated modulus. Another objective was to compare the modulus calculated from the ISRM formula, from the numerical modeling, and from the actual PLT carried out at the right abutment of the Bazoft dam site. Finally, the effect of plastic strains on the calculated moduli in each of the loading-unloading cycles for three loading plates was investigated. The geometry, material properties, and boundary conditions of the constructed 3D model were selected based on the in-situ conditions of the PLT at the Bazoft dam site. Good agreement was achieved between the numerical model results and the field test results.

Keywords: Deformation modulus, numerical model, plate loading test, rock mass.

2342 Optimizing Logistics for Courier Organizations with Considerations of Congestions and Pickups: A Courier Delivery System in Amman as Case Study

Authors: Nader A. Al Theeb, Zaid Abu Manneh, Ibrahim Al-Qadi

Abstract:

The traveling salesman problem (TSP) is a combinatorial integer optimization problem that asks: what is the optimal route for a vehicle to traverse in order to deliver requests to a given set of customers? It is widely used by package carrier companies' distribution centers. The main goal of applying the TSP in courier organizations is to minimize the time it takes the courier on each trip to deliver or pick up the shipments during a day. In this article, an optimization model is constructed to create a new TSP variant that optimizes the routing in a courier organization while accounting for congestion in Amman, the capital of Jordan. Real data were collected by different methods and analyzed. Then, CPLEX Concert Technology was used to solve the proposed model for some randomly generated data instances and for the real collected data. The results show a great improvement in trip time compared with the current trips, and an economic study was conducted afterwards to assess the impact of using such models.
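
To illustrate the kind of objective such a model minimizes (a brute-force sketch over a hypothetical congestion-adjusted travel-time matrix, not the authors' CPLEX formulation), a tiny instance can be enumerated directly:

```python
from itertools import permutations

# Hypothetical travel-time matrix (minutes) between the depot (0) and four stops,
# already inflated by congestion factors for the relevant time of day.
t = [
    [0, 14, 25, 18, 30],
    [14, 0, 16, 22, 27],
    [25, 16, 0, 12, 19],
    [18, 22, 12, 0, 15],
    [30, 27, 19, 15, 0],
]

def tour_time(order):
    """Total time of a depot -> stops -> depot tour for a given stop order."""
    route = (0,) + order + (0,)
    return sum(t[a][b] for a, b in zip(route, route[1:]))

# Exhaustive search is fine for tiny instances; realistic instances need an ILP or heuristic.
best = min(permutations(range(1, len(t))), key=tour_time)
print("best order:", best, "total time:", tour_time(best), "min")
```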

Keywords: Traveling salesman problem, congestion, pick-up, integer programming, package carriers, service engineering.

2341 A Perceptually Optimized Wavelet Embedded Zero Tree Image Coder

Authors: A. Bajit, M. Nahid, A. Tamtaoui, E. H. Bouyakhf

Abstract:

In this paper, we propose a Perceptually Optimized Embedded ZeroTree Image Coder (POEZIC) that applies perceptual weighting to the wavelet transform coefficients before the SPIHT encoding algorithm, in order to reach a targeted bit rate with improved perceptual quality compared to the coding quality obtained using the SPIHT algorithm alone. The paper also introduces a new objective quality metric based on a psychovisual model that integrates properties of the human visual system (HVS) and plays an important role in our POEZIC quality assessment. Our POEZIC coder is based on a vision model that incorporates various masking effects of HVS perception. Thus, our coder weights the wavelet coefficients based on that model and attempts to increase the perceptual quality for a given bit rate and observation distance. The perceptual weights for all wavelet subbands are computed based on 1) luminance masking and contrast masking, 2) the contrast sensitivity function (CSF), used to derive the perceptual decomposition weighting, and 3) the wavelet error sensitivity (WES), used to reduce the perceptual quantization errors. The new perceptually optimized codec has the same complexity as the original SPIHT technique. However, the experimental results show that our coder achieves very good performance in terms of quality measurement.
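
To make the subband-weighting idea concrete, the sketch below scales the coefficients of a 9/7 (CDF) wavelet decomposition by per-subband perceptual weights before reconstruction; the numerical weights and the use of PyWavelets are illustrative assumptions, not the CSF/WES weights derived in the paper:

```python
import numpy as np
import pywt

def perceptually_weight(image, weights, wavelet="bior4.4", levels=3):
    """Scale each detail subband by a perceptual weight before encoding.

    weights[level] = (wH, wV, wD) for the (horizontal, vertical, diagonal)
    subbands at that decomposition level (level 0 = coarsest detail).
    """
    coeffs = pywt.wavedec2(image, wavelet, level=levels)
    weighted = [coeffs[0]]  # keep the approximation subband unweighted
    for lvl, (cH, cV, cD) in enumerate(coeffs[1:]):
        wH, wV, wD = weights[lvl]
        weighted.append((cH * wH, cV * wV, cD * wD))
    return weighted

# Hypothetical weights: coarse subbands matter more to the HVS than fine ones
weights = [(1.0, 1.0, 0.9), (0.8, 0.8, 0.6), (0.5, 0.5, 0.3)]
img = np.random.rand(256, 256)          # stand-in for a test image
coeffs_w = perceptually_weight(img, weights)
recon = pywt.waverec2(coeffs_w, "bior4.4")
print(recon.shape)
```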

Keywords: DWT, linear-phase 9/7 filter, 9/7 Wavelets Error Sensitivity WES, CSF implementation approaches, JND Just Noticeable Difference, Luminance masking, Contrast masking, standard SPIHT, Objective Quality Measure, Probability Score PS.

2340 New Approach for Minimizing Wavelength Fragmentation in Wavelength-Routed WDM Networks

Authors: Sami Baraketi, Jean-Marie Garcia, Olivier Brun

Abstract:

Wavelength Division Multiplexing (WDM) is the dominant transport technology used in numerous high-capacity backbone networks based on optical infrastructures. Given the importance of the costs (CapEx and OpEx) associated with these networks, resource management is becoming increasingly important, especially how the optical circuits, called "lightpaths", are routed throughout the network. This requires efficient algorithms which provide routing strategies with the lowest cost. We focus on the lightpath routing and wavelength assignment problem, known as the RWA problem, while optimizing wavelength fragmentation over the network. Wavelength fragmentation poses a serious challenge for network operators since it leads to misuse of the wavelength spectrum and, in turn, to the rejection of new lightpath requests. In this paper, we first establish a new Integer Linear Program (ILP) for the problem based on a node-link formulation. This formulation follows a multilayer approach in which the original network is decomposed into several network layers, each corresponding to a wavelength. Furthermore, we propose an efficient heuristic for the problem based on a greedy algorithm followed by a post-treatment procedure. The obtained results show that the optimal solution is often reached. We also compare our results with those of other RWA heuristic methods.
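
A minimal sketch in the spirit of the greedy step described above (shortest-path routing followed by first-fit wavelength assignment, without the paper's post-treatment; networkx and the toy topology are assumptions for illustration):

```python
import networkx as nx

def greedy_rwa(G, requests, n_wavelengths):
    """Route each lightpath on a shortest path, then assign the first wavelength
    that is free on every link of that path (first-fit). Returns the assignment,
    or None for blocked requests."""
    used = {}      # undirected edge -> set of wavelengths already occupied
    result = {}
    for (s, d) in requests:
        path = nx.shortest_path(G, s, d)
        links = [tuple(sorted(e)) for e in zip(path, path[1:])]
        for w in range(n_wavelengths):
            if all(w not in used.get(l, set()) for l in links):
                for l in links:
                    used.setdefault(l, set()).add(w)
                result[(s, d)] = (path, w)
                break
        else:
            result[(s, d)] = None  # blocked: no wavelength free on the whole path
    return result

G = nx.cycle_graph(6)  # toy 6-node ring topology
print(greedy_rwa(G, [(0, 3), (1, 4), (0, 2)], n_wavelengths=2))
```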

Keywords: WDM, lightpath, RWA, wavelength fragmentation, optimization, linear programming, heuristic

2339 Development of Basic Patternmaking Using Parametric Modelling and AutoLISP

Authors: Haziyah Hussin, Syazwan Abdul Samad, Rosnani Jusoh

Abstract:

This study is aimed at automating basic patternmaking for traditional clothes for the purpose of mass production, using AutoCAD with its AutoLISP feature through the Hazi Attire software. A standard dress form (industrial form) in small (S), medium (M), and large (L) sizes is measured using a full-body scanning machine. The patterns for the clothes are then designed parametrically based on the measured dress form. The Hazi Attire program is used within the framework of AutoCAD to generate the basic patterns of the front bodice, back bodice, front skirt, back skirt, and sleeve block (sloper). The pattern generation is based on parameters entered by the user; in this study, the parameters were determined from the measured size of the dress form. The finalized pattern parameters show that the pattern fits the dress form perfectly. Since the patterns are generated almost instantly, this demonstrates that AutoLISP programming can reduce the manufacturing lead time for mass production of traditional clothes.

Keywords: Apparel, AutoLISP, Malay Traditional Clothes, Pattern Generation.

2338 Fuel Cell/DC-DC Convertor Control by Sliding Mode Method

Authors: Farzad Abdous

Abstract:

A fuel cell system requires a regulating circuit for voltage and current in order to control power when connected to other generating devices or to a load. In this paper, the fuel cell system and converter, which together form a multi-variable system, are controlled using the sliding mode method. The use of a weighting matrix in the design procedure made it possible to regulate the speed of control. Simulation results show the robustness and accuracy of the proposed controller in tracking the desired outputs.
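
A minimal single-variable sketch of the sliding mode idea (the converter output voltage is reduced to a first-order model; the plant constants, gains, and horizon are illustrative assumptions, not the multi-variable design with a weighting matrix used in the paper):

```python
import numpy as np

# First-order converter output-voltage model: dv/dt = -a*v + b*u  (illustrative)
a, b = 50.0, 800.0
v_ref = 12.0          # desired output voltage [V]
K = 2000.0            # reaching-law gain (sets how fast the surface is reached)
dt, T = 1e-5, 0.05    # time step and horizon [s]

v, log = 0.0, []
for _ in range(int(T / dt)):
    s = v - v_ref                      # sliding surface: voltage error
    u = (a * v - K * np.sign(s)) / b   # equivalent control + switching term
    u = np.clip(u, 0.0, 1.0)           # duty cycle is physically bounded
    v += dt * (-a * v + b * u)         # Euler integration of the plant
    log.append(v)

print(f"final voltage: {log[-1]:.3f} V (target {v_ref} V)")
```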

Keywords: DC-DC converter, Fuel cell, PEM, Sliding mode control.

2337 Modeling of Co-Cu Elution From Clinoptilolite using Neural Network

Authors: John Kabuba, Antoine Mulaba-Bafubiandi

Abstract:

The elution process for the removal of Co and Cu from clinoptilolite as an ion-exchanger was investigated using three parameters: bed volume, pH, and contact time. The present study has shown quantitatively that acid concentration has a significant effect on the elution process. The favorable eluant concentrations were found to be 2 M HCl and 2 M H2SO4, respectively. The multi-component equilibrium relationship in the process can be very complex, and perhaps ill-defined. In such circumstances, it is preferable to use a non-parametric technique such as a neural network to represent the equilibrium relationship.
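
A minimal sketch of the kind of non-parametric fit described (a small feed-forward network mapping bed volume, pH, and contact time to an elution response; the synthetic data and the scikit-learn model are assumptions for illustration only):

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)

# Synthetic stand-in data: columns = bed volume, pH, contact time (min)
X = rng.uniform([1, 1, 10], [20, 6, 120], size=(200, 3))
# Hypothetical elution response (% recovered) with some noise
y = 60 + 2.0 * X[:, 0] - 5.0 * X[:, 1] + 0.1 * X[:, 2] + rng.normal(0, 2, 200)
y = np.clip(y, 0, 100)

model = make_pipeline(StandardScaler(),
                      MLPRegressor(hidden_layer_sizes=(8, 8), max_iter=5000,
                                   random_state=0))
model.fit(X, y)
print("predicted elution [%]:", model.predict([[10.0, 2.0, 60.0]]))
```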

Keywords: Clinoptilolite, elution, modeling, neural network.

2336 Production of Biocomposites Using Chars Obtained by Co-Pyrolysis of Olive Pomace with Plastic Wastes

Authors: Esra Yel, Tabriz Aslanov, Merve Sogancioglu, Suheyla Kocaman, Gulnare Ahmetli

Abstract:

The disposal of waste plastics has become a major worldwide environmental problem. Pyrolysis of waste plastics is one route to waste minimization and recycling that has been gaining interest. In pyrolysis, the pyrolysed material is separated into gas, liquid (both fuels) and solid (char) products. All fractions have uses and economic value depending on their characteristics. The first objective of this study is to determine the co-pyrolysis product fractions of waste HDPE (high-density polyethylene) and LDPE (low-density polyethylene) with olive pomace (OP) and to determine the quality of the solid char product. Chars obtained from pyrolysis at 700 °C were used as an additive in biocomposite preparation. As the second objective, the effects of char on biocomposite quality were investigated. Pyrolysis runs were performed at 700 °C with a heating rate of 5 °C/min. Biocomposites were prepared by mixing the chars with bisphenol-F type epoxy resin at various wt%. Biocomposite properties were determined by measuring the electrical conductivity, surface hardness, Young's modulus, and tensile strength of the composites. The best electrical conductivity results were obtained with the HDPE-OP char. For the HDPE-OP and LDPE-OP chars, compared to neat epoxy, the tensile strength values of the composites increased by 102% and 78%, respectively, at a 10% char dose. The hardness measurements showed similar results to the tensile tests, since there is a correlation between hardness and tensile strength.

Keywords: Pyrolysis, olive pomace, char, biocomposite, PE plastics.

2335 Digital Filter for Cochlear Implant Implemented on a Field-Programmable Gate Array

Authors: Rekha V. Dundur, M. V. Latte, S. Y. Kulkarni, M. K. Venkatesha

Abstract:

The advent of multi-million gate Field Programmable Gate Arrays (FPGAs) with hardware support for multiplication opens an opportunity to recreate a significant portion of the front end of a human cochlea using this technology. In this paper, we describe the implementation of the cochlear filter and show that it is entirely suited to a single-device XC3S500 FPGA implementation. The filter gave a good fit to real-time data with efficient hardware usage.

Keywords: Cochlea, FPGA, IIR (Infinite Impulse Response), Multiplier.

2334 Irrigation Scheduling for Maize and Indian-mustard based on Daily Crop Water Requirement in a Semi-Arid Region

Authors: Vijay Shankar, C.S.P. Ojha, K.S. Hari Prasad

Abstract:

Maize and Indian mustard are significant crops in the semi-arid climate zones of India. Improved water management requires precise scheduling of irrigation, which in turn requires an accurate computation of daily crop evapotranspiration (ETc). Daily crop evapotranspiration is the product of reference evapotranspiration (ET0) and growth-stage-specific crop coefficients modified for daily variation. The first objective of the present study is to develop crop coefficients Kc for maize and Indian mustard. The estimated values of Kc for maize at the four crop growth stages (initial, development, mid-season, and late season) are 0.55, 1.08, 1.25, and 0.75, respectively, and for Indian mustard the Kc values at the four growth stages are 0.3, 0.6, 1.12, and 0.35, respectively. The second objective of the study is to compute daily crop evapotranspiration from ET0 and the crop coefficients. The average daily ETc of maize varied from about 2.5 mm/d in the early growing period to more than 6.5 mm/d at mid-season. The peak ETc of maize is 8.3 mm/d, which occurred 64 days after sowing at the reproductive growth stage when the leaf area index was 4.54. In the case of Indian mustard, the average ETc is 1 mm/d at the initial stage, more than 1.8 mm/d at mid-season, and reaches a peak value of 2.12 mm/d 56 days after sowing. Improved irrigation schedules have been simulated based on daily crop evapotranspiration and field-measured data. The simulation shows a close match between the modeled and field moisture status prevalent during the crop season.
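
The daily computation described above reduces to ETc = Kc x ET0, with Kc interpolated between growth stages. A minimal sketch is given below; the Kc values are those reported for maize, while the stage lengths and the ET0 value are hypothetical:

```python
def daily_kc(day, stages):
    """Piecewise Kc curve: constant in the initial and mid-season stages,
    linearly interpolated through the development and late-season stages.
    `stages` is a list of (stage_length_days, kc_at_stage_end)."""
    kc_prev, start = stages[0][1], 0
    for i, (length, kc_end) in enumerate(stages):
        if day <= start + length:
            if i in (0, 2):            # initial and mid-season: constant Kc
                return kc_end
            frac = (day - start) / length
            return kc_prev + frac * (kc_end - kc_prev)
        kc_prev, start = kc_end, start + length
    return stages[-1][1]

# Maize Kc values from the study; stage lengths (days) are assumed for illustration
maize_stages = [(25, 0.55), (35, 1.25), (40, 1.25), (30, 0.75)]

et0 = 6.6                    # reference evapotranspiration for the day [mm/d] (assumed)
day = 64                     # days after sowing
etc = daily_kc(day, maize_stages) * et0
print(f"Kc = {daily_kc(day, maize_stages):.2f}, ETc = {etc:.2f} mm/d")
```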

Keywords: Crop coefficient, Crop evapotranspiration, Field moisture, Irrigation Scheduling

2333 Working Children and Adolescents and the Vicious Circle of Poverty from the Perspective of Gunnar Myrdal’s Theory of Circular Cumulative Causation: Analysis and Implementation of a Probit Model to Brazil

Authors: J. Leige Lopes, L. Aparecida Bastos, R. Monteiro da Silva

Abstract:

The objective of this paper is to study the work of children and adolescents and the vicious circle of poverty from the perspective of Gunnar Myrdal's Theory of Circular Cumulative Causation. The aim is to show that a person who starts working in the juvenile phase of life will tend to be classified as poor or extremely poor as an adult, which can be observed in the case of Brazil, more specifically in the north and northeast. To do this, the methodology used was statistical and econometric analysis through the application of a probit model. The main results show that if people reside in the northeastern region of Brazil, have a low educational level, and start their professional life before the age of 18, the likelihood that they will be poor or extremely poor increases. There is a consensus in the literature that one of the causes of the intergenerational transmission of poverty is related to child labor, because when people start their professional life while still in childhood or adolescence, they end up sacrificing their studies. Because of their low level of education, these children or adolescents are forced to perform low-paid work and abandon school, becoming adults who will be classified as poor or extremely poor. As a result of poverty, parents may be forced to send their children out to work when they are young, so that in the future they will also become poor adults, a process that is characterized as the "vicious circle of poverty."
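
A minimal sketch of the kind of probit specification described (synthetic data with hypothetical regressors for region, schooling, and early labor-market entry; statsmodels is an assumption, as the paper does not state its software):

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 5000

northeast = rng.integers(0, 2, n)          # 1 if the person lives in the Northeast
years_schooling = rng.integers(0, 16, n)   # completed years of schooling
worked_before_18 = rng.integers(0, 2, n)   # 1 if professional life started before 18

# Hypothetical latent index: early work and low schooling raise poverty risk
latent = (-0.5 + 0.6 * northeast - 0.12 * years_schooling
          + 0.8 * worked_before_18 + rng.standard_normal(n))
poor = (latent > 0).astype(int)            # 1 if classified as poor/extremely poor

X = sm.add_constant(np.column_stack([northeast, years_schooling, worked_before_18]))
result = sm.Probit(poor, X).fit(disp=False)
print(result.params)   # estimated coefficients; signs should match the narrative above
```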

Keywords: Children, adolescents, Gunnar Myrdal, poverty, vicious circle.

2332 Coupling Time-Domain Analysis for Dynamic Positioning during S-Lay Installation

Authors: Sun Li-ping, Zhu Jian-xun, Liu Sheng-nan

Abstract:

In order to study the performance of the dynamic positioning system during S-lay operations, the dynamic positioning system is simulated with the hull-stinger-pipe coupling effect. The rollers of the stinger are simulated using generalized elastic contact theory. The stinger is composed of Morison members. The force on the pipe is calculated by the lumped mass method. A time-domain analysis of the fully coupled barge model is carried out, combining a PID controller, a Kalman filter, and thrust allocation using the Sequential Quadratic Programming method. The effect of the hull wave-frequency motion on the pipe-stinger coupling force and on the dynamic positioning system is also analyzed. In addition, the effect of S-lay operations on dynamic positioning accuracy is studied. The simulation results are validated by checking the pipe stress against the API criterion. The effects of heave and yaw motion on the hull-stinger-pipe coupling force and the dynamic positioning system cannot be ignored. It is important to decrease the barge's pitch motion and to lay pipe in head seas in order to improve the safety of the S-lay installation and dynamic positioning.
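
As a small illustration of the thrust-allocation step (minimizing thruster effort subject to producing the commanded surge/sway force and yaw moment, solved with an SQP routine; the thruster layout, lever arms, limits, and commanded loads are hypothetical):

```python
import numpy as np
from scipy.optimize import minimize

# Thruster configuration matrix B maps thrusts u (N) to [Fx, Fy, Mz] on the barge.
# Columns: two stern thrusters acting along x, one tunnel thruster acting along y (assumed).
B = np.array([[1.0, 1.0, 0.0],
              [0.0, 0.0, 1.0],
              [-5.0, 5.0, 30.0]])          # lever arms in metres (assumed)

tau_cmd = np.array([2.0e5, 5.0e4, 1.0e6])  # commanded [Fx, Fy, Mz] from the PID loop

def power(u):                  # quadratic proxy for thruster power consumption
    return np.sum(u ** 2)

cons = {"type": "eq", "fun": lambda u: B @ u - tau_cmd}   # produce the commanded loads
bounds = [(-3.0e5, 3.0e5)] * 3                            # thrust saturation (assumed)

res = minimize(power, x0=np.zeros(3), method="SLSQP", bounds=bounds, constraints=cons)
print("thrusts [N]:", res.x, "residual:", B @ res.x - tau_cmd)
```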

Keywords: S-lay operation, dynamic positioning, coupling motion, time domain, allocation of thrust.

2331 Evaluation of Optimum Performance of Lateral Intakes

Authors: Mohammad Reza Pirestani, Hamid Reza Vosoghifar, Pegah Jazayeri

Abstract:

In designing river intakes and diversion structures, it is paramount that the sediments entering the intake are minimized or, if possible, completely separated. Due to high water velocity, sediments can significantly damage hydraulic structures, especially when mechanical equipment like pumps and turbines is used. This subsequently results in wasted water and electricity and further costs. Therefore, it is prudent to investigate and analyze the performance of lateral intakes affected by sediment control structures. Laboratory experiments, despite their vast potential and benefits, can face certain limitations and challenges, including limitations in equipment and facilities, space constraints, equipment errors such as lack of adequate precision or malfunction, and, finally, human error. Research has shown that in order to achieve the ultimate goal of intake structure design, which is to design long-lasting and proficient structures, the best combination of sediment control structures (such as sills and submerged vanes) along with the parameters that increase their performance (such as diversion angle and location) should be determined. Cost, difficulty of execution, and environmental impacts should also be included in evaluating the optimal design. This solution can then be applied to similar problems in the future. Consequently, the model used to arrive at the optimal design requires a high level of accuracy and precision in order to avoid improper design and execution of projects. The process of creating and executing the design should be as comprehensive and applicable as possible. Therefore, it is important that the influential parameters and vital criteria are fully understood and applied at all stages of choosing the optimal design. In this article, the parameters influencing the optimal performance of the intake, their advantages and disadvantages, and the efficiency of a given design are studied. Then, a multi-criteria decision matrix is utilized to choose the optimal model that can be used to determine the proper parameters for constructing the intake.
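
A minimal sketch of the weighted decision-matrix step described above (the alternatives, criteria, raw scores, and weights are hypothetical; cost-type criteria are inverted so that higher is always better):

```python
import numpy as np

# Rows: candidate designs (e.g. sill only, submerged vanes only, combined).
# Columns: evaluation criteria.
criteria = ["sediment control", "cost", "ease of execution", "environmental impact"]
benefit = [True, False, True, False]        # False = lower raw score is better
scores = np.array([[6.0, 4.0, 8.0, 3.0],
                   [7.5, 6.0, 6.0, 4.0],
                   [9.0, 8.0, 5.0, 5.0]])
weights = np.array([0.4, 0.25, 0.2, 0.15])  # must sum to 1

# Normalize each column to [0, 1]; invert the cost-type criteria
norm = scores / scores.max(axis=0)
for j, is_benefit in enumerate(benefit):
    if not is_benefit:
        norm[:, j] = scores[:, j].min() / scores[:, j]

totals = norm @ weights
print("weighted scores:", totals, "-> best design index:", int(np.argmax(totals)))
```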

Keywords: Diversion structures, lateral intake, multi-criteria decision making, optimal design, sediment control.

2330 Development of an Automated Quality Management System to Control District Heating

Authors: Nigina Toktasynova, Sholpan Sagyndykova, Zhanat Kenzhebayeva, Maksat Kalimoldayev, Mariya Ishimova, Irbulat Utepbergenov

Abstract:

To solve these problems, we investigated the management system of a heating enterprise, including strategic planning based on the balanced scorecard (BSC), quality management in accordance with the Quality Management System (QMS) standard ISO 9001, and analysis of the system based on expert judgment using fuzzy inference. To carry out this work, we used the theory of fuzzy sets, the QMS in accordance with ISO 9001, the BSC, the construction of business processes according to the IDEF0 notation, and modeling with MATLAB simulation tools and LabVIEW graphical programming. The results of the work are as follows: we determined possibilities for improving the management of the heat-supply plant based on the QMS; after justification and adaptation, a software tool was used to automate a series of functions for management, for the reduction of resources, and for keeping the system up to date; and an application for analyzing the QMS based on fuzzy inference was created, with a novel organization of communication between the software and the application that enables the analysis of relevant enterprise management data.
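
To make the fuzzy-inference part concrete, the sketch below evaluates two Mamdani-style rules with triangular membership functions in plain Python; the linguistic variables, rules, and output values are hypothetical, not those of the developed QMS application:

```python
def tri(x, a, b, c):
    """Triangular membership function with feet at a and c and peak at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def qms_score(process_conformity, customer_satisfaction):
    """Two illustrative rules on a 0-100 input scale:
       R1: IF conformity is high AND satisfaction is high THEN QMS is good (0.9)
       R2: IF conformity is low  OR  satisfaction is low  THEN QMS is poor (0.2)
       Defuzzify with a weighted average of the rule outputs."""
    high_c = tri(process_conformity, 50, 100, 150)
    low_c = tri(process_conformity, -50, 0, 50)
    high_s = tri(customer_satisfaction, 50, 100, 150)
    low_s = tri(customer_satisfaction, -50, 0, 50)

    w1 = min(high_c, high_s)          # fuzzy AND -> min
    w2 = max(low_c, low_s)            # fuzzy OR  -> max
    if w1 + w2 == 0:
        return 0.5
    return (w1 * 0.9 + w2 * 0.2) / (w1 + w2)

print(qms_score(80, 70))   # both inputs fairly high -> score close to "good"
```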

Keywords: Balanced scorecard, heat supply, quality management system, the theory of fuzzy sets.

2329 Truck Routing Problem Considering Platooning and Drivers’ Breaks

Authors: Xiaoyuan Yan, Min Xu

Abstract:

Truck platooning refers to a convoy of digitally connected automated trucks traveling safely with a small inter-vehicle gap. It has been identified as one of the most promising and applicable technologies for automated and sustainable freight transportation. Although truck platooning delivers significant energy-saving benefits, it cannot be realized without good coordination of drivers' shifts to lead the platoons subject to their mandatory breaks. Therefore, this study aims to route a fleet of trucks to their destinations using the least amount of fuel by maximizing platooning opportunities under the regulations on drivers' mandatory breaks. We formulate this platoon coordination problem as a mixed-integer linear programming problem and solve it with CPLEX. Numerical experiments are conducted to demonstrate the effectiveness and efficiency of the proposed model. In addition, we explore the impact of drivers' compulsory breaks on the fuel-saving performance. The results show only a slight increase in total fuel costs in the presence of drivers' compulsory breaks, thanks to the driving-while-resting benefit provided to the trailing trucks. This study may serve as a guide for operators of automated freight transportation.

Keywords: Truck platooning, route optimization, compulsory breaks, energy saving.

2328 Organizational Strategy for Technology Convergence

Authors: Seongykyoon Jeong, Sungki Lee, Jaeyun Kim, Seunghun Oh, Kiho Kwak

Abstract:

The purpose of this article is to identify the practical strategies of R&D (research and development) entities for developing converging technology in an organizational context. Based on the multi-assignation of technological domains in patents derived from all government-supported R&D projects over 13 years, we find that technology convergence is likely to occur when a university develops the technology on its own or as one of the collaborators. These results reflect the important role of universities in developing converging technology.

Keywords: Interdisciplinary, Research and development strategy, Technology convergence

2327 Blood Lymphocyte and Neutrophil Response of Cultured Rainbow Trout, Oncorhynchus mykiss, Administered Varying Dosages of an Oral Immunomodulator – ‘Fin-Immune™’

Authors: Duane Barker, John Holliday

Abstract:

In a 10-week (May to August 2008) Phase I trial, 840 1+ rainbow trout, Oncorhynchus mykiss, received a commercial oral immunomodulator, Fin-Immune™, at four different dosages (0, 10, 20 and 30 mg g-1) to evaluate immune response and growth. The overall objective was to determine an optimal dosage of this product for rainbow trout that provides enhanced immunity with maximal growth and health. Biweekly blood samples were taken from 10 randomly selected fish in each tank (30 samples per treatment) to evaluate the duration of the enhanced immunity conferred by Fin-Immune™. The immunological assessment included serum white blood cell (lymphocyte, neutrophil) densities and blood hematocrit (packed cell volume %). Of these three variables, only lymphocyte density increased significantly among trout fed Fin-Immune™ at 20 and 30 mg g-1, peaking at week 6. At week 7, all trout were switched to regular feed (lacking Fin-Immune™), and by week 10 lymphocyte densities had decreased in all treatments but were still greater than at week 0. There was growth impairment at the highest dose of Fin-Immune™ tested (30 mg g-1), which can be associated with a physiological compensatory mechanism due to a dose-specific threshold level. Thus, the main objective of this Phase I study was achieved: the 20 mg g-1 dose of Fin-Immune™ should be the most efficacious (of those we tested) to use for a Phase II disease challenge trial.

Keywords: Blood Lymphocyte, Neutrophil Response of Cultured Rainbow Trout, Oncorhynchus mykiss, Oral Immunomodulator – 'Fin-Immune™'.

2326 Two Approaches to Code Mobility in an Agent-based E-commerce System

Authors: Costin Badica, Maria Ganzha, Marcin Paprzycki

Abstract:

Recently, a model multi-agent e-commerce system based on mobile buyer agents and transfer of strategy modules was proposed. In this paper a different approach to code mobility is introduced, where agent mobility is replaced by local agent creation supplemented by similar code mobility as in the original proposal. UML diagrams of agents involved in the new approach to mobility and the augmented system activity diagram are presented and discussed.

Keywords: Agent system, agent mobility, code mobility, e-commerce, UML formalization.

2325 Hybrid Approach for Country’s Performance Evaluation

Authors: C. Slim

Abstract:

This paper presents an integrated model, which hybridizes data envelopment analysis (DEA) and support vector machines (SVM), to classify countries according to their efficiency and performance. The model takes into account multi-dimensional indicators, the decision-making hierarchy, and the relativity of measurement. Starting from a set of performance indicators that is as exhaustive as possible, a process of successive aggregations has been developed to attain an overall evaluation of a country's competitiveness.
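
A minimal sketch of the DEA building block of such a hybrid (an input-oriented CCR efficiency score for one decision-making unit, solved as a linear program with SciPy; the input/output data are hypothetical):

```python
import numpy as np
from scipy.optimize import linprog

# Hypothetical data: 4 countries (DMUs), 2 inputs (rows of X), 1 output (row of Y)
X = np.array([[4.0, 6.0, 8.0, 5.0],
              [3.0, 2.0, 5.0, 4.0]])
Y = np.array([[10.0, 11.0, 12.0, 9.0]])

def ccr_efficiency(o):
    """Input-oriented CCR score of DMU o: minimize theta subject to
       X @ lam <= theta * X[:, o],  Y @ lam >= Y[:, o],  lam >= 0."""
    n = X.shape[1]
    c = np.concatenate(([1.0], np.zeros(n)))             # variables: [theta, lam]
    A_in = np.hstack((-X[:, [o]], X))                     # X lam - theta x_o <= 0
    A_out = np.hstack((np.zeros((Y.shape[0], 1)), -Y))    # -Y lam <= -y_o
    A_ub = np.vstack((A_in, A_out))
    b_ub = np.concatenate((np.zeros(X.shape[0]), -Y[:, o]))
    bounds = [(None, None)] + [(0, None)] * n
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
    return res.fun

for o in range(X.shape[1]):
    print(f"DMU {o}: efficiency = {ccr_efficiency(o):.3f}")
```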

Keywords: Artificial neural networks, support vector machine, data envelopment analysis, aggregations, indicators of performance.

2324 Driver Readiness in Autonomous Vehicle Take-Overs

Authors: Abdurrahman Arslanyilmaz, Salman Al Matouq, Durmus V. Doner

Abstract:

Level 3 autonomous vehicles are able to take full responsibility for the control of the vehicle unless a system boundary is reached or a system failure occurs, in which case the driver is expected to take over control of the vehicle. When this happens, the driver is often not aware of the traffic situation or is engaged in a secondary task. Factors affecting the duration and quality of take-overs in these situations include the secondary task type and nature, traffic density, take-over request (TOR) time, and TOR warning type and modality. However, to the best of the authors' knowledge, no prior study has examined the time buffer for TORs when a system failure occurs immediately before an intersection. The first objective of this study is to investigate the effect of the time buffer (3 and 7 seconds) on the duration and quality of take-overs when a system failure occurs just prior to an intersection. In addition, eye tracking has become one of the most popular methods to report what individuals view, in what order, for how long, and how often, and it has been utilized in driving simulations with various objectives. However, to the best of the authors' knowledge, no study has compared drivers' eye-gaze behavior under the two different time buffers in order to examine drivers' attention to and comprehension of salient information. The second objective is therefore to understand the driver's attentional focus on, and comprehension of, salient traffic-related information presented on different parts of the dashboard and on the road.

Keywords: Autonomous vehicles, driving simulation, eye gaze, attention, comprehension, take-over duration, take-over quality, time buffer.

2323 A proposed High-Resolution Time-Frequency Distribution for the Analysis of Multicomponent and Speech Signals

Authors: D. Boutana, B. Barkat , F. Marir

Abstract:

In this paper, we propose a novel time-frequency distribution (TFD) for the analysis of multicomponent signals. In particular, we use synthetic as well as real-life speech signals to demonstrate the superiority of the proposed TFD over some existing ones. In the comparison, we consider cross-term suppression and the concentration of signal energy around its instantaneous frequency (IF).

Keywords: Cohen's class, Multicomponent signal, Separable kernel, Speech signal, Time-frequency resolution.

2322 Average Current Estimation Technique for Reliability Analysis of Multiple Semiconductor Interconnects

Authors: Ki-Young Kim, Jae-Ho Lim, Deok-Min Kim, Seok-Yoon Kim

Abstract:

Average current analysis, which checks the impact of current flow, is very important for guaranteeing the reliability of semiconductor systems. As semiconductor process technologies improve, coupling capacitances often become larger than self-capacitances. In this paper, we propose an analytic technique for analyzing the average current on interconnects in multi-conductor structures. The proposed technique has been shown to yield acceptable errors compared to HSPICE results while providing computational efficiency.

Keywords: current moment, interconnect modeling, reliability analysis, worst-case switching

2321 Analyzing the Factors that Cause Parallel Performance Degradation in Parallel Graph-Based Computations Using Graph500

Authors: Mustafa Elfituri, Jonathan Cook

Abstract:

Recently, graph-based computations have become more important in large-scale scientific computing, as they provide a methodology for modeling many types of relations between independent objects. They are being actively used in fields as varied as biology, social networks, cybersecurity, and computer networks. At the same time, graph problems have properties such as irregularity and poor locality that make their performance characteristics different from those of regular applications. Therefore, parallelizing graph algorithms is a hard and challenging task. Initial evidence is that standard computer architectures do not perform very well on graph algorithms, and little is known about exactly what causes this. The Graph500 benchmark is a representative application for parallel graph-based computations, which have highly irregular data access and are driven more by traversing connected data than by computation. In this paper, we present results from analyzing the performance of various example implementations of Graph500, including a shared memory (OpenMP) version, a distributed (MPI) version, and a hybrid version. We measured and analyzed all the factors that affect its performance in order to identify possible changes that would improve it. The results are discussed in relation to the factors that contribute to performance degradation.
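
For readers unfamiliar with the benchmark, the timed kernel of Graph500 is a breadth-first search over a synthetic Kronecker graph. The sequential sketch below (not the benchmark code itself) shows why the access pattern is irregular: neighbor lists are visited in a data-dependent order, which is what the parallel OpenMP/MPI versions must contend with:

```python
from collections import deque

def bfs_parent_tree(adj, source):
    """Level-synchronous BFS returning a parent map, as Graph500's kernel does.
    adj: dict mapping vertex -> list of neighbors."""
    parent = {source: source}
    frontier = deque([source])
    while frontier:
        u = frontier.popleft()
        for v in adj[u]:               # data-dependent, poorly localized accesses
            if v not in parent:
                parent[v] = u
                frontier.append(v)
    return parent

# Tiny example graph (an undirected edge list turned into adjacency lists)
edges = [(0, 1), (0, 2), (1, 3), (2, 3), (3, 4), (4, 5)]
adj = {}
for u, v in edges:
    adj.setdefault(u, []).append(v)
    adj.setdefault(v, []).append(u)

print(bfs_parent_tree(adj, source=0))
```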

Keywords: Graph computation, Graph500 benchmark, parallel architectures, parallel programming, workload characterization.

2320 Characterization of Indoor Power Lines as Data Communication Channels: Experimental Details and Results

Authors: Sheroz Khan, A. F. Salami, W. A. Lawal, AHM Zahirul Alam, Shihab Abdel Hameed, M. J. E. Salami

Abstract:

In this paper, a multi-branch power line is modeled using the ABCD matrix approach to assess its suitability as a communication channel. The model is simulated in MATLAB to investigate the effects of multiple loading, multipath propagation, and load mismatching. The channel transfer function is obtained and investigated for different cable lengths and different numbers of bridged taps under given loading conditions.
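
A minimal sketch of the ABCD-matrix approach (cascading two-port sections of a line and a bridged tap, then reading off the source-to-load transfer function; the line parameters, lengths, and impedances are hypothetical, not measured cable constants):

```python
import numpy as np

def line_abcd(gamma, Z0, length):
    """ABCD matrix of a uniform transmission-line section."""
    gl = gamma * length
    return np.array([[np.cosh(gl), Z0 * np.sinh(gl)],
                     [np.sinh(gl) / Z0, np.cosh(gl)]])

def shunt_abcd(Z_branch):
    """ABCD matrix of a shunt element (e.g. a bridged tap seen as an impedance)."""
    return np.array([[1.0, 0.0], [1.0 / Z_branch, 1.0]])

f = 1e6                                    # evaluate at 1 MHz (illustrative)
w = 2 * np.pi * f
gamma = 1e-3 + 1j * w / 2e8                # assumed propagation constant [1/m]
Z0, Zs, ZL = 80.0, 50.0, 50.0              # characteristic, source, load impedances

# Tap modeled as an open-circuited 5 m stub: Z_in = Z0 / tanh(gamma * l_tap)
Z_tap = Z0 / np.tanh(gamma * 5.0)
ABCD = line_abcd(gamma, Z0, 30.0) @ shunt_abcd(Z_tap) @ line_abcd(gamma, Z0, 20.0)

A, B, C, D = ABCD.ravel()
H = ZL / (A * ZL + B + C * Zs * ZL + D * Zs)   # source-to-load voltage transfer
print(f"|H(f)| = {abs(H):.4f} at {f / 1e6:.1f} MHz")
```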

Keywords: Power line Communication, Transfer Function, Channel Modeling, Signal Transmission.

2319 Ubiquitous Life People Informatics Engine (U-Life PIE): Wearable Health Promotion System

Authors: Yi-Ping Lo, Shi-Yao Wei, Chih-Chun Ma

Abstract:

Since Google launched Google Glass in 2012, a number of commercial wearable devices have been released, such as smart belts, smart bands, smart shoes, and smart clothes. However, most of these devices act as sensors that display measurement readings, and few of them provide interactive feedback to the user. Furthermore, these devices are single-task devices that are not able to communicate with each other. In this paper, a new health promotion system, the Ubiquitous Life People Informatics Engine (U-Life PIE), is presented. This engine consists of the People Informatics Engine (PIE) and an interactive user interface. The PIE collects all the data from the compatible devices, analyzes this data comprehensively, and communicates between devices via various application programming interfaces. All the data and information are stored on the PIE unit, so the user is able to view instantaneous and historical data on their mobile devices at any time. The system also provides real-time, hands-free feedback and instructions through the user interface visually, acoustically, and tactilely. This feedback and these instructions suggest that the user adjust their posture or habits in order to avoid physical injuries and prevent illness.

Keywords: Machine learning, user interface, user experience, Internet of things, health promotion.

2318 Infrastructure Change Monitoring Using Multitemporal Multispectral Satellite Images

Authors: U. Datta

Abstract:

The main objective of this study is to find a suitable approach to monitor land infrastructure growth over a period of time using multispectral satellite images. Bi-temporal change detection methods are unable to indicate the continuous change occurring over a long period of time. To achieve the objective, the approach used here estimates a statistical model from a series of multispectral image data acquired over a long period of time, assuming there is no considerable change during that period, and then compares it with the multispectral image data obtained at a later time. The change is estimated pixel-wise. A statistical composite hypothesis technique is used for pixel-based change detection in a defined region. The generalized likelihood ratio test (GLRT) is used to detect a changed pixel from the probabilistic model estimated for the corresponding pixel. The changed pixel is detected assuming that the images have been co-registered prior to estimation. To minimize error due to co-registration, the 8-neighborhood pixels around the pixel under test are also considered. Multispectral images from Sentinel-2 and Landsat-8 from 2015 to 2018 are used for this purpose. There are several challenges in this method. The first and foremost challenge is to obtain a sufficiently large number of datasets for multivariate distribution modelling, since a large number of images are always discarded due to cloud coverage. Due to imperfect modelling, there will be a high probability of false alarms. The overall conclusion that can be drawn from this work is that the probabilistic method described in this paper has given some promising results, which need to be pursued further.
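
A minimal per-pixel sketch of the Gaussian GLRT idea described above (fit a multivariate normal to the no-change time series of one pixel, then threshold the resulting Mahalanobis-type statistic for a new acquisition; the data, band count, and threshold are synthetic placeholders, not Sentinel-2/Landsat-8 values):

```python
import numpy as np

def glrt_change_statistic(history, new_pixel, ridge=1e-6):
    """history: (T, bands) reflectances of one pixel over the no-change period.
    Returns the squared Mahalanobis distance of the new observation, which is a
    monotone function of the Gaussian GLRT statistic for a mean shift."""
    mu = history.mean(axis=0)
    cov = np.cov(history, rowvar=False) + ridge * np.eye(history.shape[1])
    diff = new_pixel - mu
    return float(diff @ np.linalg.solve(cov, diff))

rng = np.random.default_rng(0)
bands = 4
history = rng.normal(0.2, 0.02, size=(30, bands))        # 30 cloud-free acquisitions
unchanged = rng.normal(0.2, 0.02, size=bands)
changed = unchanged + np.array([0.15, 0.10, 0.05, 0.0])   # simulated new construction

threshold = 20.0   # would normally come from a chi-square quantile / false-alarm rate
for name, px in [("unchanged", unchanged), ("changed", changed)]:
    d2 = glrt_change_statistic(history, px)
    print(f"{name}: statistic = {d2:.1f} -> change = {d2 > threshold}")
```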

Keywords: Co-registration, GLRT, infrastructure growth, multispectral, multitemporal, pixel-based change detection.
