Search results for: Application of RFID in cars
384 Topographical Image Transference Compatibility Generated Through Moiré Technique Applying Parametrical Softwares of Computer Assisted Design
Authors: M. V. G. Silva, J. Gazzola, I. M. Dal Fabbro, A. C. L. Lino
Abstract:
Computer aided design relies on the support of parametric software in the design of machine components as well as of any other pieces of interest. The complexity of the element under study sometimes creates difficulties for computer design, or may even generate mistakes in the final body conception. Reverse engineering techniques are based on the transformation of already conceived body images into a matrix of points which can be visualized by the design software. The literature exhibits several techniques to obtain the dimensional fields of machine components, such as contact instruments (MMC), calipers and optical methods such as laser scanners, holograms as well as moiré methods. The objective of this research work was to analyze the moiré technique as an instrument of reverse engineering, applied to bodies of non-complex geometry such as simple solid figures, creating matrices of points. These matrices were forwarded to a parametric software named SolidWorks to generate the virtual object. The volume obtained by mechanical means, i.e., by caliper, the volume obtained through the moiré method and the volume generated by the SolidWorks software were compared and found to be in close agreement. This research work suggests the application of phase shifting moiré methods as an instrument of reverse engineering, serving also to support farm machinery element designs.
Keywords: Reverse engineering, moiré technique, three-dimensional image generation.
383 Analysis and Research of Two-Level Scheduling Profile for Open Real-Time System
Authors: Yongxian Jin, Jingzhou Huang
Abstract:
In an open real-time system environment, the coexistence of different kinds of real-time and non-real-time applications makes the system scheduling mechanism face new requirements and challenges. An existing two-level scheduling scheme for open real-time systems is introduced, and it is pointed out that because hard and soft real-time applications are scheduled non-distinctively as the same type of real-time application, the Quality of Service (QoS) cannot be guaranteed. The scheme has two flaws. First, it cannot differentiate the scheduling priorities of hard and soft real-time applications; that is to say, it neglects the characteristic differences between hard real-time applications and soft ones, so it does not suit a more complex real-time environment. Second, the worst-case execution time of soft real-time applications cannot be predicted exactly, so it is not worthwhile to spend large overheads to ensure that no soft real-time application misses its deadline, and doing so may waste resources. In order to solve this problem, a novel two-level real-time scheduling mechanism (including a scheduling profile and a scheduling algorithm) which adds a process for dealing with soft real-time applications is proposed. Finally, we verify the real-time scheduling mechanism from the two aspects of theory and experiment. The results indicate that our scheduling mechanism can achieve the following objectives. (1) It can reflect the difference in priority when scheduling hard and soft real-time applications. (2) It can ensure the schedulability of hard real-time applications, that is, their deadline miss rate is 0. (3) The overall deadline miss rate of soft real-time applications can be kept below 1. (4) The deadline of a non-real-time application is not set, whereas the scheduling algorithm used by the server S0 can avoid the "starvation" of jobs and increase QoS. By doing that, our scheduling mechanism is more compatible with different types of applications and it will be applied more widely.
Keywords: Hard real-time, two-level scheduling profile, open real-time system, non-distinctive schedule, soft real-time
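As a rough illustration of how a top-level dispatcher might separate the three application classes discussed in this abstract, the Python sketch below schedules hard real-time tasks strictly ahead of soft ones (earliest deadline first within each class) and reserves a periodic slot for non-real-time jobs so they do not starve. The class names, background share and queue handling are assumptions for illustration, not the paper's algorithm.

```python
# Illustrative sketch only (not the paper's algorithm): a top-level dispatcher
# that schedules hard real-time tasks strictly ahead of soft ones, earliest
# deadline first within each class, and reserves a periodic slot for
# non-real-time jobs so they never starve.
from dataclasses import dataclass, field
import heapq

@dataclass(order=True)
class Task:
    deadline: float
    name: str = field(compare=False)
    kind: str = field(compare=False)          # "hard", "soft" or "none"

class TwoLevelScheduler:
    def __init__(self, background_share=0.1):
        self.hard, self.soft, self.background = [], [], []
        self.period = max(1, round(1 / background_share))  # every Nth slot serves non-RT work
        self.slot = 0

    def submit(self, task):
        if task.kind in ("hard", "soft"):
            heapq.heappush(self.hard if task.kind == "hard" else self.soft, task)
        else:
            self.background.append(task)       # FIFO queue for non-real-time jobs

    def next_task(self):
        self.slot += 1
        if self.background and self.slot % self.period == 0:
            return self.background.pop(0)      # anti-starvation slot for non-RT jobs
        for queue in (self.hard, self.soft):   # hard class always precedes soft class
            if queue:
                return heapq.heappop(queue)
        return self.background.pop(0) if self.background else None
```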
382 Analysis of Highway Slope Failure by an Application of the Stereographic Projection
Authors: Chin-Yu Lee, Iau-Teh Wang
Abstract:
Mountain road slope failures triggered by earthquake activities and torrential rain frequently create disasters. Province Road No. 24 is a main route to Wutai Township, and the area of the study is located at the mileages between 46K and 47K along the road. The road has suffered frequent damage as a result of landslides and slope failures during typhoon seasons, so an understanding of the sliding behavior in the area appears to be necessary. The study aims to understand the mechanism of the slope failures and to look for ways to deal with the situation. In order to achieve these objectives, this paper assesses the potential slope sliding behavior based on theoretical and structural geology data interpretation. The study showed an intimate relationship between the landslide behavior of the slopes and the stratum materials; based on the structural geology analysis method, the slope stability was analyzed and the slope safety coefficient was found in order to predict the sites of the failed layer. According to the case study and parameter analysis results, the main slip direction of the slope corresponds to the site located in the southeast area. A rainfall-induced rise of the groundwater level is found to be the main cause of the landslide mechanism. In the future, effective horizontal drains need to be set up at appropriate locations; these can effectively restrain mountain road slope failures and increase the stability of the slope.
Keywords: Slope stability analysis, stereographic projection, wedge failure.
381 Enhanced Magnetoelastic Response near Morphotropic Phase Boundary in Ferromagnetic Materials: Experimental and Theoretical Analysis
Authors: Murtaza Adil, Sen Yang, Zhou Chao, Song Xiaoping
Abstract:
The morphotropic phase boundary (MPB) has recently attracted considerable interest in ferromagnetic systems as a route to enhanced magnetoelastic response. In the present study, the structural and magnetoelastic properties of the MPB-involved ferromagnetic Tb1-xGdxFe2 (0≤x≤1) system have been investigated. The change of easy magnetic direction from <111> to <100> with increasing x up to the MPB composition of x=0.9 is detected by step-scanned [440] synchrotron X-ray diffraction reflections. The Gd substitution for Tb shifts the composition for anisotropy compensation to near the MPB composition of x=0.9, which was confirmed by the analysis of the detailed scanned XRD, the magnetization curves and the calculation of the first anisotropy constant K1. A spin configuration diagram accompanied by the different crystal structures of Tb1-xGdxFe2 was constructed. The calculated first anisotropy constant K1 shows a minimum value at the MPB composition of x=0.9. In addition, a large ratio between the magnetostriction and the absolute value of the first anisotropy constant, |λS/K1|, appears at the MPB composition, which makes it a potential material for magnetostrictive applications. Based on the experimental results, a theoretical approach was also proposed to show that the facilitated magnetization rotation and enhanced magnetoelastic effect near the MPB composition are a consequence of the anisotropic flattening of the free energy of the ferromagnetic crystal. Our work indicates the universal existence of MPBs in ferromagnetic materials, which is important for substantial improvement of magnetic and magnetostrictive properties and may provide a new route to develop advanced functional materials.
Keywords: Free energy, lattice distortion, magnetic anisotropy, magnetostriction, morphotropic phase boundary.
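For reference, the easy-axis change and the K1 minimum discussed above can be read against the standard cubic magnetocrystalline anisotropy energy (the textbook form, quoted here as background rather than as a result of this paper):

E_a = K_1\left(\alpha_1^2\alpha_2^2 + \alpha_2^2\alpha_3^2 + \alpha_3^2\alpha_1^2\right) + K_2\,\alpha_1^2\alpha_2^2\alpha_3^2 ,

where the \alpha_i are the direction cosines of the magnetization. In this convention K_1 < 0 favors <111> easy axes and K_1 > 0 favors <100>, so K_1 passing through a minimum near zero at x = 0.9 is consistent with the reported easy-axis change and with the maximized |λS/K1| at the MPB composition.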
380 Wireless Sensor Networks for Swiftlet Farms Monitoring
Authors: Al-Khalid Othman, Wan A. Wan Zainal Abidin, Kee M. Lee, Hushairi Zen, Tengku. M. A. Zulcaffle, Kuryati Kipli
Abstract:
This paper provides an in-depth study of a Wireless Sensor Network (WSN) application to monitor and control the swiftlet habitat. A complete system is designed and developed that includes the hardware design of the nodes, Graphical User Interface (GUI) software, the sensor network, and interconnectivity for remote data access and management. A system architecture is proposed to address the requirements for habitat monitoring. Such application-driven design provides and identifies important areas of further work in data sampling, communications and networking. For this monitoring system, an MTS400 sensor node, IRIS and MicaZ radio transceivers, and a USB-interfaced gateway base station of Crossbow (Xbow) Technology WSN are employed. The GUI of this monitoring system is written using the Laboratory Virtual Instrumentation Engineering Workbench (LabVIEW) along with Xbow Technology drivers provided by National Instruments. As a result, this monitoring system is capable of collecting data and presenting it in both tables and waveform charts for further analysis. The system is also able to send notification messages by email, provided Internet connectivity is available, whenever changes in the habitat occur at remote sites (swiftlet farms). Other functions implemented in this system are a database system for record and management purposes, and remote access through the Internet using the LogMeIn software. Finally, this research draws the conclusion that a WSN for monitoring swiftlet habitat can be effectively used to monitor and manage the swiftlet farming industry in Sarawak.
Keywords: Swiftlet, WSN, habitat monitoring, networking.
379 Power System Damping Using Hierarchical Fuzzy Multi-Input PSS and Communication Lines Active Power Deviations Input and SVC
Authors: Mohammad Hasan Raouf, Ahmad Rouhani, Mohammad Abedini, Ebrahim Rasooli Anarmarzi
Abstract:
In this paper, the application of a hierarchical fuzzy system (HFS) based on the MPSS and an SVC in a multi-machine environment is studied. The effect of the communication lines active power variance signal between two regions (ΔPTie-line), as one of the inputs of the hierarchical fuzzy multi-input PSS and SVC (HFMPSS & SVC), on increasing the damping of low frequency oscillations is also examined. In the MPSS, to achieve better efficiency, an auxiliary reactive power deviation signal (ΔQ) is added to the ΔP + Δω input-type PSS. The number of rules grows exponentially with the number of variables in a classic fuzzy system; to reduce the number of rules, the HFS consists of a number of low-dimensional fuzzy systems in a hierarchical structure. The phasor model of the SVC is described and used in this paper. The performance of the MPSS, of the ΔPTie-line based HFMPSS, and of the proposed method in damping the inter-area mode of oscillation is examined in response to disturbances. The efficiency of the proposed model is evaluated by simulating a four-machine power system. Results show that the proposed method performs satisfactorily within the whole range of disturbances and reduces the cost of the system.
Keywords: Communication lines active power variance signal, Hierarchical fuzzy system (HFS), Multi-input power system stabilizer (MPSS), Static VAR compensator (SVC).
378 Ovshinsky Effect by Quantum Mechanics
Authors: Thomas V. Prevenslik
Abstract:
Ovshinsky initiated scientific research in the field of amorphous and disordered materials that continues to this day. The Ovshinsky Effect, where the resistance of thin GST films is significantly reduced upon the application of a low voltage, is of fundamental importance in phase-change random access memory (PC-RAM) devices. GST stands for GeSbTe chalcogenide-type glasses. However, the Ovshinsky Effect is not without controversy. Ovshinsky thought the resistance of GST films is reduced by the redistribution of charge carriers; whereas others at that time, including many PC-RAM researchers today, argue that the GST resistance changes because the GST amorphous state is transformed to the crystalline state by melting, the heat being supplied by external heaters. In this controversy, quantum mechanics (QM) asserts that the heat capacity of GST films vanishes, and therefore melting cannot occur, as the heat supplied cannot be conserved by an increase in GST film temperature. By precluding melting, QM re-opens the controversy between the melting and charge carrier mechanisms. Supporting analysis is presented to show that instead of increasing the GST film temperature, conservation proceeds by the QED-induced creation of photons within the GST film, the QED photons being confined by TIR. QED stands for quantum electrodynamics and TIR for total internal reflection. The TIR confinement of QED photons is enhanced by the fact that the heat energy absorbed in the GST film is concentrated in the TIR mode because of the film's high surface-to-volume ratio. The QED photons, having Planck energy beyond the ultraviolet, produce excitons by the photoelectric effect, the electrons and holes of which reduce the GST film resistance.
Keywords: Ovshinsky, phase change memory, PC-RAM, chalcogenide, quantum mechanics, quantum electrodynamics.
377 A Legal Opinion on Mitigation and Adaptation on Air Pollution Strategies for Local Governments in South Africa
Authors: Marjone Van Der Bank, C. M. Van Der Bank
Abstract:
This paper presents an overview of the foundation and evolution of environment-related problems in local governments, with specific reference to air pollution in South Africa. Local government has a direct mandate in terms of the Constitution of the Republic of South Africa, 1996 (hereafter, the Constitution). This mandate for local governments to protect, fulfil, respect and promote the Bill of Rights in respect of their powers and functions creates confusion about where a local government fits in when addressing the problem of climate change in South Africa. A reflection on the evolving legislation, developments and processes regarding climate change that shaped the local government dispensation in South Africa is addressed through the notion of developmental local government. This paper seeks to examine the advances in mitigation and adaptation regulation of air pollution and their application in South Africa. The study adopts a qualitative approach that draws on South African national legislation as well as an interpretation of international strategies. A literature review was conducted on the various aspects of law in order to support the argument on mitigation and adaptation strategies. The paper presents a detailed discussion of the current legislation and the position as it currently stands, as well as the relevant protections as outlined in the National Environmental Management Act and the National Environmental Management: Air Quality Act. It then proceeds to outline the responsibilities of local governments in South Africa with regard to air pollution mitigation and adaptation strategies.
Keywords: Adaptation, climate change, disaster, local governments, mitigation.
376 Implementation of a Paraconsistent-Fuzzy Digital PID Controller in a Level Control Process
Authors: H. M. Côrtes, J. I. Da Silva Filho, M. F. Blos, B. S. Zanon
Abstract:
In modern society, the demand for higher quality in industrial production calls for new control techniques and machinery automation. In this context, this work presents the implementation of a Paraconsistent-Fuzzy Digital PID controller. The controller is based on the treatment of inconsistencies both in Paraconsistent Logic and in Fuzzy Logic. Paraconsistent analysis is performed on the signals applied to the system inputs using concepts from the Paraconsistent Annotated Logic with annotation of two values (PAL2v). The signals resulting from the paraconsistent analysis are two values defined as Dc (Degree of Certainty) and Dct (Degree of Contradiction), which receive a treatment according to Fuzzy Logic theory; the resulting output of the logic actions is a single value called the crisp value, which is used to control the dynamic system. The application of the proposed model was demonstrated through an example. Initially, the Paraconsistent-Fuzzy Digital PID controller was built and tested in an isolated MATLAB environment and then compared to the equivalent Digital PID function of this software for a standard step excitation. After this step, a level control plant was modeled to execute the controller function on a physical model, making the tests closer to actual operating conditions. For this, the control parameters (proportional, integral and derivative) were determined for the configuration of the conventional Digital PID controller and of the Paraconsistent-Fuzzy Digital PID, and the control loops were assembled in MATLAB with the respective transfer function of the plant. Finally, the results of the comparison of the level control process between the Paraconsistent-Fuzzy Digital PID controller and the conventional Digital PID controller are presented.
Keywords: Fuzzy logic, paraconsistent annotated logic, level control, digital PID.
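As a rough illustration of the quantities mentioned in this abstract, the sketch below computes the two PAL2v measures from favorable and unfavorable evidence and a conventional positional digital PID term. The evidence values, gains and variable names are assumptions for illustration; the fuzzy stage that maps (Dc, Dct) to the crisp control value is not reproduced here.

```python
# Sketch under assumed PAL2v conventions: favorable evidence mu and unfavorable
# evidence lam, both in [0, 1], give the degree of certainty and contradiction.
def pal2v(mu, lam):
    dc = mu - lam          # degree of certainty (Dc)
    dct = mu + lam - 1.0   # degree of contradiction (Dct)
    return dc, dct

class DigitalPID:
    """Textbook positional discrete PID; gains are placeholders, not the paper's tuning."""
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, setpoint, measurement):
        error = setpoint - measurement
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

# Example usage with made-up evidence and level values
dc, dct = pal2v(mu=0.8, lam=0.3)
pid = DigitalPID(kp=2.0, ki=0.5, kd=0.1, dt=0.1)
u = pid.update(setpoint=1.0, measurement=0.7)
```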
375 An Automated Approach to the Nozzle Configuration of Polycrystalline Diamond Compact Drill Bits for Effective Cuttings Removal
Authors: R. Suresh, Pavan Kumar Nimmagadda, Ming Zo Tan, Shane Hart, Sharp Ugwuocha
Abstract:
Polycrystalline diamond compact (PDC) drill bits are extensively used in the oil and gas industry as well as the mining industry. Industry engineers continually improve upon PDC drill bit designs and hydraulic conditions. Optimized injection nozzles play a key role in improving the drilling performance and efficiency of these ever-changing PDC drill bits. In the first part of this study, computational fluid dynamics (CFD) modelling is performed to investigate the hydrodynamic characteristics of drilling fluid flow around the PDC drill bit. An open-source CFD package, OpenFOAM, simulates the flow around the drill bit based on the field input data. A specifically developed console application integrates the entire CFD process, including domain extraction, meshing, solving the governing equations and post-processing. The results from the OpenFOAM solver are then compared with those of the ANSYS Fluent software, and the data from both software programs agree. The second part of the paper describes the parametric study of the PDC drill bit nozzle to determine the effect of parameters such as the number of nozzles, nozzle velocity, nozzle radial position and orientation on the flow field characteristics and bit washing patterns. After analyzing a series of nozzle configurations, the best configuration is identified and recommendations are made for modifying the PDC bit design.
Keywords: ANSYS Fluent, computational fluid dynamics, nozzle configuration, OpenFOAM, PDC drill bit.
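The abstract does not detail how the console application drives OpenFOAM, but a minimal sketch of such a pipeline, chaining standard OpenFOAM utilities from Python, might look as follows. The case directory, solver choice and post-processing function are illustrative assumptions, not the authors' actual tool.

```python
# Minimal sketch of chaining OpenFOAM utilities from a console application.
# Case directory, solver choice (simpleFoam) and post-processing step are
# illustrative assumptions; the paper's actual pipeline is not detailed here.
import subprocess
from pathlib import Path

def run(step, case_dir):
    log = Path(case_dir) / f"log.{step[0]}"
    with open(log, "w") as fh:
        subprocess.run(step, cwd=case_dir, stdout=fh, stderr=subprocess.STDOUT, check=True)

def simulate(case_dir="pdcBitCase"):
    for step in (
        ["blockMesh"],                                # background mesh of the flow domain
        ["snappyHexMesh", "-overwrite"],              # body-fitted mesh around the bit geometry
        ["simpleFoam"],                               # steady incompressible flow solution
        ["postProcess", "-func", "wallShearStress"],  # bit-washing indicator on the walls
    ):
        run(step, case_dir)

if __name__ == "__main__":
    simulate()
```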
374 Application of RS and GIS Technique for Identifying Groundwater Potential Zone in Gomukhi Nadhi Sub Basin, South India
Authors: Punitha Periyasamy, Mahalingam Sudalaimuthu, Sachikanta Nanda, Arasu Sundaram
Abstract:
India holds 17.5% of the world's population but has only 2% of the total geographical area of the world, of which 27.35% is categorized as wasteland due to a lack of, or scarce, groundwater. There is therefore a demand for large amounts of groundwater for agricultural and non-agricultural activities to balance the growth rate. With this in mind, an attempt is made to find the groundwater potential zones in the Gomukhi Nadhi sub basin of the Vellar River basin, Tamil Nadu, India, covering an area of 1146.6 sq. km and consisting of 9 blocks from Peddanaickanpalayam to Virudhachalam. Thematic maps of geology, geomorphology, lineament, land use and land cover, and drainage are prepared for the study area using IRS P6 data. Collateral data, including rainfall, water level and soil maps, are collected for analysis and inference. The digital elevation model (DEM) is generated using Shuttle Radar Topographic Mission (SRTM) data and the slope of the study area is obtained. ArcGIS 10.1 acts as a powerful spatial analysis tool to find the groundwater potential zones in the study area by means of weighted overlay analysis. Each individual parameter of the thematic maps is ranked and weighted in accordance with its influence on increasing the groundwater level. The potential zones in the study area are classified as Very Good, Good, Moderate and Poor, with areal extents of 15.67, 381.06, 575.38 and 174.49 sq. km, respectively.
Keywords: ArcGIS, DEM, Groundwater, Recharge, Weighted Overlay.
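As a rough illustration of the weighted overlay step described above, the NumPy sketch below combines ranked thematic layers into a composite score and classifies it into potential zones. The ranks, weights and class breaks are placeholders, not the calibrated values of the study (which was carried out in ArcGIS 10.1).

```python
# Weighted overlay sketch: each thematic layer is already reclassified to ranks
# (e.g. 1-5); the weights and class breaks below are illustrative only.
import numpy as np

def weighted_overlay(layers, weights):
    """layers: dict name -> ranked 2-D array; weights: dict name -> weight, summing to 1."""
    assert abs(sum(weights.values()) - 1.0) < 1e-6
    score = sum(weights[name] * layers[name].astype(float) for name in layers)
    # classify the composite score into groundwater potential zones
    zones = np.digitize(score, bins=[2.0, 3.0, 4.0])  # 0=Poor, 1=Moderate, 2=Good, 3=Very Good
    return score, zones

rng = np.random.default_rng(0)
layers = {k: rng.integers(1, 6, size=(4, 4)) for k in
          ("geology", "geomorphology", "lineament", "landuse", "drainage", "slope")}
weights = {"geology": 0.2, "geomorphology": 0.25, "lineament": 0.15,
           "landuse": 0.1, "drainage": 0.15, "slope": 0.15}
score, zones = weighted_overlay(layers, weights)
```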
373 Preparation of Fe3Si/Ferrite Micro- and Nano-Powder Composite
Authors: R. Bures, M. Streckova, M. Faberova, P. Kurek
Abstract:
A composite material based on Fe3Si micro-particles and Mn-Zn nano-ferrite was prepared using powder metallurgy technology. A sol-gel process followed by autocombustion was used for the synthesis of the Mn0.8Zn0.2Fe2O4 ferrite. 3 wt.% of mechanically milled ferrite was mixed with the Fe3Si powder alloy. The mixed micro-nano powder system was homogenized by resonant acoustic mixing using a Resodyn LabRAM mixer. This non-invasive homogenization technique was used to preserve the spherical morphology of the Fe3Si powder particles. Uniaxial cold pressing in a closed die at a pressure of 600 MPa was applied to obtain a compact sample. Microwave sintering of the green compact was carried out at 800°C for 20 minutes in air. The density of the powders and of the composite was measured by He pycnometry. The impulse excitation method was used to measure the elastic properties of the sintered composite. Mechanical properties were evaluated by measurement of the transverse rupture strength (TRS) and Vickers hardness (HV). Resistivity was measured by the 4-point probe method. The ferrite phase distribution in the volume of the composite was documented by metallographic analysis. It has been found that nano-ferrite particles distributed among the micro-particles of the Fe3Si powder alloy led to a high relative density (~93%) and suitable mechanical properties (TRS >100 MPa, HV ~1 GPa, E-modulus ~140 GPa) of the composite. The high electrical resistivity (~6.7 ohm·cm) of the prepared composite indicates its potential application as a soft magnetic material at medium and high frequencies.
Keywords: Micro- and nano-composite, soft magnetic materials, microwave sintering, mechanical and electric properties.
372 Developing Proof Demonstration Skills in Teaching Mathematics in the Secondary School
Authors: M. Rodionov, Z. Dedovets
Abstract:
The article describes a theoretical concept for teaching secondary school students proof demonstration skills in mathematics. It describes in detail different levels of mastery of the concept of proof, which correspond to Piaget's idea of there being three distinct and progressively more complex stages in the development of human reflection. Lessons for each level contain a specific combination of visual-figurative components and deductive reasoning. It is vital at the transition point between levels to carefully and rigorously recalibrate teaching to reflect the development of more complex reflective understanding. This can apply even within the same age range, since students will develop at different speeds and to different potentials. The authors argue that this requires an aware and adaptive approach to lessons to reflect this complexity and variation. The authors also contend that effective teaching which enables students to properly understand the implementation of proof arguments must develop specific competences. These are: understanding of the importance of completeness and generality in making a valid argument; being task focused; having an internalised locus of control; and being flexible in approach and evaluation. These criteria must be correlated with the systematic application of corresponding methodologies which are most likely to achieve success. The particular pedagogical decisions which are made to deliver this objective are illustrated by concrete examples from existing secondary school mathematics courses. The proposed theoretical concept formed the basis of the development of methodological materials which have been tested in 47 secondary schools.
Keywords: Education, teaching of mathematics, proof, deductive reasoning, secondary school.
371 Use of Vegetation and Geo-Jute in Erosion Control of Slopes in a Sub-Tropical Climate
Authors: Mohammad Shariful Islam, Shamima Nasrin, Md. Shahidul Islam, Farzana Rahman Moury
Abstract:
Protection of slopes and embankments from erosion has become an important issue in Bangladesh. The construction of strong structures requires large capital, integrated design and high maintenance costs. Strong-structure methods also have a negative impact on the environment and sometimes do not function for the design period. Plantation of the vetiver system along slopes is an alternative solution. Vetiver not only serves the purpose of slope protection but also adds a green environment, reducing pollution. Vetiver is available in almost all the districts of Bangladesh. This paper presents the application of the vetiver system, together with geo-jute, for slope protection and erosion control of embankments and slopes. In-situ shear tests have been conducted on the vetiver-rooted soil system to find the shear strength. The shear strength and effective soil cohesion of the vetiver-rooted soil matrix are respectively 2.0 times and 2.1 times higher than those of the bare soil. Similar trends have been found in direct shear tests conducted on laboratory reconstituted samples. Field trials have been conducted on road embankments and slope protection with vetiver at different sites. During the period of vetiver root growth, soil protection is provided by geo-jute. As the geo-jute degrades with time, the vetiver roots grow and take over the function of the geo-jute. Slope stability analyses showed that the vegetation increases the factor of safety significantly.
Keywords: Erosion, geo-jute, green technology, vegetation.
370 Numerical Modal Analysis of a Multi-Material 3D-Printed Composite Bushing and Its Application
Authors: Paweł Żur, Alicja Żur, Andrzej Baier
Abstract:
Modal analysis is a crucial tool in the field of engineering for understanding the dynamic behavior of structures. In this study, numerical modal analysis was conducted on a multi-material 3D-printed composite bushing, which comprised a polylactic acid (PLA) outer shell and a thermoplastic polyurethane (TPU) flexible filling. The objective was to investigate the modal characteristics of the bushing and assess its potential for practical applications. The analysis involved the development of a finite element model of the bushing, which was subsequently subjected to modal analysis techniques. Natural frequencies, mode shapes, and damping ratios were determined to identify the dominant vibration modes and their corresponding responses. The numerical modal analysis provided valuable insights into the dynamic behavior of the bushing, enabling a comprehensive understanding of its structural integrity and performance. Furthermore, the study expanded its scope by investigating the entire shaft mounting of a small electric car, incorporating the 3D-printed composite bushing. The shaft mounting system was subjected to numerical modal analysis to evaluate its dynamic characteristics and potential vibrational issues. The results of the modal analysis highlighted the effectiveness of the 3D-printed composite bushing in minimizing vibrations and optimizing the performance of the shaft mounting system. The findings contribute to the broader field of composite material applications in automotive engineering and provide valuable insights for the design and optimization of similar components.
Keywords: 3D printing, composite bushing, modal analysis, multi-material.
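The core computation behind a numerical modal analysis like the one described above is the generalized eigenvalue problem (K - ω²M)φ = 0. The sketch below solves it for a tiny 3-DOF system with SciPy; the stiffness and mass matrices are illustrative placeholders, not the FE model of the bushing or shaft mounting.

```python
# Generalized eigenvalue problem (K - w^2 M) phi = 0 behind a numerical modal
# analysis. The 3-DOF stiffness/mass matrices are illustrative only; a real FE
# model of the bushing would be assembled by the FE package.
import numpy as np
from scipy.linalg import eigh

K = np.array([[ 2.0, -1.0,  0.0],
              [-1.0,  2.0, -1.0],
              [ 0.0, -1.0,  1.0]]) * 1e6   # N/m
M = np.diag([1.0, 1.0, 0.5])               # kg

eigvals, modes = eigh(K, M)                # ascending eigenvalues w^2
natural_freqs_hz = np.sqrt(eigvals) / (2 * np.pi)
print(natural_freqs_hz)                    # natural frequencies of the toy system
print(modes[:, 0])                         # first mode shape
```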
369 Packet Forwarding with Multiprotocol Label Switching
Authors: R. N. Pise, S. A. Kulkarni, R. V. Pawar
Abstract:
Multiprotocol Label Switching (MPLS) is an emerging technology that aims to address many of the existing issues associated with packet forwarding in today's internetworking environment. It provides a method of forwarding packets at a high rate of speed by combining the speed and performance of Layer 2 with the scalability and IP intelligence of Layer 3. In a traditional IP (Internet Protocol) routing network, a router analyzes the destination IP address contained in the packet header. The router independently determines the next hop for the packet using the destination IP address and the interior gateway protocol. This process is repeated at each hop to deliver the packet to its final destination. In contrast, in the MPLS forwarding paradigm, routers on the edge of the network (label edge routers) attach labels to packets based on the Forwarding Equivalence Class (FEC). Packets are then forwarded through the MPLS domain, based on their associated FECs, by swapping the labels at routers in the core of the network, called label switch routers. Simply swapping the label, instead of referencing the IP header of the packet in the routing table at each hop, provides a more efficient manner of forwarding packets, which in turn allows traffic to be forwarded at tremendous speeds and gives granular control over the path taken by a packet. This paper deals with the MPLS forwarding mechanism, the implementation of the MPLS datapath, and test results showing the performance comparison of MPLS and IP routing. The discussion focuses primarily on MPLS IP packet networks, by far the most common application of MPLS today.
Keywords: Forwarding equivalence class, incoming label map, label, next hop label forwarding entry.
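To make the label-swapping idea concrete, here is a toy sketch of an incoming label map (ILM) pointing at next-hop label forwarding entries (NHLFE), as named in the keywords. The labels, interfaces and data structures are made-up illustrations rather than any router's actual implementation.

```python
# Toy label-swapping sketch for a label switch router: the incoming label map
# (ILM) points to a next-hop label forwarding entry (NHLFE). Labels, interfaces
# and the FEC binding are made-up examples.
from dataclasses import dataclass

@dataclass
class NHLFE:
    out_label: int      # label swapped onto the packet
    out_interface: str  # interface toward the next hop

ilm = {                 # incoming label -> NHLFE
    100: NHLFE(out_label=201, out_interface="ge-0/0/1"),
    101: NHLFE(out_label=305, out_interface="ge-0/0/2"),
}

def forward(packet):
    """Swap the top label and pick the outgoing interface without any IP lookup."""
    entry = ilm[packet["label"]]
    packet["label"] = entry.out_label
    return entry.out_interface

pkt = {"label": 100, "payload": b"..."}
print(forward(pkt), pkt["label"])   # ge-0/0/1 201
```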
368 Electricity Load Modeling: An Application to Italian Market
Authors: Giovanni Masala, Stefania Marica
Abstract:
Forecasting electricity load plays a crucial role in decision making and planning for economic purposes. Moreover, in light of the recent privatization and deregulation of the power industry, forecasting future electricity load has turned out to be a very challenging problem. Empirical data about electricity load highlight a clear seasonal behavior (higher load during the winter season), which is partly due to climatic effects. We also emphasize the presence of load periodicity on a weekly basis (electricity load is usually lower on weekends or holidays) and on a daily basis (electricity load is clearly influenced by the hour). Finally, a long-term trend may depend on the general economic situation (for example, industrial production affects electricity load). All these features must be captured by the model. The purpose of this paper is then to build an hourly electricity load model. The deterministic component of the model requires non-linear regression and Fourier series, while we investigate the stochastic component through econometric tools. The calibration of the model parameters is performed using data coming from the Italian market over a 6-year period (2007-2012). Then, we perform a Monte Carlo simulation in order to compare the simulated data with the real data (both in-sample and out-of-sample inspection). The reliability of the model is deduced thanks to standard tests which highlight a good fit of the simulated values.
Keywords: ARMA-GARCH process, electricity load, fitting tests, Fourier series, Monte Carlo simulation, non-linear regression.
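As an illustration of the deterministic component described above, the sketch below fits a linear trend plus Fourier terms for daily and weekly periodicity by least squares on a synthetic hourly series. The harmonic counts and data are assumptions, and the ARMA-GARCH stage applied to the residuals is omitted.

```python
# Least-squares fit of a deterministic load component: linear trend plus Fourier
# terms for daily (24 h) and weekly (168 h) periodicity. Harmonic counts and the
# synthetic series are illustrative; the ARMA-GARCH residual model is omitted.
import numpy as np

def fourier_design(t, periods=(24.0, 168.0), harmonics=2):
    cols = [np.ones_like(t), t]                       # intercept and linear trend
    for p in periods:
        for k in range(1, harmonics + 1):
            cols.append(np.sin(2 * np.pi * k * t / p))
            cols.append(np.cos(2 * np.pi * k * t / p))
    return np.column_stack(cols)

t = np.arange(24 * 7 * 4, dtype=float)                # four weeks of hourly data
load = (50 + 0.01 * t + 10 * np.sin(2 * np.pi * t / 24)
        + 5 * np.sin(2 * np.pi * t / 168)
        + np.random.default_rng(1).normal(0, 1, t.size))

X = fourier_design(t)
coef, *_ = np.linalg.lstsq(X, load, rcond=None)
residuals = load - X @ coef                           # passed on to the stochastic model
```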
367 An Analysis of Collapse Mechanism of Thin-Walled Circular Tubes Subjected to Bending
Authors: Somya Poonaya, Chawalit Thinvongpituk, Umphisak Teeboonma
Abstract:
Circular tubes have been widely used as structural members in engineering applications, and their collapse behavior has therefore been studied for many decades, focusing on its energy absorption characteristics. In order to predict the collapse behavior of such members, one could rely on the use of finite element codes or experiments. These tools are helpful and highly accurate, but costly and require extensive running time. Therefore, an approximate model of the tube collapse mechanism is an alternative for the early design stage. This paper aims to develop a closed-form solution for a thin-walled circular tube subjected to bending. It extends the model of Elchalakani et al. (Int. J. Mech. Sci. 2002; 44:1117-1143) to include the rate of energy dissipation of the rolling hinge in the circumferential direction. The 3-D geometrical collapse mechanism was analyzed by adding oblique hinge lines along the longitudinal tube within the length of the plastically deforming zone. The model is based on the principle of energy rate conservation; therefore, the rates of internal energy dissipation were calculated for each hinge line, defined in terms of the velocity field. Inextensional deformation and perfectly plastic material behavior were assumed in the derivation of the deformation energy rate. The analytical results were compared with experimental results. The experiments were conducted with a number of tubes having various D/t ratios. Good agreement between analysis and experiment was achieved.
Keywords: Bending, circular tube, energy, mechanism.
366 Dynamic Threshold Adjustment Approach For Neural Networks
Authors: Hamza A. Ali, Waleed A. J. Rasheed
Abstract:
The use of neural networks for recognition applications is generally constrained by their inherent parameter inflexibility after the training phase. This means no adaptation is accommodated for input variations that have any influence on the network parameters. Attempts were made in this work to design a neural network that includes an additional mechanism that adjusts the threshold values according to input pattern variations. The new approach is based on splitting the whole network into two subnets: a main traditional net and a supportive net. The first deals with the required output of trained patterns with predefined settings, while the second tolerates output generation dynamically, with tuning capability for any newly applied input. This tuning comes in the form of an adjustment to the threshold values. Two levels of supportive net were studied: one implements an extended additional layer with an adjustable neuronal threshold setting mechanism, while the second implements an auxiliary net with traditional architecture that performs dynamic adjustment of the threshold value of the main net, which is constructed in a dual-layer architecture. Experimental results and analysis of the proposed designs have been quite satisfactory. The supportive layer approach achieved over 90% recognition rate, while the multiple network technique shows a more effective and acceptable level of recognition. However, this is achieved at the price of network complexity and computation time. Recognition generalization may also be improved by combining all the innate structures with intelligence capabilities, subject to further advanced learning phases.
Keywords: Classification, Recognition, Neural Networks, Pattern Recognition, Generalization.
365 Identification of Outliers in Flood Frequency Analysis: Comparison of Original and Multiple Grubbs-Beck Test
Authors: Ayesha S. Rahman, Khaled Haddad, Ataur Rahman
Abstract:
At-site flood frequency analysis is used to estimate flood quantiles when at-site record length is reasonably long. In Australia, FLIKE software has been introduced for at-site flood frequency analysis. The advantage of FLIKE is that, for a given application, the user can compare a number of most commonly adopted probability distributions and parameter estimation methods relatively quickly using a windows interface. The new version of FLIKE has been incorporated with the multiple Grubbs and Beck test which can identify multiple numbers of potentially influential low flows. This paper presents a case study considering six catchments in eastern Australia which compares two outlier identification tests (original Grubbs and Beck test and multiple Grubbs and Beck test) and two commonly applied probability distributions (Generalized Extreme Value (GEV) and Log Pearson type 3 (LP3)) using FLIKE software. It has been found that the multiple Grubbs and Beck test when used with LP3 distribution provides more accurate flood quantile estimates than when LP3 distribution is used with the original Grubbs and Beck test. Between these two methods, the differences in flood quantile estimates have been found to be up to 61% for the six study catchments. It has also been found that GEV distribution (with L moments) and LP3 distribution with the multiple Grubbs and Beck test provide quite similar results in most of the cases; however, a difference up to 38% has been noted for flood quantiles for annual exceedance probability (AEP) of 1 in 100 for one catchment. This finding needs to be confirmed with a greater number of stations across other Australian states.
Keywords: Floods, FLIKE, probability distributions, flood frequency, outlier.
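As context for the comparison above, the original (single) Grubbs-Beck screen for potentially influential low flows can be sketched as follows. The 10% significance critical-value approximation used here is one commonly quoted in flood-frequency guidance and is an assumption for illustration, not the FLIKE implementation; the multiple Grubbs-Beck test generalizes this screen to several low flows.

```python
# Single Grubbs-Beck low-outlier screen on log10 flows. The critical-value
# approximation K_N below is a commonly quoted 10%-significance form; treat it
# as an assumption rather than the FLIKE internals.
import numpy as np

def grubbs_beck_low_threshold(flows):
    logq = np.log10(np.asarray(flows, dtype=float))
    n = logq.size
    k_n = -0.9043 + 3.345 * np.sqrt(np.log10(n)) - 0.4046 * np.log10(n)
    x_low = logq.mean() - k_n * logq.std(ddof=1)   # threshold in log space
    return 10 ** x_low

flows = [820, 640, 1450, 90, 730, 1120, 980, 15, 870, 1310, 760, 690]  # m^3/s, synthetic
threshold = grubbs_beck_low_threshold(flows)
low_outliers = [q for q in flows if q < threshold]
```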
364 Intelligent Assistive Methods for Diagnosis of Rheumatoid Arthritis Using Histogram Smoothing and Feature Extraction of Bone Images
Authors: SP. Chokkalingam, K. Komathy
Abstract:
Advances in the field of image processing envision a new era of evaluation techniques and application of procedures in various fields. One such field is the biomedical field, for prognosis as well as diagnosis of diseases. Although this plethora of methods provides a wide range of options to select from, it also creates confusion in selecting the apt process and in finding which one is more suitable. Our objective is to use a series of techniques on bone scans so as to detect the occurrence of rheumatoid arthritis (RA) as accurately as possible. Amongst other techniques existing in the field, our proposed system tends to be more effective as it depends on new methodologies that have been proven to be better and more consistent than others. Computer-aided diagnosis will provide a more accurate and consistent rate that will help to improve the efficiency of the system. The image first undergoes histogram smoothing and specification, a morphing operation, boundary detection by an edge following algorithm, and finally image subtraction to determine the presence of rheumatoid arthritis in a more efficient and effective way. Using preprocessing, noise is removed from the images; using segmentation, the region of interest is found; and histogram smoothing is applied to a specific portion of the images. Gray level co-occurrence matrix (GLCM) features such as mean, median, energy and correlation, together with Bone Mineral Density (BMD), are then extracted and stored in the database. This dataset is trained with inflamed and non-inflamed values, and with the help of a neural network all new images are checked for their status; a rough set is implemented for further reduction.
Keywords: Computer Aided Diagnosis, Edge Detection, Histogram Smoothing, Rheumatoid Arthritis.
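As an illustration of the GLCM feature extraction step, the scikit-image sketch below computes energy and correlation from a placeholder region of interest. The ROI, distances and angles are assumptions (scikit-image 0.19 and later spells the functions graycomatrix/graycoprops), and BMD and the neural-network/rough-set stages are not shown.

```python
# GLCM texture features with scikit-image (>= 0.19 uses the 'gray*' spellings).
# The synthetic 8-bit image stands in for a preprocessed bone-scan region of
# interest; BMD and the classification stage are outside this sketch.
import numpy as np
from skimage.feature import graycomatrix, graycoprops

rng = np.random.default_rng(0)
roi = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)   # placeholder ROI

glcm = graycomatrix(roi, distances=[1], angles=[0, np.pi / 2],
                    levels=256, symmetric=True, normed=True)

features = {
    "energy": graycoprops(glcm, "energy").mean(),
    "correlation": graycoprops(glcm, "correlation").mean(),
    "mean_intensity": roi.mean(),
    "median_intensity": float(np.median(roi)),
}
```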
363 Analysis of Aiming Performance for Games Using Mapping Method of Corneal Reflections Based on Two Different Light Sources
Authors: Yoshikazu Onuki, Itsuo Kumazawa
Abstract:
The fundamental motivation of this paper is how gaze estimation can be utilized effectively in an application to games. In games, precise estimation is not always important for aiming at targets, but the ability to move a cursor to an aiming target accurately is also significant. Incidentally, from a game production point of view, expressing head movement and gaze movement separately sometimes becomes advantageous for expressing a sense of presence. A representative example is panning a background image according to head movement while moving a cursor according to gaze movement. On the other hand, the widely used technique of POG estimation is based on the relative position between the center of the corneal reflection of infrared light sources and the center of the pupil. However, the calculation of the center of the pupil requires relatively complicated image processing, and therefore a calculation delay is a concern, since minimizing the delay in inputting data is one of the most significant requirements in games. In this paper, a method to estimate head movement by using only the corneal reflections of two infrared light sources in different locations is proposed. Furthermore, a method to control a cursor using gaze movement as well as head movement is proposed. The proposed methods are evaluated using game-like applications and, as a result, performance similar to conventional methods is confirmed, and aiming control with lower computational cost and stress-free intuitive operation is obtained.
Keywords: Point-of-gaze, gaze estimation, head movement, corneal reflections, two infrared light sources, game.
362 Evaluation of a New Method for Detection of Kidney Stone during Laparoscopy Using 3D Conceptual Modeling
Authors: Elnaz Afshari, Siamak Najarian, Naser Simforoosh, Siamak Hajizadeh Farkoush
Abstract:
Minimally invasive surgery (MIS) is now being widely used as a preferred choice for various types of operations. The need to detect various tactile properties justifies the key role of tactile sensing, which is currently missing in MIS. In this regard, laparoscopy is one of the methods of minimally invasive surgery that can be used in kidney stone removal surgeries. At present, determining the exact location of the stone during laparoscopy is one of the limitations of this method for which no scientific solution has been found so far. Artificial tactile sensing is a new method for obtaining the characteristics of a hard object embedded in a soft tissue. Artificial palpation is an important application of artificial tactile sensing that can be used in different types of surgeries. In this study, a new method for determining the exact location of the stone during laparoscopy is presented. The effects of the presence of a stone on the surface of the kidney were investigated using a conceptual 3D model of a kidney containing a simulated stone. Having imitated palpation and modeled it conceptually, indications of the stone's existence that appear on the surface of the kidney were determined. A number of different cases were created and solved by the software, and using stress distribution contours and stress graphs it is illustrated that the created stress patterns on the surface of the kidney show not only the existence of a stone inside, but also its exact location. Thus, three-dimensional analysis leads to a novel method of predicting the exact location of the stone and can be directly applied to the incorporation of tactile sensing in artificial palpation, helping surgeons in non-invasive procedures.
Keywords: Kidney Stone, Laparoscopic Surgery, Artificial Tactile Sensing, Finite Element Method.
361 A Process of Forming a Single Competitive Factor in the Digital Camera Industry
Authors: Kiyohiro Yamazaki
Abstract:
This paper considers the process by which a single competitive factor forms in the digital camera industry, from the viewpoint of the product platform. To make product development easier for companies and to increase product introduction ratios, development efforts concentrate on improving and strengthening certain product attributes, and in this process the product platform is formed continuously. It is pointed out that the formation of this product platform raises the product development efficiency of individual companies but, on the other hand, involves a trade-off by causing the unification of competitive factors in the whole industry. This research analyzes product specification data collected from the web pages of digital camera companies. Specifically, this research collected all product specification data released in Japan from 1995 to 2003, analyzed the composition of image sensors and optical lenses, identified product platforms shared by multiple products, and discussed their application. As a result, this research found that product platformation arose in the development of standard products for major market segments. Every major company has made product platforms of image sensors and optical lenses, and as a result, the competitive factors were unified across the entire industry through product platformation. In other words, product platformation brought product development efficiency to individual firms; however, it also caused the industry's competitive factors to be unified.
Keywords: Digital camera industry, product evolution trajectory, product platform, unification of competitive factors.
360 Effective Stacking of Deep Neural Models for Automated Object Recognition in Retail Stores
Authors: Ankit Sinha, Soham Banerjee, Pratik Chattopadhyay
Abstract:
Automated product recognition in retail stores is an important real-world application in the domain of Computer Vision and Pattern Recognition. In this paper, we consider the problem of automatically identifying the classes of the products placed on racks in retail stores from an image of the rack and information about the query/product images. We improve upon the existing approaches in terms of effectiveness and memory requirement by developing a two-stage object detection and recognition pipeline comprising of a Faster-RCNN-based object localizer that detects the object regions in the rack image and a ResNet-18-based image encoder that classifies the detected regions into the appropriate classes. Each of the models is fine-tuned using appropriate data sets for better prediction and data augmentation is performed on each query image to prepare an extensive gallery set for fine-tuning the ResNet-18-based product recognition model. This encoder is trained using a triplet loss function following the strategy of online-hard-negative-mining for improved prediction. The proposed models are lightweight and can be connected in an end-to-end manner during deployment to automatically identify each product object placed in a rack image. Extensive experiments using Grozi-32k and GP-180 data sets verify the effectiveness of the proposed model.
Keywords: Retail stores, Faster-RCNN, object localization, ResNet-18, triplet loss, data augmentation, product recognition.
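A minimal PyTorch sketch of the recognition stage described above, a ResNet-18 encoder trained with a triplet margin loss, is given below. The batch shapes, margin and the plain TripletMarginLoss are illustrative assumptions, and the online hard-negative mining used in the paper is not reproduced.

```python
# Sketch of the recognition stage: a ResNet-18 encoder trained with a triplet
# margin loss (torchvision >= 0.13 API). Shapes and the margin are illustrative;
# the paper's online hard-negative mining is not shown here.
import torch
import torch.nn as nn
from torchvision import models

encoder = models.resnet18(weights=None)
encoder.fc = nn.Identity()                  # use the 512-d pooled feature as the embedding

triplet_loss = nn.TripletMarginLoss(margin=0.2)

anchor = torch.randn(8, 3, 224, 224)        # query/product crops
positive = torch.randn(8, 3, 224, 224)      # same product, augmented
negative = torch.randn(8, 3, 224, 224)      # different product from the rack

emb_a, emb_p, emb_n = encoder(anchor), encoder(positive), encoder(negative)
loss = triplet_loss(emb_a, emb_p, emb_n)
loss.backward()
```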
359 Engineering of E-Learning Content Creation: Case Study for African Countries
Authors: María-Dolores Afonso-Suárez, Nayra Pumar-Carreras, Juan Ruiz-Alzola
Abstract:
This research addresses the use of an e-learning creation methodology for learning objects. Throughout the process, indicators are gathered to determine whether it responds to the main objectives of an engineering discipline. These parameters will also indicate whether it is necessary to review the creation cycle and readjust any phase. Within the project developed for this study, apart from the use of structured methods, there has been a central objective: the establishment of a learning atmosphere, a place where all the professionals involved are able to collaborate, plan, solve problems and determine guides to follow in order to develop creative and innovative solutions. It has been outlined as a blended learning program with an assessment plan that proposes face-to-face lessons, coaching, collaboration, multimedia and web-based learning objects as well as support resources. The project has been planned as a long-term task; the pilot teaching actions designed provide the preliminary results that are the object of this study. This methodology is being used in the creation of learning content for the African countries of Senegal, Mauritania and Cape Verde. It has been developed within the framework of MACbioIDi, an Interreg European project for international cooperation and development. The educational area of this project is focused on the training and advising of medical professionals as well as engineers in the use of medical imaging technology applications, specifically the 3DSlicer application and the Open Anatomy Browser.
Keywords: Teaching contents engineering, e-learning, blended learning, international cooperation, 3DSlicer, open anatomy browser.
358 Extracting the Coupled Dynamics in Thin-Walled Beams from Numerical Data Bases
Authors: Mohammad A. Bani-Khaled
Abstract:
In this work we use the Discrete Proper Orthogonal Decomposition transform to characterize the properties of coupled dynamics in thin-walled beams by exploiting numerical databases obtained from finite element simulations. The outcomes of this work will improve our understanding of the linear and nonlinear coupled behavior of thin-walled beam structures. Thin-walled beams have widespread usage in modern engineering applications, both in large-scale structures (aeronautical structures) and in nano-structures (nano-tubes). Therefore, detailed knowledge of the properties of coupled vibrations and buckling in these structures is of great interest to the research community. Due to the geometric complexity of the overall structure, and in particular of the cross-sections, it is necessary to involve computational mechanics to numerically simulate the dynamics. When using numerical computational techniques, it is not necessary to oversimplify a model in order to solve the equations of motion. Computational dynamics methods produce databases of controlled resolution in time and space. These numerical databases contain information on the properties of the coupled dynamics. In order to extract the system dynamic properties and the strength of coupling among the various fields of the motion, processing techniques are required. The time-Proper Orthogonal Decomposition transform is a powerful tool for processing databases for the dynamics; it will be used to study the coupled dynamics of thin-walled basic structures. These structures are ideal for forming a basis for a systematic study of coupled dynamics in structures of complex geometry.
Keywords: Coupled dynamics, geometric complexity, Proper Orthogonal Decomposition (POD), thin walled beams.
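As a rough illustration of how a POD basis is extracted from a numerical database, the sketch below applies the snapshot SVD to a synthetic FE displacement matrix. The matrix sizes and data are placeholders, and the interpretation of the modes in terms of coupled dynamics follows the paper, not the code.

```python
# Snapshot POD via the SVD: columns of the snapshot matrix are FE displacement
# fields at successive time steps (synthetic here). Mode energies come from the
# squared singular values.
import numpy as np

rng = np.random.default_rng(0)
n_dof, n_snapshots = 300, 120
snapshots = rng.standard_normal((n_dof, n_snapshots))     # placeholder FE database

mean_field = snapshots.mean(axis=1, keepdims=True)
fluctuations = snapshots - mean_field

modes, sing_vals, _ = np.linalg.svd(fluctuations, full_matrices=False)
energy_fraction = sing_vals**2 / np.sum(sing_vals**2)     # relative energy per POD mode
reduced_basis = modes[:, :5]                              # retain the dominant modes
```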
357 Modeling and Simulation of Ship Structures Using Finite Element Method
Authors: Javid Iqbal, Zhu Shifan
Abstract:
Developments in the construction of unconventional ships and the implementation of lightweight materials have given a large impulse to the finite element (FE) method, making it a general tool for ship design. This paper briefly presents the modeling and analysis techniques of ship structures using the FE method for complex boundary conditions which are difficult to analyze with existing Ship Classification Society rules. During operation, all ships experience complex loading conditions. These loads are generally categorized into thermal loads, linear static, dynamic and non-linear loads. The general strength of the ship structure is analyzed using static FE analysis. The FE method is also suitable for considering the local loads generated by ballast tanks and cargo in addition to hydrostatic and hydrodynamic loads. Vibration analysis of a ship structure and its components can be performed using the FE method, which helps in obtaining the dynamic stability of the ship. The FE method has developed better techniques for the calculation of natural frequencies and the different mode shapes of a ship structure, to avoid resonance both globally and locally. There has been considerable development towards ideal design in the ship industry over the past few years, solving complex engineering problems by employing the data stored in the FE model. This paper provides an overview of the ship modeling methodology for FE analysis and its general application. The historical background, the basic concept of FE, and the advantages and disadvantages of FE analysis are also reported, along with examples related to hull strength and structural components.
Keywords: Dynamic analysis, finite element methods, ship structure, vibration analysis.
356 Color Characteristics of Dried Cocoa Using Shallow Box Fermentation Technique
Authors: Khairul Bariah Sulaiman, Tajul Aris Yang
Abstract:
Fermentation is well known as an essential process to develop chocolate flavor in dried cocoa beans. Besides developing the precursors of cocoa flavor, it also induces color changes in the beans. The fermentation process is influenced by various factors such as the planting material, the preconditioning of the cocoa pods and the fermentation technique. Therefore, this study was conducted to evaluate the color of Malaysian cocoa beans and how the duration of pod storage and fermentation using the shallow box technique affect their color characteristics. Two factors were studied, i.e. the duration of cocoa pod storage (0, 2, 4 and 6 days) and the duration of cocoa fermentation (0, 1, 2, 3, 4 and 5 days). The experiment is arranged in a 4 x 6 factorial design with 24 treatments in a Completely Randomised Design (CRD). The produced beans are inspected for color changes under artificial light during the cut test and divided into four color groups, namely fully brown, purple brown, fully purple and slaty. Cut tests indicated that cocoa beans which are dried directly without undergoing fermentation have the highest slaty percentage. However, pod storage before the fermentation process is found to decrease the slaty percentage. In contrast, the percentage of fully brown beans starts to dominate after two days of fermentation, especially in the four- and six-day pod storage batches. Almost all batches of cocoa beans have a fully purple percentage of less than 20%. Interestingly, the percentages of purple brown beans are scattered across the entire bean batches without any specific trend. Meanwhile, statistical analysis using a General Linear Model showed that pod storage has a significant effect on the color characteristics of the Malaysian dried beans compared to fermentation duration.
Keywords: Cocoa beans, color, fermentation, shallow box.
355 Simulation of Utility Accrual Scheduling and Recovery Algorithm in Multiprocessor Environment
Authors: A. Idawaty, O. Mohamed, A. Z. Zuriati
Abstract:
This paper presents the development of an event-based Discrete Event Simulation (DES) for a recovery algorithm known as Backward Recovery Global Preemptive Utility Accrual Scheduling (BR_GPUAS). This algorithm implements the Backward Recovery (BR) mechanism as a fault recovery solution under the existing Time/Utility Function/Utility Accrual (TUF/UA) scheduling domain for a multiprocessor environment. The BR mechanism attempts to take the faulty tasks back to their initial safe state and then proceeds to re-execute the affected section of the faulty tasks to enable recovery. Considering that faults may occur in the components of any system, a fault tolerance system that can nullify the erroneous effect needs to be developed. Current TUF/UA scheduling algorithms use the abortion recovery mechanism and simply abort the erroneous task as their fault recovery solution. None of the existing algorithms in the TUF/UA scheduling domain for the multiprocessor environment have considered transient faults and implemented the BR mechanism as a fault recovery mechanism to nullify the erroneous effect and solve the recovery problem in this domain. The developed BR_GPUAS simulator derives the set of parameters, events and performance metrics according to a detailed analysis of the base model. Simulation results revealed that the BR_GPUAS algorithm can save almost 20-30% of the accumulated utility, making it reliable and efficient for real-time applications in the multiprocessor scheduling environment.
Keywords: Time Utility Function/ Utility Accrual (TUF/UA) scheduling, Real-time system (RTS), Backward Recovery, Multiprocessor, Discrete Event Simulation (DES).