Search results for: geographic location inquiry service
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 2055


105 Plasma Spraying of 316 Stainless Steel on Aluminum and Investigation of Coat/Substrate Interface

Authors: P. Abachi, T. W. Coyle, P. S. Musavi Gharavi

Abstract:

By applying a coating onto a structural component, the corrosion and/or wear resistance requirements of the surface can be fulfilled. Since the layer adhesion of the coating influences the mechanical integrity of the coat/substrate interface during service, it should be examined accurately. In the present work, the tensile bonding strength of a 316 stainless steel plasma-sprayed coating on an aluminum substrate was determined using tensile adhesion test (TAT) specimens. The interfacial fracture toughness was determined using four-point bend specimens containing a saw notch and modified chevron-notched short-bar (SB) specimens. The coating microstructure and fractured specimen surfaces were examined using scanning electron and optical microscopy. The investigation of the coated surface after the tensile adhesion test indicates that the failure mechanism is mostly cohesive and rarely of the adhesive type. The calculated value of the critical strain energy release rate suggests a relatively good interface condition. The four-point bending test appears to offer a potentially more sensitive means for evaluating the mechanical integrity of coating/substrate interfaces than is possible with the tensile test. The fracture toughness value reported for the modified chevron-notched short-bar specimen testing cannot be taken as an absolute value because its calculation is based on the minimum stress intensity coefficient value suggested for the fracture toughness determination of homogeneous parts in the ASTM E1304-97 standard.
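
As a point of reference for the strain energy release rate discussed above, interfacial fracture toughness and critical strain energy release rate are linked by the standard plane-strain conversion G = K^2 (1 - nu^2) / E. The sketch below illustrates only that conversion; the material values are placeholders, not measurements from the paper.

```python
# Illustrative sketch (not the authors' code): converting a fracture
# toughness K_Ic to a critical strain energy release rate G_Ic under
# plane-strain conditions, G = K^2 * (1 - nu^2) / E.
# Material values below are placeholders, not data from the paper.

def strain_energy_release_rate(k_ic_mpa_sqrt_m: float,
                               youngs_modulus_gpa: float,
                               poissons_ratio: float) -> float:
    """Return G_Ic in J/m^2 for plane strain."""
    k = k_ic_mpa_sqrt_m * 1e6          # MPa*sqrt(m) -> Pa*sqrt(m)
    e = youngs_modulus_gpa * 1e9       # GPa -> Pa
    return k**2 * (1 - poissons_ratio**2) / e

# Example with placeholder values for an aluminum substrate:
print(strain_energy_release_rate(2.0, 70.0, 0.33))  # ~51 J/m^2
```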

Keywords: Bonding strength, four-point bend test, interfacial fracture toughness, modified chevron-notched short-bar specimen, plasma sprayed coating.

104 A Novel Approach to Allocate Channels Dynamically in Wireless Mesh Networks

Authors: Y. Harold Robinson, M. Rajaram

Abstract:

Wireless mesh networking is rapidly gaining in popularity with a variety of users: from municipalities to enterprises, from telecom service providers to public safety and military organizations. This increasing popularity is based on two basic facts: ease of deployment and increase in network capacity expressed in bandwidth per footage; WMNs do not rely on any fixed infrastructure. Many efforts have been devoted to maximizing the throughput of multi-channel multi-radio wireless mesh networks, but current approaches are based on either purely static or purely dynamic channel allocation. In this paper, we use a hybrid multi-channel multi-radio wireless mesh networking architecture, where static and dynamic interfaces are built into the nodes. The proposed Dynamic Adaptive Channel Allocation protocol (DACA) considers optimization of both throughput and delay in the channel allocation. Channel assignment is made codependent with the routing problem in the wireless mesh network and is based on the traffic flow on every link. Temporal and spatial variations in traffic require the channel assignment to be recomputed every time the pattern in the mesh network changes. The proposed routing protocol computes paths that capture the available path bandwidth over both static and dynamic links. The consistency property guarantees that each node makes an appropriate packet forwarding decision while balancing the control usage of the network, so that a data packet will traverse the right path.
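
The flavor of such interference-aware assignment can be conveyed with a minimal greedy sketch: each link is given the channel with the fewest already-assigned interfering links. This illustrates the general idea only, not the DACA protocol; the link list, channel set and toy interference model are assumptions.

```python
# Minimal greedy sketch of channel assignment in a multi-radio mesh.
# An illustration of the general idea, not the DACA protocol itself;
# links, channels and the interference model are hypothetical.

def greedy_assign(links, channels, interferes):
    """Assign each link the channel that currently has the fewest
    interfering links already using it."""
    assignment = {}
    for link in links:
        load = {c: sum(1 for other, ch in assignment.items()
                       if ch == c and interferes(link, other))
                for c in channels}
        assignment[link] = min(channels, key=lambda c: load[c])
    return assignment

# Toy interference model: two links interfere if they share a node.
links = [("A", "B"), ("B", "C"), ("C", "D"), ("A", "D")]
interferes = lambda l1, l2: bool(set(l1) & set(l2))
print(greedy_assign(links, channels=[1, 6, 11], interferes=interferes))
```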

Keywords: Wireless mesh network, spatial time division multiple access, hybrid topology, timeslot allocation.

103 Productivity Effect of Urea Deep Placement Technology: An Empirical Analysis from Irrigation Rice Farmers in the Northern Region of Ghana

Authors: Shaibu Baanni Azumah, Ignatius Tindjina, Stella Obanyi, Tara N. Wood

Abstract:

This study examined the effect of Urea Deep Placement (UDP) technology on the output of irrigated rice farmers in the northern region of Ghana. A multi-stage sampling technique was used to select 142 rice farmers from the Golinga and Bontanga irrigation schemes, around Tamale. A treatment effect model was estimated in two stages: first, to determine the factors that influenced farmers’ decision to adopt the UDP technology, and second, to determine the effect of the adoption of the UDP technology on the output of rice farmers. The significant variables that influenced rice farmers’ adoption of the UDP technology were the sex of the farmer, land ownership, off-farm activity, extension service, farmer group participation and training. The results also revealed that farm size and the adoption of UDP technology significantly influenced the output of rice farmers in the northern region of Ghana. In addition to the potential of the technology to improve yields, it also presents an employment opportunity for women and youth, who are engaged in the deep placement of Urea Super Granules (USG), as well as in the transplantation of rice. It is recommended that the government of Ghana work closely with the IFDC to embed the UDP technology in the national agricultural programmes and policies. The study also recommends an effective collaboration between the government, through the Ministry of Food and Agriculture (MoFA), and the International Fertilizer Development Center (IFDC) to train agricultural extension agents on UDP technology in the rice producing areas of the country.
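
The two-stage structure described above can be sketched as a probit for the adoption decision followed by an outcome regression that includes the adoption dummy. This is a simplified illustration, not the authors' exact specification (a full treatment effect model also carries a selection-correction term between the stages), and all column names are hypothetical.

```python
# Simplified two-stage sketch of a treatment-effect-style estimation
# using statsmodels. Column names and the CSV file are hypothetical.
import pandas as pd
import statsmodels.api as sm

df = pd.read_csv("rice_farmers.csv")   # hypothetical survey file

# Stage 1: factors influencing the decision to adopt UDP.
X1 = sm.add_constant(df[["sex", "land_ownership", "off_farm",
                         "extension", "group_member", "training"]])
adoption = sm.Probit(df["udp_adopted"], X1).fit()
print(adoption.summary())

# Stage 2: effect of adoption (and farm size) on rice output.
X2 = sm.add_constant(df[["udp_adopted", "farm_size"]])
outcome = sm.OLS(df["rice_output"], X2).fit()
print(outcome.summary())
```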

Keywords: Northern Ghana, output, irrigation rice farmers, treatment effect model, urea deep placement.

102 Transportation Mode Choice Analysis for Accessibility of the Mehrabad International Airport by Statistical Models

Authors: N. Mirzaei Varzeghani, M. Saffarzadeh, A. Naderan, A. Taheri

Abstract:

Countries are progressing, and the world's busiest airports see year-on-year increases in travel demand. Passenger acceptance of an airport depends on its appeal, which includes the routes between the city and the airport as well as the facilities for reaching it. One of the critical roles of transportation planners is to predict future transportation demand so that an integrated, multi-purpose system can be provided and diverse modes of transportation (rail, air, and land) can be delivered to a destination like an airport. In this study, 356 questionnaires were filled out in person over six days. First, the attraction of business and non-business trips was studied using the data and a linear regression model. Lower travel costs, more passengers aged 55 and older using this airport, and other factors are essential for business trips. Non-business travelers, on the other hand, prioritized using personal vehicles to get to the airport and having convenient access to it. Business travelers are also less price-sensitive than non-business travelers regarding airport travel. Furthermore, carrying additional luggage (for example, more than one suitcase per person) markedly decreases the attractiveness of public transit. Afterward, based on the mode and purpose of the trip, the locations with the highest trip generation to the airport were identified. The district generating the most trips was Tehran's District 2, with 23 trips, and the most popular mode of transportation from that location was an online taxi, with 12 trips. Then, the variables significant for separating and describing the behavior of travel modes for accessing the airport were investigated for all systems. In this scenario, the most crucial factor is the time it takes to get to the airport, followed by the mode's user-friendliness as a component of passenger preference. It was also demonstrated that improving public transportation travel times reduces private transportation's market share, including taxicabs. Based on the responses of personal and semi-public vehicle users, the willingness of passengers to access the airport via public transportation systems was explored in order to enhance present techniques and develop new strategies for providing the most efficient modes of transportation. Using a binary model, it was clear that business travelers and people who had already driven to the airport were the least likely to change.
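
The binary model mentioned above can be illustrated with a logit sketch of the switch-to-transit decision. This is a hedged illustration assuming hypothetical survey fields, not the study's actual variables or specification.

```python
# Sketch of a binary mode-choice model: a logit for whether a traveler
# would use public transport for airport access. Fields are hypothetical.
import pandas as pd
import statsmodels.api as sm

df = pd.read_csv("airport_survey.csv")   # hypothetical 356-response file
X = sm.add_constant(df[["access_time_min", "ease_of_use_score",
                        "is_business_trip", "bags_per_person"]])
model = sm.Logit(df["would_use_transit"], X).fit()
print(model.summary())

# The sign and significance of `is_business_trip` would indicate how much
# less likely business travelers are to change modes, as reported above.
```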

Keywords: Multimodal transportation, travel behavior, demand modeling, statistical models.

101 ZMP Based Reference Generation for Biped Walking Robots

Authors: Kemalettin Erbatur, Özer Koca, Evrim Taşkıran, Metin Yılmaz, Utku Seven

Abstract:

The last fifteen years have witnessed fast improvements in the field of humanoid robotics. The human-like robot structure is better suited to human environments, with superior obstacle avoidance properties when compared with wheeled service robots. However, walking control for bipedal robots is a challenging task due to their complex dynamics. Stable reference generation plays a very important role in control. The Linear Inverted Pendulum Model (LIPM) and the Zero Moment Point (ZMP) criterion are applied in a number of studies for stable walking reference generation of biped walking robots. This paper follows this main approach too. We propose a natural and continuous ZMP reference trajectory for a stable and human-like walk. The ZMP reference trajectories move forward under the sole of the support foot when the robot body is supported by a single leg. The robot center of mass trajectory is obtained from the predefined ZMP reference trajectories by a Fourier series approximation method. The Gibbs phenomenon problem common with Fourier approximations of discontinuous functions is avoided by employing continuous ZMP references. Also, these ZMP reference trajectories possess pre-assigned single and double support phases, which are very useful in experimental tuning work. The ZMP based reference generation strategy is tested via three-dimensional full-dynamics simulations of a 12-degrees-of-freedom biped robot model. Simulation results indicate that the proposed reference trajectory generation technique is successful.
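
The Fourier mapping at the heart of this approach follows from the LIPM ZMP relation p(t) = x(t) - (z_c/g) * x''(t): each CoM harmonic is the corresponding ZMP harmonic divided by (1 + z_c * w_k^2 / g). The sketch below demonstrates that inversion on a placeholder periodic ZMP reference; the gait parameters are assumptions, not the paper's robot values.

```python
# Minimal sketch of the Fourier-series idea: recover a CoM trajectory
# from a periodic ZMP reference via harmonic-wise LIPM inversion.
# Gait parameters below are placeholders, not the paper's robot values.
import numpy as np

g, z_c, T = 9.81, 0.6, 2.0            # gravity, CoM height, gait period [s]
N = 512
t = np.linspace(0.0, T, N, endpoint=False)

# Continuous, periodic ZMP reference (placeholder waveform).
p_ref = 0.05 * np.sin(2 * np.pi * t / T) + 0.01 * np.sin(6 * np.pi * t / T)

P = np.fft.rfft(p_ref)
w = 2 * np.pi * np.fft.rfftfreq(N, d=T / N)   # angular frequency per bin
X = P / (1.0 + z_c * w**2 / g)                # harmonic-wise LIPM inversion
x_com = np.fft.irfft(X, n=N)

# Sanity check: the reconstructed ZMP from x_com matches the reference.
x_dd = np.fft.irfft(-(w**2) * X, n=N)
assert np.allclose(x_com - (z_c / g) * x_dd, p_ref, atol=1e-9)
```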

Keywords: Biped robot, Linear Inverted Pendulum Model, Zero Moment Point, Fourier series approximation.

100 Design of a Hand-Held, Clamp-on, Leakage Current Sensor for High Voltage Direct Current Insulators

Authors: Morné Roman, Robert van Zyl, Nishanth Parus, Nishal Mahatho

Abstract:

Leakage current monitoring for high voltage transmission line insulators is of interest as a performance indicator. Presently, to the best of our knowledge, there is no commercially available, clamp-on type, non-intrusive device for measuring leakage current on energised high voltage direct current (HVDC) transmission line insulators. The South African power utility, Eskom, is investigating the development of such a hand-held sensor for two important applications: first, for continuous real-time condition monitoring of HVDC line insulators and, second, for use by live line workers to determine if it is safe to work on energised insulators. In this paper, a DC leakage current sensor based on magnetic field sensing techniques is developed. The magnetic field sensor used in the prototype can also detect alternating current up to 5 MHz. The DC leakage current prototype detects the magnetic field associated with the current flowing on the surface of the insulator. Preliminary HVDC leakage current measurements were performed on glass insulators. The results show that the prototype can accurately measure leakage current in the specified range of 1-200 mA. The influence of external fields from the HVDC line itself on the leakage current measurements is mitigated through a differential magnetometer sensing technique. Thus, the developed sensor can perform measurements on in-service HVDC insulators. The research contributes to the body of knowledge by providing a sensor to measure leakage current on energised HVDC insulators non-intrusively. This sensor can also be used by live line workers to inform them whether or not it is safe to perform maintenance on energised insulators.
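
To make the differential idea concrete: with two magnetometers at different distances from the current path, a uniform external field appears identically in both readings and cancels on subtraction, while the local field of the leakage current, B(r) = mu0 * I / (2 * pi * r) for a long straight conductor, does not. The geometry and readings below are hypothetical, not the prototype's.

```python
# Illustrative sketch of differential magnetic sensing: subtracting two
# readings cancels a uniform external field and leaves the local current's
# field. Geometry and field values are hypothetical.
import math

MU0 = 4e-7 * math.pi  # vacuum permeability [T*m/A]

def leakage_current(b_near_t, b_far_t, r_near_m, r_far_m):
    """Estimate current [A] from a differential pair of field readings."""
    delta_b = b_near_t - b_far_t        # uniform external field cancels
    return 2 * math.pi * delta_b / (MU0 * (1.0 / r_near_m - 1.0 / r_far_m))

# 50 mA seen at 5 mm and 10 mm, plus an identical 30 uT external offset:
b1 = MU0 * 0.05 / (2 * math.pi * 0.005) + 30e-6
b2 = MU0 * 0.05 / (2 * math.pi * 0.010) + 30e-6
print(f"{leakage_current(b1, b2, 0.005, 0.010) * 1e3:.1f} mA")  # ~50.0 mA
```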

Keywords: Direct current, insulator, leakage current, live line, magnetic field, sensor, transmission lines.

99 Detecting Tomato Flowers in Greenhouses Using Computer Vision

Authors: Dor Oppenheim, Yael Edan, Guy Shani

Abstract:

This paper presents an image analysis algorithm to detect and count yellow tomato flowers in a greenhouse with uneven illumination conditions, complex growth conditions and different flower sizes. The algorithm is designed to be employed on a drone that flies in greenhouses to accomplish several tasks such as pollination and yield estimation. Detecting the flowers can provide useful information for the farmer, such as the number of flowers in a row and the number of flowers that were pollinated since the last visit to the row. The developed algorithm is designed to handle the real-world difficulties in a greenhouse, which include varying lighting conditions, shadowing, and occlusion, while considering the computational limitations of the simple processor on the drone. The algorithm identifies flowers using an adaptive global threshold, segmentation over the HSV color space, and morphological cues. The adaptive threshold divides the images into darker and lighter images. Then, segmentation on the hue, saturation and value channels is performed accordingly, and classification is done according to the size and location of the flowers. 1069 images of greenhouse tomato flowers were acquired in a commercial greenhouse in Israel, using two different RGB cameras: an LG G4 smartphone and a Canon PowerShot A590. The images were acquired from multiple angles and distances and were sampled manually at various periods throughout the day to obtain varying lighting conditions. Ground truth was created by manually tagging approximately 25,000 individual flowers in the images. Sensitivity analyses on the acquisition angle of the images, periods throughout the day, the different cameras and thresholding types were performed. Precision, recall and their derived F1 score were calculated. Results indicate better performance for the view angle facing the flowers than for any other angle. Acquiring images in the afternoon resulted in the best precision and recall results. Applying a global adaptive threshold improved the median F1 score by 3%. Results showed no difference between the two cameras used. Using hue values of 0.12-0.18 in the segmentation process provided the best results in precision and recall, and the best F1 score. The precision and recall averages for all the images when using these values were 74% and 75% respectively, with an F1 score of 0.73. Further analysis showed a 5% increase in precision and recall when analyzing images acquired in the afternoon and from the front viewpoint.
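
A condensed sketch of the segmentation stage described above: HSV conversion, a hue window matching the paper's 0.12-0.18 range (roughly 22-32 on OpenCV's 0-179 hue scale), morphological cleanup, and a size filter as a simple morphological cue. The saturation/value bounds, minimum blob area, and file name are placeholder assumptions.

```python
# Sketch of HSV-based yellow flower segmentation with a size filter.
# The hue window follows the paper's 0.12-0.18 range; other thresholds
# and the input file are placeholders.
import cv2
import numpy as np

def detect_flowers(bgr_image, min_area_px=150):
    hsv = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2HSV)
    lower = np.array([22, 80, 80], dtype=np.uint8)    # H,S,V lower bound
    upper = np.array([32, 255, 255], dtype=np.uint8)  # H,S,V upper bound
    mask = cv2.inRange(hsv, lower, upper)
    # Clean up speckle, then keep only blobs large enough to be flowers.
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((5, 5), np.uint8))
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    return [c for c in contours if cv2.contourArea(c) >= min_area_px]

image = cv2.imread("greenhouse_row.jpg")              # hypothetical input
print(f"{len(detect_flowers(image))} candidate flowers")
```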

Keywords: Agricultural engineering, computer vision, image processing, flower detection.

98 Modern Day Second Generation Military Filipino Amerasians and Ghosts of the U.S. Military Prostitution System in West Central Luzon’s ‘AMO Amerasian Triangle’

Authors: P. C. Kutschera, Elena C. Tesoro, Mary Grace Talamera-Sandico, Jose Maria G. Pelayo III

Abstract:

Second generation military Filipino Amerasians comprise a formidable contemporary segment of the estimated 250,000-plus biracial Amerasians in the Philippines today. Overall, they are a stigmatized and socioeconomically marginalized diaspora; historically, they were abandoned or estranged by U.S. military personnel fathers assigned during the century-long Colonial, Post-World War II and Cold War era of permanent military basing (1898-1992). Indeed, U.S. military personnel are assigned in smaller numbers in the Philippines today. This inquiry is an outgrowth of two recent small sample studies. The first surfaced the impact of the U.S. military prostitution system on the formation of the ‘Derivative Amerasian Family Construct’ among first generation Amerasians; a second, qualitative case study suggested the continued effect of the prostitution system's destructive impetus on second generation Amerasians. The intent of this qualitative, multiple-case study was to actively seek out second generation sex industry toilers. The purpose was to focus further on this human phenomenon in the post-basing and post-military prostitution system eras. As background, the former military prostitution apparatus has transformed into a modern dynamic of rampant sex tourism and prostitution nationwide. This is characterized by hotels and resorts offering unrestricted carnal access, urban and provincial brothels (casas), discos, bars and pickup clubs, massage parlors, local barrio karaoke bars and street prostitution. A small case study sample (N = 4) of female and male second generation Amerasians was selected. Sample formation employed a non-probability ‘snowball’ technique drawing respondents from the notorious Angeles, Metro Manila, Olongapo City ‘AMO Amerasian Triangle’, where most former U.S. military installations were sited and modern sex tourism thrives. A six-month study and analysis of in-depth interviews of female and male sex laborers, their families and peers revealed a litany of disturbing and troublesome experiences. Results showed profiles of debilitating human poverty, histories of family disorganization, stigmatization, social marginalization and the ghost of the military prostitution system and its harmful legacy on Amerasian family units. Emerging were testimonials of wayward young people ensnared in a maelstrom of deep economic deprivation, familial dysfunction, psychological desperation and societal indifference. The paper recommends that more study is needed and that the implications of unstudied psychosocial and socioeconomic experiences of distressed younger generations of military Amerasians require specific research. Heretofore apathetic or disengaged U.S. institutions need to confront the issue and formulate activist and solution-oriented social welfare, human services and immigration easement policies and alternatives. These institutions specifically include academic and social science research agencies, corporate foundations, the U.S. Congress, and the Departments of State, Defense, Health and Human Services, and Homeland Security (i.e., Citizenship and Immigration Services), which continue to endorse a laissez-faire policy of non-involvement over the entire Filipino Amerasian question. Such apathy, the paper concludes, relegates this consequential but neglected blood progeny to the status of humiliating destitution and exploitation. Amerasians thus remain entrapped in their former colonial, and neo-colonial, habitat. Ironically, they are unwitting victims of a U.S. American homeland that fancies itself geo-politically as a strong and strategic military treaty ally of the Philippines in the Western Pacific.

Keywords: Asian Americans, Filipino Amerasians, diaspora, military prostitution, stigmatization.

97 Why Are Entrepreneurs Resistant to E-tools?

Authors: D. Ščeulovs, E. Gaile-Sarkane

Abstract:

Latvia is fourth in the world in terms of broadband internet speed. The total number of internet users in Latvia exceeds 70% of its population. The number of active mailboxes of the local internet e-mail service Inbox.lv accounts for 68% of the population and 97.6% of the total number of internet users. The Latvian portal Draugiem.lv is a social media phenomenon, used by 58.4% of the population and 83.5% of internet users. A majority of Latvian company profiles are available on social networks, the most popular being Twitter.com. These and other indicators prove that consumers and companies are actively using the Internet.

However, after analyzing in a number of studies how enterprises are employing the e-environment, namely e-environment tools, the authors arrived at conclusions that are not as flattering as the aforementioned statistics. There is an obvious contradiction between the statistical data and the actual studies. As a result, the authors have posed a question: Why are entrepreneurs resistant to e-tools? In order to answer this question, the authors have addressed the Technology Acceptance Model (TAM). The authors analyzed each phase and determined several factors affecting the use of the e-environment, reaching the main conclusion that entrepreneurs do not have a sufficient level of e-literacy (digital literacy).

The authors employ well-established quantitative and qualitative methods of research: grouping, analysis, statistical methods, factor analysis in the SPSS 20 environment, etc.

The theoretical and methodological background of the research is formed by scientific research and publications, mass media and professional literature, statistical information from official institutions, as well as information collected by the authors during the survey.

Keywords: E-environment, e-environment tools, technology acceptance model, factors.

96 Analysis of the Operational Performance of Three Unconventional Arterial Intersection Designs: Median U-Turn, Superstreet and Single Quadrant

Authors: Hana Naghawi, Khair Jadaan, Rabab Al-Louzi, Taqwa Hadidi

Abstract:

This paper aims to evaluate and compare the operational performance of three Unconventional Arterial Intersection Designs (UAIDs), the Median U-Turn, the Superstreet, and the Single Quadrant Intersection, using real traffic data. For this purpose, the heavily congested signalized intersection of Wadi Saqra in Amman was selected. The effect of implementing each of the proposed UAIDs was evaluated not only at the isolated Wadi Saqra signalized intersection, but also along the arterial road including both surrounding intersections. The operational performance of the isolated intersection was based on the level of service (LOS) expressed in terms of control delay and volume-to-capacity ratio. On the other hand, the measures used to evaluate the operational performance on the arterial road included traffic progression, stopped delay per vehicle, number of stops, and travel speed. The analysis was performed using the SYNCHRO 8 microscopic software. The simulation results showed that all three selected UAIDs outperformed the conventional intersection design in terms of control delay, but only the Single Quadrant Intersection design improved the main intersection LOS from F to B. Also, the results indicated that the Single Quadrant Intersection design resulted in an increase in average travel speed of 52% and a decrease in average stopped delay of 34% on the selected corridor when compared to the corridor with the conventional intersection design. On the basis of these results, it can be concluded that the Median U-Turn and the Superstreet do not perform best under heavy traffic volumes.
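
For readers outside traffic engineering, the LOS grades cited above map to control delay through the standard HCM thresholds for signalized intersections. The small helper below makes the scale explicit; the thresholds are the HCM's, while the delay values in the example are ours.

```python
# HCM level-of-service grades for signalized intersections, keyed to
# control delay in seconds per vehicle.

def signalized_los(control_delay_s: float) -> str:
    for threshold, grade in [(10, "A"), (20, "B"), (35, "C"),
                             (55, "D"), (80, "E")]:
        if control_delay_s <= threshold:
            return grade
    return "F"

# e.g., a drop from 95 s/veh to 18 s/veh is exactly an F -> B improvement:
print(signalized_los(95.0), "->", signalized_los(18.0))
```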

Keywords: Median U-turn, single quadrant, superstreet, unconventional arterial intersection design.

95 Heuristics Analysis for Distributed Scheduling using MONARC Simulation Tool

Authors: Florin Pop

Abstract:

Simulation is a very powerful method for high-performance and high-quality design in distributed systems, and currently perhaps the only one, considering the heterogeneity, complexity and cost of distributed systems. In Grid environments, for example, it is hard and even impossible to perform scheduler performance evaluation in a repeatable and controllable manner, as resources and users are distributed across multiple organizations with their own policies. In addition, Grid test-beds are limited, and creating an adequately-sized test-bed is expensive and time consuming. Scalability, reliability and fault-tolerance become important requirements for distributed systems in order to support distributed computation. A distributed system with such characteristics is called dependable. Large environments, like Cloud, offer unique advantages, such as low cost and dependability, and satisfy QoS for all users. Resource management in large environments requires performant scheduling algorithms guided by QoS constraints. This paper presents the performance evaluation of scheduling heuristics guided by different optimization criteria. The algorithms for distributed scheduling are analyzed in order to satisfy user constraints while considering at the same time the independent capabilities of resources. This analysis acts as a profiling step for algorithm calibration. The performance evaluation is based on simulation. The simulator is MONARC, a powerful tool for large scale distributed systems simulation. The novelty of this paper consists in synthetic analysis results that offer guidelines for scheduler service configuration and sustain empirically-based decisions. The results could be used in decisions regarding optimizations to existing Grid DAG scheduling and for selecting the proper algorithm for DAG scheduling in various actual situations.
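
As an example of the kind of heuristic typically profiled in such evaluations, the sketch below implements the classic Min-Min heuristic over an expected-time-to-compute matrix. It is a generic illustration, not MONARC code, and the matrix values are invented.

```python
# Min-Min scheduling heuristic: repeatedly schedule the task with the
# smallest minimum completion time onto its best resource.
# The ETC (expected time to compute) matrix below is hypothetical.

def min_min(etc):
    """etc[t][r] = expected run time of task t on resource r.
    Returns (assignment dict, per-resource ready times)."""
    n_tasks, n_res = len(etc), len(etc[0])
    ready = [0.0] * n_res
    unscheduled, assignment = set(range(n_tasks)), {}
    while unscheduled:
        # For each task, its best (completion time, resource) right now.
        best = {t: min((ready[r] + etc[t][r], r) for r in range(n_res))
                for t in unscheduled}
        task = min(unscheduled, key=lambda t: best[t][0])
        finish, res = best[task]
        assignment[task], ready[res] = res, finish
        unscheduled.remove(task)
    return assignment, ready

etc = [[3, 5], [2, 4], [6, 1], [4, 4]]   # 4 tasks, 2 resources
print(min_min(etc))                      # makespan = max of ready times
```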

Keywords: Scheduling, simulation, performance evaluation, QoS, distributed systems, MONARC.

94 Game-Theory-Based Downlink Spectrum Allocation in Two-Tier Networks

Authors: Yu Zhang, Ye Tian, Fang Ye, Yixuan Kang

Abstract:

The capacity of conventional cellular networks has reached its upper bound, and this can be well handled by introducing femtocells, which are low-cost and easy to deploy. The spectrum interference issue becomes more critical as value-added multimedia services grow increasingly in two-tier cellular networks. Spectrum allocation is one of the effective methods in interference mitigation technology. This paper proposes a game-theory-based OFDMA downlink spectrum allocation scheme aimed at reducing co-channel interference in two-tier femtocell networks. The framework is formulated as a non-cooperative game, wherein the femto base stations are players and the available frequency channels are strategies. The scheme takes full account of competitive behavior and fairness among stations. In addition, the utility function essentially reflects the interference from the standpoint of channels. This work focuses on co-channel interference and puts forward a negative logarithm interference function on the distance weight ratio aimed at suppressing co-channel interference in the same network layer. This scenario is more suitable for actual network deployment, and the system possesses high robustness. According to the proposed mechanism, interference exists only when players employ the same channel for data communication. This paper focuses on implementing spectrum allocation in a distributed fashion. Numerical results show that the signal to interference and noise ratio can be obviously improved through the spectrum allocation scheme and that the users' quality of service in the downlink can be satisfied. Besides, the average spectrum efficiency in the cellular network is significantly promoted, as the simulation results show.
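
The non-cooperative structure described above can be illustrated with a generic best-response iteration: each femto base station repeatedly switches to the channel that minimizes its co-channel interference cost until no station wants to deviate. The distance-based cost below is a stand-in for the paper's negative-logarithm utility, and the station positions are hypothetical.

```python
# Generic best-response channel-selection game: stations are players,
# channels are strategies. The cost function and positions are stand-ins,
# not the paper's exact utility.
import math

stations = {"F1": (0, 0), "F2": (10, 0), "F3": (0, 12), "F4": (7, 7)}
channels = [0, 1]
choice = {s: 0 for s in stations}                 # all start on channel 0

def cost(s, ch):
    """Co-channel interference grows as co-channel neighbours get closer."""
    return sum(1.0 / math.dist(stations[s], stations[o])
               for o, c in choice.items() if o != s and c == ch)

changed = True
while changed:                                    # iterate to equilibrium
    changed = False
    for s in stations:
        best = min(channels, key=lambda ch: cost(s, ch))
        if best != choice[s]:
            choice[s], changed = best, True
print(choice)
```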

Keywords: Femtocell networks, game theory, interference mitigation, spectrum allocation.

93 The Flashnews as a Commercial Session of Political Marketing: The Content Analysis of the Embedded Political Narratives in Non-Political Media Products

Authors: Zsolt Szabolcsi

Abstract:

Political communication in Hungary underwent a significant change in the 2010s. One element of the transformation is the Flashnews. This media product was launched in March 2015, and since then 40-50 blocks have been broadcast daily on 5 channels. Flashnews blocks are condensed news sessions containing a summary of political narratives. A block starts with the introduction of the narrator, then usually four news topics are presented, and finally the narrator concludes the block. The block lasts only one minute and therefore provides a brief glimpse into the main narratives of political communication at the time. Beyond its rapid pace, what makes its avoidance difficult is that these blocks always occupy the first position in the commercial break of a non-political media product. Although it is only one minute long, its significance is high. The content of the Flashnews reflects the main governmental narratives, and therefore the Flashnews is part of the agenda-setting capacity of political communication. It reaches media consumers who have limited knowledge of and interest in politics and whose use of media products is not politically related. For this audience, the Flashnews pops up in the same way as commercials. Due to its structure and appearance, the impact of the Flashnews seems to be similar to that of commercials embedded in the breaks of media products. It activates existing knowledge constructs, builds up associational links and maintains their presence in a way the recipient is not aware of. The research aims to examine the extent to which the Flashnews and the main news narratives are identical in their content. This aim is realized through content analysis of the two news products, examining the Flashnews and the evening news during major sport events from 2016 to 2018. The initial hypothesis of the research is that the Flashnews is a contribution to the news management technique for an effective articulation of political narratives in public service media channels.

Keywords: Flashnews, political communication, political marketing, news management.

92 Assets Integrity Management in Oil and Gas Production Facilities Through Corrosion Mitigation and Inspection Strategy: A Case Study of Sarir Oilfield

Authors: Iftikhar Ahmad, Youssef Elkezza

Abstract:

The Sarir oilfield is located in North Africa and hosts oil and gas production facilities. The assets of the Sarir oilfield can be divided into the following five categories: (i) well bores and wellheads; (ii) vessels such as separators, desalters, and gas processing facilities; (iii) pipelines, including all flow lines, trunk lines, and shipping lines; (iv) storage tanks; (v) other assets such as turbines and compressors. The nature of the petroleum industry recognizes the potential human, environmental and financial consequences that can result from failing to maintain the integrity of wellheads, vessels, tanks, pipelines, and other assets. The importance of effective asset integrity management increases as the industry infrastructure continues to age. The primary objective of asset integrity management (AIM) is to maintain assets in a fit-for-service condition while extending their remaining life in the most reliable, safe, and cost-effective manner. Corrosion management is one of the important aspects of successful asset integrity management. It covers corrosion mitigation, monitoring, inspection, and risk evaluation. External corrosion on pipelines, well bores, buried assets, and tank bottoms is controlled with a combination of coatings and cathodic protection, while the external corrosion on surface equipment, wellheads, and storage tanks is controlled by coatings. Periodic cleaning of the pipelines by pigging helps prevent internal corrosion. Further, internal corrosion of pipelines is prevented by chemical treatment and controlled operations. This paper describes the integrity management system used in the Sarir oilfield for its oil and gas production facilities, based on standard practices of corrosion mitigation and inspection.

Keywords: Assets integrity management, corrosion prevention in oilfield assets, corrosion management in oilfield, corrosion prevention and inspection activities.

91 The Impact of Temporal Impairment on Quality of Experience (QoE) in Video Streaming: A No Reference (NR) Subjective and Objective Study

Authors: Muhammad Arslan Usman, Muhammad Rehan Usman, Soo Young Shin

Abstract:

Live video streaming is one of the most widely used services among end users, yet it is a big challenge for network operators in terms of quality. The only way to provide excellent Quality of Experience (QoE) to end users is continuous monitoring of the live video stream. For this purpose, several objective algorithms are available that monitor the quality of the video in a live stream. Subjective tests play a very important role in fine-tuning the results of objective algorithms. As human perception is considered to be the most reliable source for assessing the quality of a video stream, subjective tests are conducted in order to develop more reliable objective algorithms. Temporal impairments in a live video stream can have a negative impact on end users. In this paper, we have conducted subjective evaluation tests on a set of video sequences containing the temporal impairment known as frame freezing. Frame freezing is considered a transmission error as well as a hardware error which can result in the loss of video frames on the reception side of a transmission system. In our subjective tests, we have evaluated videos that contain a single freezing event as well as videos that contain multiple freezing events. We have recorded our subjective test results for all the videos in order to compare the available No Reference (NR) objective algorithms. Finally, we have shown the performance of the no reference algorithms used for objective evaluation of the videos and suggested the algorithm that performs best. The outcome of this study shows the importance of QoE and its effect on human perception. The results of the subjective evaluation can serve the purpose of validating objective algorithms.

Keywords: Objective evaluation, subjective evaluation, quality of experience (QoE), video quality assessment (VQA).

90 Life Cycle Assessment of Residential Buildings: A Case Study in Canada

Authors: Venkatesh Kumar, Kasun Hewage, Rehan Sadiq

Abstract:

Residential buildings consume significant amounts of energy and produce large amounts of emissions and waste. However, there is substantial potential for energy savings in this sector, which needs to be evaluated over the life cycle of residential buildings. Life Cycle Assessment (LCA) methodology has been employed to study the primary energy uses and associated environmental impacts of different phases (i.e., product, construction, use, end of life, and beyond building life) for residential buildings. Four different alternatives of residential buildings in Vancouver (BC, Canada) with a 50-year lifespan have been evaluated: a High Rise Apartment (HRA), a Low Rise Apartment (LRA), a Single family Attached House (SAH), and a Single family Detached House (SDH). Life cycle performance of the buildings is evaluated for embodied energy, embodied environmental impacts, operational energy, operational environmental impacts, total life-cycle energy, and total life-cycle environmental impacts. Estimation of operational energy and the LCA are performed using the DesignBuilder and Athena Impact Estimator software, respectively. The study results revealed that, over the life span of the buildings, energy use and environmental impacts follow an identical trend. The LRA was found to be the best alternative in terms of embodied energy use and embodied environmental impacts, while the HRA showed the best life-cycle performance in terms of minimum energy use and environmental impacts. A sensitivity analysis has also been carried out to study the influence of building service lifespans of 50, 75, and 100 years on the relative significance of embodied energy and total life cycle energy. The life-cycle energy requirement of the SDH was found to be the most significant among the four types of residential buildings. Overall, the results disclose that the operational phase of these buildings accounts for 90% of the total life cycle energy, which far outweighs minor differences in embodied effects between the buildings.

Keywords: Building simulation, environmental impacts, life cycle assessment, life cycle energy analysis, residential buildings.

89 Software Architecture and Support for Patient Tracking Systems in Critical Scenarios

Authors: Gianluca Cornetta, Abdellah Touhafi, David J. Santos, Jose Manuel Vazquez

Abstract:

In this work, a new platform for mobile-health systems is presented. The system's target application is providing decision support to rescue corps or military medical personnel in combat areas. The software architecture relies on a distributed client-server system that manages a hierarchy of wireless ad-hoc networks in which several different types of client operate. Each client is characterized by different hardware and software requirements. The lower hierarchy levels rely on a network of completely custom devices that store clinical information and patient status and are designed to form an ad-hoc network operating in the 2.4 GHz ISM band and complying with the IEEE 802.15.4 standard (ZigBee). Medical personnel may interact with such devices, called MICs (Medical Information Carriers), by means of a PDA (Personal Digital Assistant) or an MDA (Medical Digital Assistant), and transmit the information stored in their local databases as well as issue service requests to the upper hierarchy levels by using the IEEE 802.11 a/b/g standard (WiFi). The server acts as a repository that stores both medical evacuation forms and associated events (e.g., a teleconsulting request). All the actors participating in the diagnostic or evacuation process may access such a repository asynchronously and update its content or generate new events. The designed system aims to optimise and improve information spreading and flow among all the system components, with the goal of improving both diagnostic quality and the evacuation process.

Keywords: IEEE 802.15.4 (ZigBee), IEEE 802.11 a/b/g (WiFi), distributed client-server systems, embedded databases, issue trackers, ad-hoc networks.

88 A Comparison of Inverse Simulation-Based Fault Detection in a Simple Robotic Rover with a Traditional Model-Based Method

Authors: Murray L. Ireland, Kevin J. Worrall, Rebecca Mackenzie, Thaleia Flessa, Euan McGookin, Douglas Thomson

Abstract:

Robotic rovers which are designed to work in extra-terrestrial environments present a unique challenge in terms of the reliability and availability of systems throughout the mission. Should some fault occur, with the nearest human potentially millions of kilometres away, detection and identification of the fault must be performed solely by the robot and its subsystems. Faults in the system sensors are relatively straightforward to detect, through the residuals produced by comparison of the system output with that of a simple model. However, faults in the input, that is, the actuators of the system, are harder to detect. A step change in the input signal, caused potentially by the loss of an actuator, can propagate through the system, resulting in complex residuals in multiple outputs. These residuals can be difficult to isolate or distinguish from residuals caused by environmental disturbances. While a more complex fault detection method or additional sensors could be used to solve these issues, an alternative is presented here. Using inverse simulation (InvSim), the inputs and outputs of the mathematical model of the rover system are reversed. Thus, for a desired trajectory, the corresponding actuator inputs are obtained. A step fault near the input then manifests itself as a step change in the residual between the system inputs and the input trajectory obtained through inverse simulation. This approach avoids the need for additional hardware on a mass- and power-critical system such as the rover. The InvSim fault detection method is applied to a simple four-wheeled rover in simulation. Additive system faults and an external disturbance force are applied to the vehicle in turn, such that the dynamic response and sensor output of the rover are impacted. Basic model-based fault detection is then employed to provide output residuals which may be analysed to provide information on the fault/disturbance. InvSim-based fault detection is then employed, similarly providing input residuals which provide further information on the fault/disturbance. The input residuals are shown to provide clearer information on the location and magnitude of an input fault than the output residuals. Additionally, they can allow faults to be more clearly discriminated from environmental disturbances.
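
The input-residual idea can be reduced to a few lines: compare the commanded input against the input recovered by inverse simulation and flag a fault when the residual crosses a threshold. The sketch below uses synthetic signals in place of a real InvSim pass over the rover model, so the fault time, noise level, and threshold are all assumptions.

```python
# Stripped-down sketch of input-residual fault detection. Real inverse
# simulation iterates the full rover model; these signals are synthetic.
import numpy as np

rng = np.random.default_rng(0)
t = np.arange(0.0, 10.0, 0.01)
u_cmd = np.ones_like(t)                              # commanded wheel torque
u_inv = u_cmd + 0.02 * rng.standard_normal(t.size)   # InvSim estimate
u_inv[t >= 6.0] -= 0.5                               # actuator fault at t = 6 s

residual = u_cmd - u_inv
threshold = 5 * 0.02                                 # ~5 sigma of the noise
fault_idx = np.nonzero(np.abs(residual) > threshold)[0]
if fault_idx.size:
    print(f"input fault detected at t = {t[fault_idx[0]]:.2f} s, "
          f"magnitude ~ {residual[fault_idx[0]]:.2f}")
```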

Keywords: Fault detection, inverse simulation, rover, ground robot.

87 Mechanical Properties of Enset Fibers Obtained from Different Breeds of Enset Plant

Authors: Diriba T. Balcha, Boris Kulig, Oliver Hensel, Eyassu Woldesenbet

Abstract:

Enset fiber is an agricultural waste product available in surplus amounts in Ethiopia. However, the hypothesized variation in the properties of this fiber, due to the diversity of its source plant breeds, the fiber position within the plant stem, and the chemical treatment duration, has not been verified, which makes its application in the development of composite products problematic. Currently, limited data are available on the functional properties of the fiber as a potential functional fiber. Thus, an effort is made in this study to narrow the knowledge gaps by characterizing it. The experimental design was developed using the Design-Expert software, and tensile tests were conducted on Enset fiber from 10 breeds: Dego, Dirbo, Gishera, Itine, Siskela, Neciho, Yesherkinke, Tuzuma, Ankogena, and Kucharkia. The effects of 5% NaOH surface treatment duration and fiber location along and across the plant pseudostem were also investigated. The test results show that the rupture stress variation is not significant among the fibers from the 10 Enset breeds. However, strain variation is significant among the fibers from the 10 breeds, with breed Dego fiber showing the highest strain before failure. Surface-treated fibers showed improved rupture strength and elastic modulus per 24 hours of treatment duration. The results also showed that chemical treatment can deteriorate the load-bearing capacity of the fiber: the raw fiber has a higher load-bearing capacity than the treated fiber. It was noted that both the rupture stress and strain increase in a top-to-bottom gradient, whereas there is no significant variation across the stem. Elastic modulus variation both along and across the stem was insignificant. The rupture stress, elastic modulus, and strain of Enset fiber are 360.11 ± 181.86 MPa, 12.80 ± 6.85 GPa and 0.04 ± 0.02 mm/mm, respectively. These results show that Enset fiber is comparable to other natural fibers, such as abaca, banana, and sisal, and can be used as an alternative natural fiber for composite applications. Besides, the insignificant variation of properties among breeds and across the stem means that all breeds and all leaf sheaths of the Enset plant are suitable for fiber extraction. The use of short natural fibers over long ones is preferable to reduce the significant variation of properties along the stem or fiber direction. In conclusion, Enset fiber application for composite product design and development is mechanically feasible.

Keywords: Agricultural waste, chemical treatment, fiber characteristics, natural fiber.

86 Biaxial Testing of Fabrics - A Comparison of Various Testing Methodologies

Authors: O.B. Ozipek, E. Bozdag, E. Sunbuloglu, A. Abdullahoglu, E. Belen, E. Celikkanat

Abstract:

In the textile industry, besides conventional textile products, technical textile goods endowed with additional functional properties are being developed for the technical textile market. These products, especially those produced with weaving technology, are widely preferred in areas such as the sports, geology, medical, automotive, construction and marine sectors. Such textile products are exposed to various stresses and large deformations under typical conditions of use. At this point, sufficient and reliable data cannot be obtained with uniaxial tensile tests for the determination of the mechanical properties of such products, mainly due to their biaxial stress state. Therefore, the preferred approach is a biaxial tensile test method and analysis. These tests and analyses are applied to fabrics with different functional features in order to establish the characteristics and mechanical properties of the textile material. Planar biaxial tensile tests, cylindrical inflation and bulge tests are generally required for textile products used in the automotive, sailing and sports areas and in the construction industry, to minimize accidents throughout their service life. Airbags, seat belts and car tires in the automotive sector are subject to the same biaxial stress states and can be characterized by the same types of experiments. In this study, various biaxial test methods reported in the research literature are compared. Results and discussion are elaborated with a focus on the design of a biaxial test apparatus to obtain applicable experimental data for developing a finite element model. Sample experimental results on a prototype system are presented.

Keywords: Biaxial stress, bulge test, cylindrical inflation, fabric testing, planar tension.

85 Ultrasonic System for Diagnosis of Functional Gastrointestinal Disorders: Development, Verification and Clinical Trials

Authors: Eun-Geun Kim, Won-Pil Park, Dae-Gon Woo, Chang-Yong Ko, Yong-Heum Lee, Dohyung Lim, Tae-Min Shin, Han-Sung Kim, Gyoun-Jung Lee

Abstract:

Functional gastrointestinal disorders affect millions of people of all ages, regardless of race and sex. There are, however, few diagnostic methods for functional gastrointestinal disorders, because functional disorders show no evidence of organic or physical causes. Our research group recently identified that the gastrointestinal tract wall in patients with functional gastrointestinal disorders becomes more rigid than in healthy people when palpating the abdominal regions overlying the gastrointestinal tract. The aim of this study is, therefore, to develop a diagnostic system for functional gastrointestinal disorders based on ultrasound technique, which can quantify the characteristic above, related to the rigidity of the gastrointestinal tract wall. An ultrasound system was designed. The system consisted of a transmitter, an ultrasonic transducer, a receiver, TGC, and a CPLD, and was verified via a phantom test. For the phantom test, ten soft-tissue specimens were harvested from pigs. Five of them were then treated chemically to mimic the rigid condition of the gastrointestinal tract wall induced by functional gastrointestinal disorders. Additionally, the specimens were tested mechanically to verify that the mimicking was reasonable. The customized ultrasound system was finally verified through application to human subjects with and without functional gastrointestinal disorders (Patient and Normal Groups). The mechanical test identified that the chemically treated specimens were more rigid than the normal specimens. This finding compared favorably with the result obtained from the phantom test. The phantom test also showed that the ultrasound system described the specimen geometric characteristics well and detected alterations in the specimens. The maximum amplitude of the ultrasonic reflective signal in the rigid specimens (0.2±0.1 Vp-p) at the interface between the fat and muscle layers was explicitly higher than that in the normal specimens (0.1±0.0 Vp-p). Clinical tests using our customized ultrasound system on human subjects showed that the maximum amplitudes of the ultrasonic reflective signals near the gastrointestinal tract wall in the patient group (2.6±0.3 Vp-p) were generally higher than those in the normal group (0.1±0.2 Vp-p). Here, the maximum reflective signal appeared at approximately 20 mm depth from the abdominal skin for all human subjects, corresponding to the location of the boundary layer close to the gastrointestinal tract wall. These results suggest that the newly designed ultrasound-based diagnostic system may be adequate for diagnosing functional gastrointestinal disorders.

Keywords: Functional gastrointestinal disorders, diagnostic system, phantom test, ultrasound system.

84 Nutritional Value Determination of Different Varieties of Oats and Barley Using the Near-Infrared Spectroscopy Method for Horse Nutrition

Authors: V. Viliene, V. Sasyte, A. Raceviciute-Stupeliene, R. Gruzauskas

Abstract:

In horse nutrition, the most suitable cereals for ration composition are oats and barley. Oats have a high nutritive value because they provide more protein, fiber, iron and zinc than other whole grains, have a good taste, and stimulate metabolic changes in the body. The other cereal, barley, is very similar to oats as a feed except for some characteristics that affect how it is used; barley is lower in fiber than oats and is classified as a "heavy" feed. The value of oat and barley grain depends first of all on its composition. Near-infrared spectroscopy (NIRS) has long been considered and used as a significant method in component and quality analysis and as an emerging technology for authenticity applications in cereal quality control. This paper presents the chemical and amino acid composition of different varieties of barley and oats, as well as the digestible energy of the different cereals for horses. Ten different spring barley (n = 5) and oat (n = 5) varieties, grown in one location in Lithuania, were assayed for their chemical composition (dry matter, crude protein, crude fat, crude ash, crude fiber, starch) and amino acid content, digestible amino acids and amino acid digestibility. The digestible energy of the grains for horses was also calculated. The reflectance spectra of the oat and barley samples were measured by means of NIRS using Foss-Tecator DS2500 equipment. The chemical components fat, crude protein, starch and fiber differed statistically (P<0.05) between the oat and barley varieties. The highest total amino acid content among the oats was determined in the variety Flamingsprofi (4.56 g/kg) and the lowest in the variety Circle (3.57 g/kg); among the barley varieties, respectively, in Publican (3.50 g/kg) and Sebastian (3.11 g/kg). The digestible amino acid content of the different oat varieties varied from 3.11 g/kg to 4.07 g/kg, and of the different barley varieties from 2.59 g/kg to 2.94 g/kg. The average amino acid digestibility of the oats varied from 74.4% (Lys) to 95.6% (Phe), and of the barley from 75.8% (Thr) to 89.6% (Phe). The amount of digestible energy in the analyzed varieties of oats and barley averaged 13.74 MJ/kg DM and 14.85 MJ/kg DM, respectively. An analysis of the results showed that the different oat varieties are preferable for horse nutrition in terms of crude fat, crude fiber, ash and individual amino acid content, while the analyzed barley varieties showed higher amounts of crude protein, digestible Lys and DE content; thus, feed formulations for horses combining oats and barley can be recommended, taking into account the chemical composition of the cereal varieties used.

Keywords: Barley, digestive energy, horses, nutritional value, oats.

83 Selective Encryption Using ISMACryp in Real Time Video Streaming of H.264/AVC for DVB-H Application

Authors: Jay M. Joshi, Upena D. Dalal

Abstract:

Multimedia information availability has increased dramatically with the advent of video broadcasting on handheld devices. But with this availability come problems of maintaining the security of information that is displayed in public. ISMA Encryption and Authentication (ISMACryp) is one of the chosen technologies for service protection in DVB-H (Digital Video Broadcasting - Handheld), the TV system for portable handheld devices. ISMACryp encrypts content encoded with H.264/AVC (advanced video coding) while leaving all structural data as it is. Two modes of ISMACryp are available: the CTR mode (Counter type) and the CBC mode (Cipher Block Chaining). Both modes of ISMACryp are based on the 128-bit AES algorithm. AES algorithms are complex and require a large execution time, which is not suitable for real time applications like live TV. The proposed system aims to gain a deep understanding of video data security in multimedia technologies and to provide security for real time video applications using selective encryption for H.264/AVC. Five levels of security are proposed in this paper, based on the content of the NAL units in the Constrained Baseline profile of H.264/AVC. The selective encryption at the different levels encrypts the intra-prediction mode, residue data, inter-prediction mode or motion vectors only. The experimental results in this paper show that the fifth level, which is full ISMACryp, provides the highest level of security with the longest encryption time, while level one provides a lower level of security, by encrypting only the motion vectors, with a lower execution time and without compromising compression or the quality of the visual content. The encryption scheme adds little cost to the compression process and keeps the file format unchanged, with some direct operations supported. Simulations were carried out in MATLAB.
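
To make the selective CTR-mode idea concrete, the toy sketch below encrypts only the payload of selected NAL units with AES-128 in CTR mode (the cipher underlying ISMACryp's CTR mode) and leaves all structural data untouched. It uses the pycryptodome package; the NAL unit list and the choice of which types to protect are placeholders, not the paper's five-level scheme.

```python
# Toy selective encryption: AES-128 CTR over selected NAL payloads only.
# Requires pycryptodome. NAL types and selection are placeholders.
from Crypto.Cipher import AES
from Crypto.Random import get_random_bytes

key = get_random_bytes(16)                    # AES-128 key

def encrypt_selected(nal_units, selected_types):
    out = []
    for nal_type, header, payload in nal_units:
        if nal_type in selected_types:        # e.g. slices with motion vectors
            cipher = AES.new(key, AES.MODE_CTR)
            payload = cipher.nonce + cipher.encrypt(payload)
        out.append((nal_type, header, payload))  # structure left intact
    return out

nal_units = [("sps", b"\x67", b"seq-params"),     # left in the clear
             ("slice", b"\x65", b"coded-macroblocks")]
protected = encrypt_selected(nal_units, selected_types={"slice"})
print([(t, p[:8]) for t, _, p in protected])
```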

Keywords: AES-128, CAVLC, H.264, ISMACryp.

82 Blueprinting of Normalized Supply Chain Processes: Results in Implementing Normalized Software Systems

Authors: Bassam Istanbouli

Abstract:

With technology evolving every day and with the increase in global competition, industries are always under pressure to be the best. They need to provide good quality products at competitive prices, when and how the customer wants them. In order to achieve this level of service, products and their respective supply chain processes need to be flexible and evolvable; otherwise, changes will be extremely expensive and slow and will have many combinatorial effects. Those combinatorial effects impact the whole organizational structure from a management, financial, documentation and logistics perspective, and especially from the perspective of the Enterprise Resource Planning (ERP) information system. By applying the normalized systems concept/theory to segments of the supply chain, we believe the combinatorial effects can be kept minimal, especially at the time of launching an organization-wide global software project. The purpose of this paper is to point out that if an organization wants to develop software from scratch or implement an existing ERP software for its business needs, and if its business processes are normalized and modular, then most probably this will yield a normalized and modular software system that can be easily modified when the business evolves. Another important goal of this paper is to increase awareness regarding the design of business processes in a software implementation project. If the blueprints created are normalized, then the software developers and configurators will use those modular blueprints to map them into modular software. This paper only prepares the ground for further studies; the above concept will be supported by going through the steps of developing, configuring and/or implementing a software system for an organization by using two methods: the Software Development Lifecycle method (SDLC) and the Accelerated SAP implementation method (ASAP). Both methods start with the customer requirements, then blueprint the business processes, and finally map those processes into a software system. Since those requirements and processes are the starting point of the implementation process, normalizing those processes will result in normalized software.

Keywords: Blueprint, ERP, SDLC, Modular.

Procedia APA BibTeX Chicago EndNote Harvard JSON MLA RIS XML ISO 690 PDF Downloads 354
81 An Optimal Control Method for Reconstruction of Topography in Dam-Break Flows

Authors: Alia Alghosoun, Nabil El Moçayd, Mohammed Seaid

Abstract:

Modeling dam-break flows over non-flat beds requires an accurate representation of the topography, which is the main source of uncertainty in the model. Therefore, developing robust and accurate techniques for reconstructing the topography in this class of problems would reduce the uncertainty in the flow system. In many hydraulic applications, experimental techniques have been widely used to measure the bed topography. In practice, however, experimental work in hydraulics can be very demanding in both time and cost, and computational hydraulics has served as an alternative to laboratory and field experiments. Unlike the forward problem, the inverse problem is used to identify the bed parameters from given experimental data. In this case, the shallow water equations used for modeling the hydraulics need to be rearranged so that the model parameters can be evaluated from measured data. However, this approach is not always possible, and it suffers from stability restrictions. In the present work, we propose an adaptive optimal control technique to numerically identify the underlying bed topography from a given set of free-surface observation data. In this approach, a minimization function is defined and the model parameters are determined iteratively. The proposed technique can be interpreted as a fractional-stage scheme. In the first stage, the forward problem is solved to determine the measurable parameters from known data. In the second stage, an adaptive-control ensemble Kalman filter is implemented to assimilate the observation data and obtain an accurate estimate of the topography. The main features of this method are, on the one hand, the ability to handle complex geometries with no need to rearrange the original model into an explicit form and, on the other hand, its strong stability for simulations of flows in different regimes containing shocks or discontinuities over any geometry. Numerical results are presented for a dam-break flow problem over a non-flat bed using different solvers for the shallow water equations. The robustness of the proposed method is investigated using different numbers of loops, sensitivity parameters, initial samples, and locations of observations. The obtained results demonstrate the high reliability and accuracy of the proposed techniques.
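
For readers unfamiliar with the analysis step at the heart of the second stage, the following sketch shows a stochastic ensemble Kalman filter update in NumPy. It is a generic textbook formulation under the assumption of a linear observation operator H and Gaussian observation noise R, not the authors' adaptive-control variant; the array shapes and names (state_dim, n_ens) are illustrative.

    import numpy as np

    def enkf_update(X, y, H, R, rng):
        """Stochastic EnKF analysis step.
        X : (state_dim, n_ens) forecast ensemble of bed elevations
        y : (obs_dim,) free-surface observations
        H : (obs_dim, state_dim) observation operator
        R : (obs_dim, obs_dim) observation-error covariance
        """
        n_ens = X.shape[1]
        Xm = X.mean(axis=1, keepdims=True)
        A = X - Xm                                    # ensemble anomalies
        P = A @ A.T / (n_ens - 1)                     # sample covariance
        K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)  # Kalman gain
        # Perturb observations so the analysis ensemble keeps the right spread.
        Y = y[:, None] + rng.multivariate_normal(np.zeros(len(y)), R, n_ens).T
        return X + K @ (Y - H @ X)                    # analysis ensemble

    rng = np.random.default_rng(0)
    X = rng.normal(size=(50, 20))    # 50 bed nodes, 20 ensemble members
    H = np.eye(5, 50)                # observe the first 5 nodes
    R = 0.01 * np.eye(5)
    y = rng.normal(size=5)
    X_a = enkf_update(X, y, H, R, rng)

In the topography-reconstruction setting, the state vector holds the unknown bed elevations and the observation operator maps them, through the forward shallow-water solve, to the measured free-surface positions.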

Keywords: Optimal control, ensemble Kalman Filter, topography reconstruction, data assimilation, shallow water equations.

Procedia APA BibTeX Chicago EndNote Harvard JSON MLA RIS XML ISO 690 PDF Downloads 636
80 Low Overhead Dynamic Channel Selection with Cluster-Based Spatial-Temporal Station Reporting in Wireless Networks

Authors: Zeyad Abdelmageid, Xianbin Wang

Abstract:

Choosing the operational channel for a WLAN access point (AP) has traditionally been a static channel assignment process initiated by the user during deployment of the AP, which fails to cope with the dynamic conditions of the assigned channel at the station side afterwards. Moreover, the dramatically growing number of Wi-Fi APs and stations operating in the unlicensed band has led to dynamic, distributed, and often severe interference. This highlights the urgent need for the AP to dynamically select the best overall channel of operation for the basic service set (BSS) by considering the distributed and changing channel conditions at all stations. Consequently, dynamic channel selection algorithms which consider feedback from the station side have been developed. Despite the significant performance improvement, existing channel selection algorithms suffer from very high feedback overhead. Feedback latency from the STAs, due to this high overhead, can cause the eventually selected channel to no longer be optimal for operation, given the dynamically shared nature of the unlicensed band. This has inspired us to develop our own dynamic channel selection algorithm with reduced overhead, built on the proposed low-overhead, cluster-based station reporting mechanism. The main idea behind cluster-based station reporting is the observation that STAs which are very close to each other tend to experience very similar channel conditions. Instead of requesting each STA to report on every candidate channel, which causes high overhead, the AP divides the STAs into clusters and assigns each STA in each cluster one channel to report feedback on. With proper design of the cluster-based reporting, the AP loses no information about the channel conditions at the station side while reducing the feedback overhead. The simulation results show equal, and at times better, performance with a fraction of the overhead. We believe this algorithm has great potential for the design of future low-overhead dynamic channel selection algorithms.
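
As an illustration of the reporting mechanism, the sketch below clusters STAs by position with DBSCAN (the clustering method named in the keywords) and then hands each STA within a cluster a different candidate channel to measure, so the cluster as a whole covers the full channel list. The coordinates, the eps/min_samples values, and the round-robin assignment are assumptions made for the example, not parameters from the paper.

    import numpy as np
    from sklearn.cluster import DBSCAN

    CANDIDATE_CHANNELS = [1, 6, 11, 36, 40]  # example channel list

    # Hypothetical STA positions in meters relative to the AP.
    positions = np.array([[1.0, 2.0], [1.2, 2.1], [0.9, 1.8],    # cluster A
                          [10.0, 9.5], [10.3, 9.9], [9.8, 10.2], # cluster B
                          [25.0, 3.0]])                          # isolated STA

    labels = DBSCAN(eps=1.5, min_samples=2).fit_predict(positions)

    # Round-robin: STA i in a cluster reports on channel i mod num_channels,
    # so nearby STAs (with similar conditions) jointly cover all channels.
    assignment = {}
    for cluster in set(labels):
        members = np.flatnonzero(labels == cluster)
        for i, sta in enumerate(members):
            if cluster == -1:  # noise point: must report on every channel itself
                assignment[sta] = list(CANDIDATE_CHANNELS)
            else:
                assignment[sta] = [CANDIDATE_CHANNELS[i % len(CANDIDATE_CHANNELS)]]

    print(assignment)

A density-based method such as DBSCAN fits this setting because it needs no preset cluster count and naturally flags isolated STAs, which cannot share measurements and must report on all channels themselves.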

Keywords: Channel assignment, Wi-Fi networks, clustering, DBSCAN, overhead.

Procedia APA BibTeX Chicago EndNote Harvard JSON MLA RIS XML ISO 690 PDF Downloads 320
79 An Estimation of Rice Output Supply Response in Sierra Leone: A Nerlovian Model Approach

Authors: Alhaji M. H. Conteh, Xiangbin Yan, Issa Fofana, Brima Gegbe, Tamba I. Isaac

Abstract:

Rice is Sierra Leone’s staple food, and the nation imports over 120,000 metric tons annually due to a shortfall in domestic cultivation. The insufficient level of the crop’s cultivation in Sierra Leone is caused by many problems, and this has led to an ever-widening gap between supply and demand for the crop within the country. Consequently, the government spends heavily on importing grain that could otherwise have been cultivated domestically at lower cost. Hence, this research explores the supply response of rice in Sierra Leone over the period 1980-2010, applying the Nerlovian adjustment model to the Sierra Leone rice data set for that period. The estimated trend equations revealed that time had a significant effect on the output, productivity (yield), and area (acreage) of rice within the period 1980-2010, generally at the 1% level of significance. The results showed that almost all of the growth in output was attributable to increases in the area cultivated to the crop. The time trend variable included to capture government policy intervention showed an insignificant effect on all the variables considered in this research. Both the short-run and long-run price responses were inelastic, since all elasticity values were less than one. Given these findings, immediate actions that will lead to productivity growth in rice cultivation are required. To achieve this, the responsible agencies should provide extension service schemes to farmers as well as motivate them to adopt modern rice varieties and technology in their rice cultivation ventures.
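
For reference, a standard textbook formulation of the Nerlovian partial-adjustment model behind such estimates is sketched below; the exact specification used in the paper (deflators, additional regressors) may differ.

\[
A_t^{*} = \alpha + \beta P_{t-1} + u_t, \qquad
A_t - A_{t-1} = \delta\,(A_t^{*} - A_{t-1}), \quad 0 < \delta \le 1,
\]

which combine into the estimable reduced form

\[
A_t = \alpha\delta + \beta\delta\,P_{t-1} + (1-\delta)\,A_{t-1} + \delta u_t,
\]

where \(A_t\) is the area planted, \(A_t^{*}\) the desired area, \(P_{t-1}\) the lagged price, and \(\delta\) the adjustment coefficient. The short-run price elasticity evaluated at the sample means is \(\beta\delta\,\bar{P}/\bar{A}\), and the long-run elasticity is the short-run elasticity divided by \(\delta\); both being below one is what the abstract reports as inelastic supply.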

Keywords: Nerlovian adjustment model, price elasticities, Sierra Leone, trend equations.

Procedia APA BibTeX Chicago EndNote Harvard JSON MLA RIS XML ISO 690 PDF Downloads 2752
78 Simulated Annealing Algorithm for Data Aggregation Trees in Wireless Sensor Networks and Comparison with Genetic Algorithm

Authors: Ladan Darougaran, Hossein Shahinzadeh, Hajar Ghotb, Leila Ramezanpour

Abstract:

In ad hoc networks, the main issue in protocol design is quality of service, whereas in wireless sensor networks the main constraint is the limited energy of the sensors. In fact, protocols which minimize the power consumption of sensors receive the most attention in wireless sensor networks. One approach to reducing energy consumption in wireless sensor networks is to reduce the number of packets transmitted in the network. Data aggregation, a data-collection technique that combines related data and prevents the transmission of redundant packets, can be effective in reducing the number of transmitted packets. Since processing information consumes less power than transmitting it, data aggregation is of great importance and is therefore used in many protocols [5]. One data aggregation technique is the use of a data aggregation tree: related information packets are combined in intermediate nodes to form one packet, so fewer packets are transmitted in the network, less energy is consumed, and the longevity of the network improves. However, finding an optimal data aggregation tree for collecting data in a network with one sink is an NP-hard problem. Heuristic optimization methods are used to tackle such NP-hard problems; simulated annealing is one of them. In this article, we propose a new method for building the data collection tree in wireless sensor networks using a simulated annealing algorithm, and we evaluate its efficiency against a genetic algorithm.
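
To show the shape of such an approach, here is a minimal simulated annealing loop over candidate aggregation trees, where a tree is encoded as a parent array rooted at the sink and the cost is the total squared link distance, a common stand-in for transmission energy. The neighbor move, cooling schedule, and cost model are illustrative assumptions, not the configuration used in the paper.

    import math
    import random

    def cost(parent, pos):
        """Total squared link length: a proxy for transmission energy."""
        return sum((pos[i][0] - pos[p][0])**2 + (pos[i][1] - pos[p][1])**2
                   for i, p in enumerate(parent) if p is not None)

    def neighbor(parent, n):
        """Re-attach one random non-sink node to another random node."""
        cand = parent[:]
        i = random.randrange(1, n)          # node 0 is the sink
        cand[i] = random.choice([j for j in range(n) if j != i])
        return cand

    def is_tree(parent, n):
        """Every node must reach the sink (node 0) without cycles."""
        for i in range(1, n):
            seen, j = set(), i
            while j != 0:
                if j in seen or parent[j] is None:
                    return False
                seen.add(j)
                j = parent[j]
        return True

    def simulated_annealing(pos, iters=5000, t0=10.0, alpha=0.999):
        n = len(pos)
        parent = [None] + [0] * (n - 1)     # star topology as initial tree
        best = parent
        t = t0
        for _ in range(iters):
            cand = neighbor(parent, n)
            if not is_tree(cand, n):        # reject moves that create cycles
                continue
            d = cost(cand, pos) - cost(parent, pos)
            if d < 0 or random.random() < math.exp(-d / t):
                parent = cand               # accept improving or uphill move
                if cost(parent, pos) < cost(best, pos):
                    best = parent
            t *= alpha                      # geometric cooling
        return best

    random.seed(1)
    pos = [(random.random() * 100, random.random() * 100) for _ in range(15)]
    tree = simulated_annealing(pos)

The uphill-acceptance probability exp(-d/t) is what lets the search escape local minima early on, while the geometric cooling gradually turns it into a greedy descent.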

Keywords: Data aggregation, wireless sensor networks, energy efficiency, simulated annealing algorithm, genetic algorithm.

Procedia APA BibTeX Chicago EndNote Harvard JSON MLA RIS XML ISO 690 PDF Downloads 1651
77 Lexical-Based Method for Opinion Detection on Tripadvisor Collection

Authors: Faiza Belbachir, Thibault Schienhinski

Abstract:

The massive development of online social networks allows users to post and share their opinions on various topics. With this huge volume of opinions, it is interesting to extract and interpret this information for different domains, e.g., product and service benchmarking, politics, and recommendation systems. This is why opinion detection is one of the most important research tasks. It consists of differentiating between opinion data and factual data, and its difficulty lies in determining an approach which returns opinionated documents. Generally, two kinds of approaches are used for opinion detection: lexicon-based approaches and machine learning approaches. In lexicon-based approaches, a dictionary of sentiment words is used in which words are associated with weights; the opinion score of a document is derived from the occurrences of words from this dictionary. In machine learning approaches, a classifier is usually trained on a set of annotated documents containing sentiment, using features such as word n-grams, part-of-speech tags, and logical forms. The majority of these works rely on document text alone to determine the opinion score but do not take into account whether these texts are really reliable. It is therefore interesting to exploit other information to improve opinion detection. In our work, we develop a new way of computing the opinion score by introducing the notion of a trust score: we determine not only which documents are opinionated but also whether these opinions constitute trustworthy information in relation to the topics. To that end, we use the SentiWordNet lexicon to calculate opinion scores and compute different features about users (number of comments, number of useful comments, average usefulness of reviews). We then combine the opinion score and the trust score to obtain a final score. We applied our method to detect trusted opinions in the TripAdvisor collection. Our experimental results show that combining the opinion score with the trust score improves opinion detection.
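
As a concrete illustration of the lexicon-based scoring step, the sketch below derives a document opinion score from SentiWordNet via NLTK and blends it with a user trust score. The averaging over all synsets of a word, the toy trust features, and the simple weighted combination are assumptions made for the example, not the paper's exact formulas.

    # Requires: pip install nltk, then nltk.download('sentiwordnet') and
    # nltk.download('wordnet') once before first use.
    from nltk.corpus import sentiwordnet as swn

    def word_opinion(word):
        """Average (pos - neg) score over all SentiWordNet synsets of a word."""
        synsets = list(swn.senti_synsets(word))
        if not synsets:
            return 0.0
        return sum(s.pos_score() - s.neg_score() for s in synsets) / len(synsets)

    def opinion_score(text):
        """Mean per-word opinion polarity of a whitespace-tokenized document."""
        words = text.lower().split()
        return sum(word_opinion(w) for w in words) / max(len(words), 1)

    def trust_score(n_comments, n_useful, avg_usefulness):
        """Toy user-trust feature: share of useful comments, capped usefulness."""
        if n_comments == 0:
            return 0.0
        return (n_useful / n_comments) * min(1.0, avg_usefulness)

    def final_score(text, user, w=0.5):
        """Assumed combination: weighted mix of opinion and trust scores."""
        return w * opinion_score(text) + (1 - w) * trust_score(*user)

    review = "the room was wonderful but the service was terrible"
    print(final_score(review, (40, 28, 0.8)))

A real pipeline would add part-of-speech-aware synset selection and negation handling, but the two-score combination above captures the structure of the proposed final score.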

Keywords: Tripadvisor, Opinion detection, SentiWordNet, trust score.

Procedia APA BibTeX Chicago EndNote Harvard JSON MLA RIS XML ISO 690 PDF Downloads 711
76 Lightweight and Seamless Distributed Scheme for the Smart Home

Authors: Muhammad Mehran Arshad Khan, Chengliang Wang, Zou Minhui, Danyal Badar Soomro

Abstract:

Security of the smart home, in terms of behavior activity pattern recognition, is a totally different and unique issue compared to the security issues of other scenarios. Sensor devices (low capacity and high capacity) interact and negotiate with each other, detecting the daily behavior activities of individuals to execute common tasks. Once a device (e.g., a surveillance camera, smartphone, or light-detection sensor) is compromised, an adversary can gain access to that device and can damage the daily behavior activity by altering its data and commands. In this scenario, a group of common instruction processes may become entangled and generate deadlock. Therefore, an effective security solution is required for the smart home architecture. This paper proposes a seamless distributed scheme which fortifies computationally constrained wireless devices for secure communication. The proposed scheme is based on a lightweight session-key process that upholds an encrypted link along the trajectory by recognizing individuals' behavior activity patterns. Every device and service provider unit (low capacity sensors (LCS) and high capacity sensors (HCS)) uses an authentication token and originates a secure trajectory connection in the network. Experimental analysis reveals that the proposed scheme strengthens the devices against device-seizure attacks by recognizing daily behavior activities, minimizes the memory utilization of the LCS, and keeps the network free of deadlock. Additionally, the results of a comparison with other schemes indicate that the scheme is efficient in terms of computation and communication.
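
To make the token idea tangible, here is a minimal sketch, using only the Python standard library, of how a low-capacity sensor might derive an authentication token and a per-session key from a pre-shared secret. The HMAC construction, the message fields, and the key sizes are assumptions for illustration, not the scheme's actual protocol.

    import hmac
    import hashlib
    import os
    import time

    PRE_SHARED = os.urandom(32)   # secret provisioned on both LCS and HCS

    def make_token(device_id: str, secret: bytes) -> bytes:
        """Authentication token binding a device ID to a timestamp."""
        msg = f"{device_id}|{int(time.time())}".encode()
        return msg + b"|" + hmac.new(secret, msg, hashlib.sha256).digest()

    def session_key(token_a: bytes, token_b: bytes, secret: bytes) -> bytes:
        """Lightweight session key derived from both parties' tokens."""
        return hmac.new(secret, token_a + token_b, hashlib.sha256).digest()

    tok_lcs = make_token("light-sensor-07", PRE_SHARED)
    tok_hcs = make_token("camera-02", PRE_SHARED)
    k = session_key(tok_lcs, tok_hcs, PRE_SHARED)   # 32-byte session key
    assert k == session_key(tok_lcs, tok_hcs, PRE_SHARED)

HMAC with SHA-256 is attractive for constrained sensors because it needs only symmetric primitives and a few hundred bytes of state, in keeping with the paper's emphasis on minimal memory utilization at the LCS.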

Keywords: Authentication, key-session, security, wireless sensors.

Procedia APA BibTeX Chicago EndNote Harvard JSON MLA RIS XML ISO 690 PDF Downloads 845