Search results for: hyperspectral image classification using tree search algorithm
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 9608

5438 Improving the Efficiency of a High Pressure Turbine by Using Non-Axisymmetric Endwall: A Comparison of Two Optimization Algorithms

Authors: Abdul Rehman, Bo Liu

Abstract:

Axial flow turbines are commonly designed with high loads that generate strong secondary flows and result in high secondary losses, which contribute to roughly 30% to 50% of the total losses. Non-axisymmetric endwall profiling is one of the passive control techniques used to reduce the secondary flow loss. In this paper, the construction and optimization of non-axisymmetric endwall profiles for the stator endwalls are presented to improve the efficiency of a high pressure turbine. The commercial code NUMECA Fine/Design3D coupled with Fine/Turbo was used for the numerical investigation, the design of experiments, and the optimization. All flow simulations used steady RANS with the Spalart-Allmaras turbulence model. The non-axisymmetric endwalls of the stator hub and shroud were created using a perturbation law based on Bezier curves. Each cut, defined by multiple control points, was created along the virtual streamlines in the blade channel. For the design of experiments, each sample was generated from values chosen automatically for the control points defined during parameterization. The optimization was performed with two algorithms: a stochastic algorithm and a gradient-based algorithm. For the stochastic case, a genetic algorithm coupled with an artificial neural network was used in order to reach the global optimum; successive design iterations were evaluated with the artificial neural network before being passed to the flow solver. For the second case, the conjugate gradient algorithm with a three-dimensional CFD flow solver was used to systematically vary a free-form parameterization of the endwall. This method is efficient and less time-consuming because it exploits derivative information of the objective function. The objective was to maximize the isentropic efficiency of the turbine while keeping the mass flow rate constant.
The performance was quantified by a multi-objective function. Within each of these two optimization methods, four cases were considered: the hub only, the shroud only, the combination of hub and shroud, and a fourth case in which the shroud endwall was optimized starting from the optimized hub endwall geometry. The hub optimization increased the efficiency by providing more homogeneous inlet conditions for the rotor; the adverse pressure gradient was reduced, but the total pressure loss in the vicinity of the hub increased. The shroud optimization also increased the efficiency, while the total pressure loss and entropy were reduced. The combination of hub and shroud did not match the results achieved in the individual hub and shroud cases, possibly because there were too many control variables. The fourth case gave the best result because the optimized hub was used as the initial geometry for optimizing the shroud: the efficiency increased more than in the individual cases while the mass flow rate remained equal to that of the baseline turbine design. Finally, the results of the artificial neural network and the conjugate gradient method were compared.
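The surrogate-assisted branch of the optimization above can be sketched compactly. In the following Python toy, a simple analytic function stands in for the CFD solver and a nearest-neighbour interpolator stands in for the artificial neural network; the population size, mutation scale, and objective are illustrative assumptions, not the authors' settings.

```python
import random

# Sketch of a surrogate-assisted genetic algorithm: fitness calls go to a
# cheap surrogate instead of the expensive CFD solver, with one "true"
# evaluation per generation to keep the surrogate anchored.

def cfd_solver(x):                      # stand-in for the flow solver: maximize this
    return -sum((xi - 0.3) ** 2 for xi in x)

class Surrogate:                        # stand-in for the ANN metamodel
    def __init__(self):
        self.samples = []
    def add(self, x, y):
        self.samples.append((x, y))
    def predict(self, x):
        # 1-nearest-neighbour prediction over previously evaluated designs
        return min(self.samples,
                   key=lambda s: sum((a - b) ** 2 for a, b in zip(s[0], x)))[1]

random.seed(2)
dims, pop_size, gens = 4, 20, 30
surrogate = Surrogate()
pop = [[random.random() for _ in range(dims)] for _ in range(pop_size)]
for x in pop:
    surrogate.add(x, cfd_solver(x))     # design-of-experiments seeding

for g in range(gens):
    scored = sorted(pop, key=surrogate.predict, reverse=True)
    parents = scored[:pop_size // 2]
    children = []
    while len(children) < pop_size - len(parents):
        a, b = random.sample(parents, 2)
        child = [(ai + bi) / 2 + random.gauss(0, 0.05)   # crossover + mutation
                 for ai, bi in zip(a, b)]
        children.append(child)
    # Re-anchor the surrogate with one true evaluation per generation.
    surrogate.add(children[0], cfd_solver(children[0]))
    pop = parents + children

best = max(pop, key=cfd_solver)
print([round(v, 2) for v in best])      # drifts toward the optimum near [0.3]*4
```

The design choice mirrors the abstract: cheap surrogate screening of candidate designs, with the expensive solver called only sparingly.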

Keywords: artificial neural network, axial turbine, conjugate gradient method, non-axisymmetric endwall, optimization

Procedia PDF Downloads 226
5437 Ambiguity Resolution for Ground-based Pulse Doppler Radars Using Multiple Medium Pulse Repetition Frequency

Authors: Khue Nguyen Dinh, Loi Nguyen Van, Thanh Nguyen Nhu

Abstract:

In this paper, we propose an adaptive method to resolve ambiguities, together with a ghost-target removal process, for targets detected by a ground-based pulse-Doppler radar using medium pulse repetition frequency (PRF) waveforms. The ambiguity resolution method is an adaptive implementation of the coincidence algorithm, applied on a two-dimensional (2D) range-velocity matrix to resolve range and velocity ambiguities simultaneously, with a proposed clustering filter to enhance the error resilience of the system. We consider the scenario of multiple-target environments. The ghost-target removal process, based on the power after Doppler processing, is proposed to mitigate ghost detections and enhance the performance of ground-based radars using a short PRF schedule in multiple-target environments. Simulation results on a ground-based pulse-Doppler radar model are presented to show the effectiveness of the proposed approach.
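For readers unfamiliar with the coincidence algorithm, a one-dimensional (range-only) sketch is given below; the paper's method works on the full 2D range-velocity matrix with a clustering filter, which this toy omits. The PRF values, tolerance, and target range are assumed purely for illustration.

```python
# Coincidence (Chinese-remainder-style) resolution of range ambiguity with
# multiple medium PRFs: unfold each ambiguous measurement and keep the
# candidate range on which all PRFs agree.
C = 3e8  # speed of light, m/s

def unambiguous_range(prf):
    return C / (2.0 * prf)

def fold(true_range, prf):
    # Apparent (ambiguous) range measured with a single PRF.
    return true_range % unambiguous_range(prf)

def resolve(apparent, prfs, max_range=120e3, tol=50.0):
    # Unfold candidates for the first PRF, then keep the candidate that
    # coincides (within tol metres) with an unfolded candidate of every
    # other PRF.
    ru0 = unambiguous_range(prfs[0])
    k = 0
    while apparent[0] + k * ru0 <= max_range:
        cand = apparent[0] + k * ru0
        if all(
            min(abs(cand - (apparent[i] + m * unambiguous_range(prfs[i])))
                for m in range(int(max_range / unambiguous_range(prfs[i])) + 1))
            <= tol
            for i in range(1, len(prfs))
        ):
            return cand
        k += 1
    return None

prfs = [10e3, 13e3, 17e3]           # three medium PRFs (assumed values)
true_r = 47_000.0                   # metres
meas = [fold(true_r, p) for p in prfs]
print(round(resolve(meas, prfs)))   # recovers 47000
```

A ghost target corresponds to a spurious coincidence; the paper's clustering filter and post-Doppler power test exist precisely to reject those.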

Keywords: ambiguity resolution, coincidence algorithm, medium PRF, ghosting removal

Procedia PDF Downloads 157
5436 A Rotating Facility with High Temporal and Spatial Resolution Particle Image Velocimetry System to Investigate the Turbulent Boundary Layer Flow

Authors: Ruquan You, Haiwang Li, Zhi Tao

Abstract:

A time-resolved particle image velocimetry (PIV) system is developed to investigate the boundary layer flow under the effects of rotation-induced Coriolis and buoyancy forces. The system consists of a 10 W continuous laser diode and a high-speed camera. The laser diode provides a light sheet less than 1 mm thick, and the camera captures 6400 frames per second at 1024×1024 pixels. Both the laser and the camera are mounted on a rotating facility with a radius of 1 m that reaches up to 500 revolutions per minute, so the boundary layer velocity in the rotating channel, with and without ribs, can be measured directly under rotating conditions. To investigate the effect of the buoyancy force, transparent heater glasses provide a constant thermal heat flux; density differences are then generated near the channel wall, which simulates the buoyancy force while the channel rotates. Owing to the high temporal and spatial resolution of the system, proper orthogonal decomposition (POD) can be applied to analyze the characteristics of the turbulent boundary layer flow under rotation. With this rotating facility and PIV system, the velocity profile, Reynolds shear stress, spatial and temporal correlations, and the POD modes of the turbulent boundary layer flow can be discussed.
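The POD step mentioned above can be illustrated with the classical method of snapshots: form the snapshot correlation matrix, take its dominant eigenvector, and combine the snapshots accordingly. The sketch below uses synthetic "velocity" snapshots and a simple power iteration; it illustrates the technique, not the authors' processing chain.

```python
import math, random

# Method-of-snapshots POD: the first spatial mode is recovered from the
# dominant eigenvector of the snapshot correlation matrix.

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def first_pod_mode(snapshots, iters=200):
    n = len(snapshots)
    # Snapshot correlation matrix C[i][j] = <x_i, x_j> / n
    C = [[dot(snapshots[i], snapshots[j]) / n for j in range(n)] for i in range(n)]
    a = [random.random() for _ in range(n)]       # power-iteration start vector
    for _ in range(iters):
        a = [sum(C[i][j] * a[j] for j in range(n)) for i in range(n)]
        norm = math.sqrt(dot(a, a))
        a = [x / norm for x in a]
    # The first POD mode is the a-weighted combination of the snapshots.
    mode = [sum(a[i] * snapshots[i][k] for i in range(n))
            for k in range(len(snapshots[0]))]
    norm = math.sqrt(dot(mode, mode))
    return [x / norm for x in mode]

random.seed(0)
base = [math.sin(0.5 * k) for k in range(16)]     # dominant coherent structure
snaps = [[(1.0 + 0.1 * t) * b + 0.01 * random.random() for b in base]
         for t in range(8)]
mode = first_pod_mode(snaps)
# The recovered mode should be nearly parallel to the underlying structure.
cos = abs(dot(mode, base)) / math.sqrt(dot(base, base))
print(round(cos, 3))
```

In practice the snapshot matrix holds thousands of PIV vector fields and the decomposition is done with an SVD, but the mathematics is the same.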

Keywords: rotating facility, PIV, boundary layer flow, spatial and temporal resolution

Procedia PDF Downloads 184
5435 The Ecosystem of Food Allergy Clinical Trials: A Systematic Review

Authors: Eimar Yadir Quintero Tapias

Abstract:

Background: Science is not generally self-correcting; many clinical studies end with the same conclusion, "more research is needed." This study hypothesizes that we first need a better appraisal of the available (and unavailable) evidence instead of producing more of the same flawed inquiries. Methods: Systematic review of ClinicalTrials.gov study records using the following Boolean operators: (food OR nut OR milk OR egg OR shellfish OR wheat OR peanuts) AND (allergy OR allergies OR hypersensitivity OR hypersensitivities). Variables included the status of the study (e.g., active or completed), availability of results, sponsor type, and sample size, among others. To determine the rates of non-publication in journals indexed by PubMed, an advanced search query using the specific NCT registry numbers (e.g., NCT000001 OR NCT000002 OR...) was performed. As a prophylactic measure against P-hacking, data analyses included only descriptive statistics and no inferential approaches. Results: A total of 2092 study records matched the search query described above (date: September 13, 2019). Most studies were interventional (n = 1770; 84.6%) and the remainder observational (n = 322; 15.4%). Universities, hospitals, and research centers sponsored over half of these investigations (n = 1208; 57.7%), 308 studies (14.7%) were industry-funded, and 147 received NIH grants; the remaining studies had mixed sponsorship. Of the completed studies (n = 1156; 55.2%), 248 (21.5%) have results available at the registry site, and 417 (36.1%) matched NCT numbers of journal papers indexed by PubMed. Conclusions: The internal and external validity of human research is critical for the appraisal of medical evidence. It is imperative to analyze the entire dataset of clinical studies, preferably as anonymized patient-level raw data, before rushing to conclusions with insufficient and inadequate information.
Publication bias and non-registration of clinical trials limit the evaluation of the evidence on therapeutic interventions for food allergy, such as oral and sublingual immunotherapy, as they do for any other medical condition. Over half of human food allergy research remains unpublished.

Keywords: allergy, clinical trials, immunology, systematic reviews

Procedia PDF Downloads 140
5434 Modification Encryption Time and Permutation in Advanced Encryption Standard Algorithm

Authors: Dalal N. Hammod, Ekhlas K. Gbashi

Abstract:

Today, cryptography is used in many applications to achieve high security in data transmission and in real-time communications. AES has long enjoyed global acceptance and is used for securing sensitive data in various industries, but it suffers from slow processing and long data-transfer times. This paper suggests a method to enhance the Advanced Encryption Standard (AES) algorithm with respect to time and permutation. The suggested method (MAES) is based on modifying SubBytes and ShiftRows in the encryption part and InvSubBytes and InvShiftRows in the decryption part. After implementing the proposal and testing the results, the modified AES achieved good results, accomplishing communication with high performance in terms of randomness, encryption time, storage space, and avalanche effect. The proposed method produces ciphertext with good randomness, passing the NIST statistical tests against attacks; MAES also reduced the encryption time by 10% relative to the original AES and is therefore faster. The proposed method likewise showed better memory utilization, with a value of 54.36 for MAES versus 66.23 for the original AES. Finally, the avalanche effect, used to measure the diffusion property, is 52.08% for the modified AES and 51.82% for the original AES.
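The avalanche-effect percentages quoted above come from a standard measurement: flip one input bit and count the fraction of output bits that change. Since the modified AES is not publicly available, the sketch below uses SHA-256 as a stand-in keyed transformation purely to demonstrate the metric itself.

```python
import hashlib

# Avalanche-effect measurement: flip a single input bit and report the
# percentage of output bits that differ. A well-diffusing primitive sits
# near 50%. SHA-256 stands in for the block cipher here.

def bits(data: bytes) -> str:
    return ''.join(f'{b:08b}' for b in data)

def avalanche(block: bytes, bit_index: int) -> float:
    flipped = bytearray(block)
    flipped[bit_index // 8] ^= 1 << (7 - bit_index % 8)   # flip one input bit
    out1 = hashlib.sha256(block).digest()
    out2 = hashlib.sha256(bytes(flipped)).digest()
    diff = sum(a != b for a, b in zip(bits(out1), bits(out2)))
    return 100.0 * diff / (len(out1) * 8)

block = b'sixteen byte msg'                  # one 128-bit input block
scores = [avalanche(block, i) for i in range(128)]
print(round(sum(scores) / len(scores), 2))   # mean % of output bits changed
```

Running the same loop against the original and modified ciphers is how figures like 51.82% versus 52.08% are obtained.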

Keywords: modified AES, randomness test, encryption time, avalanche effects

Procedia PDF Downloads 250
5433 Effects of Pulsed Electromagnetic and Static Magnetic Fields on Musculoskeletal Low Back Pain: A Systematic Review Approach

Authors: Mohammad Javaherian, Siamak Bashardoust Tajali, Monavvar Hadizadeh

Abstract:

Objective: This systematic review was conducted to evaluate the effects of Pulsed Electromagnetic Fields (PEMF) and Static Magnetic Fields (SMF) on pain relief and functional improvement in patients with musculoskeletal Low Back Pain (LBP). Methods: Seven electronic databases were searched by two researchers independently to identify published Randomized Controlled Trials (RCTs) on the efficacy of pulsed electromagnetic, static magnetic, and therapeutic nuclear magnetic fields. The databases searched were Ovid Medline®, Ovid Cochrane RCTs and Reviews, PubMed, Web of Science, Cochrane Library, CINAHL, and EMBASE, from 1968 to February 2016. The keywords were selected from MeSH. After the initial search, the references of the selected studies were also searched to identify further potentially relevant manuscripts. Published RCTs in English were included if they reported changes in pain and/or functional disability following application of magnetic fields to chronic musculoskeletal low back pain. Studies of surgical patients, patients with pelvic pain, and combinations with other treatment techniques such as acupuncture or diathermy were excluded. The identified studies were critically appraised and the data were extracted independently by two raters (M.J. and S.B.T.); disagreements were resolved through discussion between the raters. Results: In total, 1505 abstracts were found in the initial electronic search and reviewed to identify potentially relevant manuscripts. Fifty-eight possibly appropriate studies were retrieved in full text, of which 48 were excluded after their full texts were reviewed. The ten selected articles fell into three subgroups: PEMF (6 articles), SMF (3 articles), and therapeutic nuclear magnetic fields (tNMF) (1 article); the single tNMF study was excluded.
In the PEMF subgroup, the one study of acute LBP did not show significant positive results, while the majority of the other five studies, on Chronic Low Back Pain (CLBP), indicated efficacy for pain relief and functional improvement; however, the study with the fewest sessions (6 sessions over 2 weeks) did not report a significant difference between treatment and control groups. In the SMF subgroup, two articles reported near-significant pain reduction without any functional improvement, although more studies are needed. Conclusion: PEMF with a strength of 5 to 150 G or 0.1 to 0.3 G and a frequency of 5 to 64 Hz, or a sweep of 7 Hz to 7 kHz, can be considered an effective modality for pain relief and functional improvement in patients with chronic low back pain, but there is not enough evidence to confirm its effectiveness in acute low back pain. To achieve adequate effectiveness, it is suggested to apply this modality for 20 minutes per day over at least 9 sessions. SMF has not been reported to be substantially effective in decreasing pain or improving function in chronic low back pain. More studies are necessary to obtain more reliable results.

Keywords: pulsed electromagnetic field, static magnetic field, magnetotherapy, low back pain

Procedia PDF Downloads 208
5432 Enhancing Throughput for Wireless Multihop Networks

Authors: K. Kalaiarasan, B. Pandeeswari, A. Arockia John Francis

Abstract:

Wireless multi-hop networks consist of one or more intermediate nodes along the path that receive and forward packets via wireless links. The backpressure algorithm provides throughput-optimal routing and scheduling decisions for multi-hop networks with dynamic traffic. Xpress, a cross-layer backpressure architecture, was designed to reach the capacity of wireless multi-hop networks; it coordinates the network layers by turning a mesh network into a wireless switch, and transmissions over the network are scheduled using a throughput-optimal backpressure algorithm. However, this architecture operates well below capacity due to out-of-order packet delivery and variable packet sizes. In this paper, we present Xpress-T, a throughput-optimal backpressure architecture with TCP support, designed to reach the maximum throughput of wireless multi-hop networks. Xpress-T operates at the IP layer, so any transport protocol, including TCP, can run on top of it. The proposed design not only avoids bottlenecks but also handles out-of-order packet delivery and variable packet sizes, and it optimally load-balances traffic across paths when needed, improving fairness among competing flows. Our simulation results show that Xpress-T delivers 65% more throughput than Xpress.
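The per-slot decision made by a backpressure scheduler, which Xpress and Xpress-T both build on, can be sketched in a few lines: each link is weighted by its queue differential, and the heaviest link is served. The topology and backlogs below are illustrative assumptions.

```python
# Max-weight (backpressure) scheduling step: weight each wireless link by the
# backlog difference between its endpoints and serve the largest positive one.

def backpressure_schedule(queues, links):
    # queues: node -> backlog (packets); links: list of (src, dst) pairs
    best_link, best_weight = None, 0
    for src, dst in links:
        weight = queues[src] - queues[dst]     # queue differential
        if weight > best_weight:
            best_link, best_weight = (src, dst), weight
    return best_link                            # None means stay idle

queues = {'A': 9, 'B': 4, 'C': 1, 'D': 0}
links = [('A', 'B'), ('B', 'C'), ('C', 'D'), ('A', 'C')]
print(backpressure_schedule(queues, links))    # ('A', 'C'): differential 8
```

Repeating this decision every slot drains the most congested parts of the network first, which is what makes the policy throughput-optimal; the reordering that this per-slot freedom induces is exactly the TCP problem Xpress-T addresses.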

Keywords: backpressure scheduling and routing, TCP, congestion control, wireless multihop network

Procedia PDF Downloads 524
5431 Automatic Adjustment of Thresholds via Closed-Loop Feedback Mechanism for Solder Paste Inspection

Authors: Chia-Chen Wei, Pack Hsieh, Jeffrey Chen

Abstract:

Surface Mount Technology (SMT) is widely used in electronic assembly, in which electronic components are mounted onto the surface of a printed circuit board (PCB). Most defects in the SMT process are related to the quality of the solder paste printing, and they lead to considerable manufacturing costs in the electronics assembly industry. The solder paste inspection (SPI) machine, which controls and monitors the amount of solder paste printed, has therefore become an important part of the production process. So far, SPI thresholds have been set using statistical analysis and experts' experience. Because the production data are not normally distributed and the production processes exhibit various sources of variation, defects related to solder paste printing still occur. To solve this problem, this paper proposes an online machine learning algorithm, called the automatic threshold adjustment (ATA) algorithm, within a closed-loop architecture in the SMT process to determine the best threshold settings. Simulation experiments show that the proposed threshold settings improve the accuracy from 99.85% to 100%.
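A minimal sketch of what a closed-loop threshold adjustment can look like is given below; the update rule, step size, and data are assumptions for illustration and are not the ATA algorithm itself, which the abstract does not specify.

```python
# Closed-loop online threshold adjustment: after each inspected board,
# feedback from downstream test (true defect or not) nudges the solder-volume
# threshold so that false alarms and escapes are both pushed down.

def adjust(threshold, volume, is_defect, step=5.0):
    flagged = volume < threshold          # SPI flags low solder volume
    if flagged and not is_defect:         # false alarm -> loosen threshold
        return threshold - step
    if not flagged and is_defect:         # escape -> tighten threshold
        return threshold + step
    return threshold                      # correct decision -> keep

# Stream of (measured solder volume %, ground-truth defect) observations.
stream = [(78, True), (92, False), (81, True), (95, False),
          (79, True), (88, False), (83, True), (90, False)]
t = 70.0
for volume, is_defect in stream:
    t = adjust(t, volume, is_defect)
print(t)   # -> 85.0: above every defective volume seen, below the good ones
```

The point of the closed loop is that the threshold tracks the process as it drifts, instead of being fixed once from a normality assumption that the production data do not satisfy.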

Keywords: big data analytics, Industry 4.0, SPI threshold setting, surface mount technology

Procedia PDF Downloads 119
5430 Just Not Seeing It: Exploring the Relationship between Inattention Blindness and Banner Blindness

Authors: Carie Cunningham, Kristen Lynch

Abstract:

Despite a viewer's belief that they are paying attention, they often miss out on their surroundings, a phenomenon referred to as inattentional blindness: the failure of an individual to orient their attention to a particular item in their visual field. It is well defined in the psychology literature, and a related phenomenon has been examined in advertising, where failing to comprehend or remember items in one's field of vision is known as banner blindness. Banner blindness occurs when individuals habitually see banners in specific areas of a webpage and condition themselves to ignore those habitual areas. Another reason individuals avoid these habitual areas (usually at the top or sides of a webpage) is the lack of personal relevance or pertinent information for the viewer. Banner blindness, while a web-based concept, may thus relate to inattentional blindness. This paper proposes an analysis of the true similarities and differences between these concepts, bridging the two lines of thinking. Forty participants took part in an eye-tracking and post-survey experiment testing attention and memory measures in both a banner blindness and an inattentional blindness condition. The two conditions were run between subjects in semi-randomized order: half of the participants were told to search through the content ignoring the advertising banners, and the other half were first told to search through the content ignoring a distractor icon. The groups were switched after 5 trials, and then 5 more trials were completed. In reviewing the literature, sustainability communication was found to have many inconsistencies between message production and viewer awareness; for the purposes of this study, advertising materials were used as stimuli.
Results suggest that there are gaps between the two concepts and that more research should be done testing these effects in a real-world setting versus an online environment. This work contributes to theory by exploring the overlap between inattentional blindness and banner blindness, and it provides the advertising industry with evidence that viewers can ignore items in their field of view without being conscious of doing so, which should inform message development.

Keywords: attention, banner blindness, eye movement, inattention blindness

Procedia PDF Downloads 277
5429 Imaginations of the Silk Road in Sven Hedin’s Travel Writings: 1900-1936

Authors: Kexin Tan

Abstract:

The Silk Road is a concept idiosyncratic in nature: Western scholars co-created and conceptualized it in its early days; it was then transliterated into the languages of the countries along the Silk Road and redefined, reimagined, and reconfigured by the public in the second half of the twentieth century. The image is therefore a mirror not only of the discursive interactions between East and West but also of those between Self and Other. The travel narrative of Sven Hedin, through which the Silk Road was enriched in meaning and popularized, is the focus of this study. This article examines how the Silk Road was imagined in three key texts by Sven Hedin: The Silk Road, The Wandering Lake, and The Flight of "Big Horse". Three recurring themes are extracted and analyzed: the Silk Road as the land of enigmas, as the virgin land, and as the reconnecting road. Ideas about ethnotypes and images drawn from theorists such as Joep Leerssen are deployed in the analysis. The research tracks how these images were configured, concentrating on China's ethnotypes, travel writing tropes, and the Silk Road discourse that preceded Sven Hedin. Hedin's role in his expedition, his geopolitical viewpoints, and the commercial considerations behind his books are also discussed in relation to the intellectual construct of the Silk Road. It is found that the images of the Silk Road and the discursive traditions behind them are mobile rather than static, inclusive rather than antithetical. The paradoxical character of the Silk Road reveals the complexity of the socio-historical background of Hedin's time, as well as the collision of discursive traditions and practical concerns. While Hedin's discursive construction of the Silk Road image embodies the bias of Self-West against Other-East, characteristics such as fluidity and openness may offer a hint of its resurgence in the postcolonial era.

Keywords: the silk road, Sven Hedin, imagology, ethnotype, travelogue

Procedia PDF Downloads 198
5428 A Hybrid Data Mining Algorithm Based System for Intelligent Defence Mission Readiness and Maintenance Scheduling

Authors: Shivam Dwivedi, Sumit Prakash Gupta, Durga Toshniwal

Abstract:

It is a challenging task today to keep defence forces in the highest state of combat readiness under budgetary constraints: a huge amount of time and money is squandered on unnecessary and expensive traditional maintenance activities. To overcome this limitation, a Defence Intelligent Mission Readiness and Maintenance Scheduling System is proposed, which improves the maintenance system by diagnosing equipment condition and predicting maintenance requirements. Based on new data mining algorithms, the system intelligently optimises mission readiness for imminent operations and maintenance scheduling in repair echelons. With modified data mining algorithms such as a Weighted Feature Ranking Genetic Algorithm and an SVM-Random Forest linear ensemble, it improves reliability, availability, and safety while reducing maintenance cost and Equipment Out of Action (EOA) time. The results clearly show that the introduced algorithms have an edge over conventional data mining algorithms. By using this intelligent condition-based maintenance approach, the system improves the operational and maintenance decision strategy of the defence force.

Keywords: condition based maintenance, data mining, defence maintenance, ensemble, genetic algorithms, maintenance scheduling, mission capability

Procedia PDF Downloads 301
5427 The Stereotypical Images of Marginalized Women in the Poetry of Rita Dove

Authors: Wafaa Kamal Isaac

Abstract:

This paper sheds light on the stereotypical images of marginalized black women in the poetry of Rita Dove and explores how stereotypes held by society and public perceptions perpetuate the marginalization of black women. Dove is considered one of the most important African-American poets, and she devoted her writings to exploring the problem of identity confronting marginalized women in America. Besides tackling black women's stereotypical images, this paper focuses on the psychological damage black women have suffered from their stripped identity. In 'Thomas and Beulah', Dove reflects the black woman's longing for her homeland as compensation for her lost identity. The poem conveys atavistic feelings through recurrent images, both aural and visual, such as the image of Beulah, who represents the African-American woman searching for an identity while being denied one and humiliated in the newly founded society. In protest against the stereotypical mule image imposed upon black women in America, Dove in 'On the Bus with Rosa Parks' tries to ignite beaten spirits to struggle for their own rights by revitalizing the rebellious nature and strong determination of the historical figure Rosa Parks, who sparked the Civil Rights Movement. In 'Daystar', Dove shows that black women are subjected to double-edged oppression: first, in terms of race, as black women in an unjust white society that violates their rights because of their black origins; and second, in terms of gender, as members of the female sex expected to exist only to serve men's needs. Similarly, in the 'Adolescence' series, Dove focuses on the double marginalization black women have experienced, concluding that it results from the domination of the masculine world and the oppression of the white world.
Moreover, Dove's 'Beauty and the Beast' investigates African-American women's estrangement and identity crisis in America and sheds light on the psychological consequences of the violation of marginalized women's identity; the poem shows black women's self-debasement, helplessness, and double consciousness emanating from a sense of uprootedness. Finally, this paper finds that the negative, debased, and inferior stereotypical image held by society not only contributed to the marginalization of black women but also silenced and muted their voices.

Keywords: stereotypical images, marginalized women, Rita Dove, identity

Procedia PDF Downloads 168
5426 A Stochastic Vehicle Routing Problem with Ordered Customers and Collection of Two Similar Products

Authors: Epaminondas G. Kyriakidis, Theodosis D. Dimitrakos, Constantinos C. Karamatsoukis

Abstract:

The vehicle routing problem (VRP) is a well-known problem in Operations Research and has been widely studied during the last fifty-five years. The context of the VRP is that of delivering or collecting products to or from customers who are scattered in a geographical area and have placed orders for these products. A vehicle or a fleet of vehicles starts from a depot and visits the customers in order to satisfy their demands. Special attention has been given to the capacitated VRP, in which the vehicles have limited carrying capacity for the goods that are delivered or collected. In the present work, we present a specific capacitated stochastic vehicle routing problem with many realistic applications. We develop and analyze a mathematical model for a vehicle routing problem in which a vehicle starts its route from a depot and visits N customers according to a particular sequence in order to collect from them two similar but not identical products, which we call product 1 and product 2. Each customer possesses items of either product 1 or product 2 with known probabilities, and the number of items that each customer possesses is a discrete random variable with known distribution. The actual quantity and type of product that each customer possesses are revealed only when the vehicle arrives at the customer's site. The vehicle has two compartments, compartment 1 and compartment 2, suitable for loading product 1 and product 2, respectively. It is, however, permitted to load items of product 1 into compartment 2 and items of product 2 into compartment 1; these actions incur costs due to extra labor. The vehicle is allowed during its route to return to the depot to unload the items of both products.
The travel costs between consecutive customers and between the customers and the depot are known. The objective is to find the optimal routing strategy, i.e., the routing strategy that minimizes the total expected cost among all possible strategies for servicing all customers. A suitable dynamic programming algorithm can be developed for the determination of the optimal routing strategy, and it can be proved that the optimal routing strategy has a specific threshold-type structure: for each customer, the optimal actions are characterized by certain critical integers. This structural result enables us to design a special-purpose dynamic programming algorithm that operates only over strategies having this structural property. Extensive numerical results provide strong evidence that the special-purpose dynamic programming algorithm is considerably more efficient than the initial dynamic programming algorithm. Furthermore, if we consider the same problem without the assumption that the customers are ordered, numerical experiments indicate that the optimal routing strategy can be computed only if N is less than or equal to eight.
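The flavour of the dynamic program can be conveyed with a stripped-down version: one product instead of two, N ordered customers with stochastic demand, and a vehicle of capacity Q that may return to the depot to unload between customers. All costs, demands, and probabilities below are illustrative assumptions, not the paper's model.

```python
import functools

# Simplified stochastic-collection DP: state = (last customer served, load on
# board); action = go directly to the next customer, or detour via the depot
# to unload first. Demand overflow forces an unplanned round trip.

N, Q = 4, 3
DEMAND = [(1, 0.6), (2, 0.4)]          # each customer holds 1 or 2 items
TRAVEL = 1.0                            # cost between consecutive customers
DEPOT = 2.0                             # cost customer <-> depot (one way)

@functools.lru_cache(maxsize=None)
def cost(j, load):
    """Minimal expected remaining cost after serving customer j with 'load'
    items on board (customers j+1..N-1 still to visit)."""
    if j == N - 1:
        return DEPOT                    # final return to the depot
    direct = TRAVEL + expected_next(j + 1, load)
    via_depot = 2 * DEPOT + TRAVEL + expected_next(j + 1, 0)
    return min(direct, via_depot)       # threshold-type choice in 'load'

def expected_next(j, load):
    total = 0.0
    for d, p in DEMAND:
        if load + d <= Q:
            total += p * cost(j, load + d)
        else:                           # overflow: forced round trip to unload
            total += p * (2 * DEPOT + cost(j, d))
    return total

# Expected cost of the whole route, starting at the depot, customer 0 first.
print(round(DEPOT + expected_next(0, 0), 3))
```

The paper's structural result shows up even here: for each customer, `direct` beats `via_depot` exactly when the load is below a critical integer, which is what lets the special-purpose algorithm search only over threshold strategies.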

Keywords: dynamic programming, similar products, stochastic demands, stochastic preferences, vehicle routing problem

Procedia PDF Downloads 260
5425 Evaluation of a Data Fusion Algorithm for Detecting and Locating a Radioactive Source through Monte Carlo N-Particle Code Simulation and Experimental Measurement

Authors: Hadi Ardiny, Amir Mohammad Beigzadeh

Abstract:

Through a combination of various sensors and data fusion methods, the detection of potential nuclear threats can be significantly enhanced by extracting more information from different data sources. In this research, an experimental and modeling approach was employed to track a radioactive source by combining a surveillance camera and a radiation detector (NaI). In the experiment, three mobile robots were used, one of them carrying a radioactive source. An algorithm was developed to identify the contaminated robot through the correlation between the camera images and the detector data. The computer vision method extracts the movements of all robots in the XY plane coordinate system, and the detector system records the gamma-ray count. The positions of the robots and the corresponding counts from the moving source were modeled using the MCNPX simulation code with the experimental geometry taken into account. The results demonstrated a high level of accuracy in finding and locating the target in both the simulation model and the experimental measurement. Such modeling techniques prove valuable for designing different scenarios and intelligent systems before initiating any experiments.
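The identification step can be sketched as a correlation test: the detector count rate should follow the inverse-square distance of whichever robot carries the source, so the robot whose 1/d² track correlates best with the counts is flagged. The trajectories and count model below are synthetic assumptions, not the paper's data.

```python
import math, random

# Data-fusion sketch: correlate the detector count series against the
# inverse-square distance series of each camera-tracked robot.

def pearson(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

random.seed(1)
detector = (0.0, 0.0)
# Camera-derived XY tracks of three robots over 30 frames; robot 1 is "hot".
tracks = {r: [(random.uniform(1, 10), random.uniform(1, 10)) for _ in range(30)]
          for r in range(3)}
inv_sq = {r: [1.0 / ((x - detector[0]) ** 2 + (y - detector[1]) ** 2)
              for x, y in tr] for r, tr in tracks.items()}
counts = [5000 * v + random.gauss(0, 5) for v in inv_sq[1]]  # source on robot 1

scores = {r: pearson(inv_sq[r], counts) for r in tracks}
print(max(scores, key=scores.get))   # flags robot 1 as contaminated
```

In the actual study, the MCNPX model replaces the toy count expression, so the expected count for each hypothesis is computed with the full experimental geometry rather than a bare inverse-square law.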

Keywords: nuclear threats, radiation detector, MCNPX simulation, modeling techniques, intelligent systems

Procedia PDF Downloads 129
5424 Effect of Waste Bottle Chips on Strength Parameters of Silty Soil

Authors: Seyed Abolhasan Naeini, Hamidreza Rahmani

Abstract:

Laboratory consolidated undrained triaxial (CU) tests were carried out to study the strength behavior of silty soil reinforced with randomly distributed plastic waste bottle chips. Specimens for the triaxial compression tests were mixed with plastic waste chips at 0.25, 0.50, 0.75, 1.0, and 1.25% by dry weight of soil and at three different chip lengths: 4, 8, and 12 mm. In all samples, the width and thickness of the plastic chips were kept constant. According to the results, the amount and size of the plastic waste bottle chips play an important role in increasing the strength parameters of the reinforced silt compared to the pure soil. Given these results, the suggested method of soil improvement can be used in many engineering problems, such as increasing bearing capacity and reducing settlement in foundations.

Keywords: reinforcement, silt, soil improvement, triaxial test, waste bottle chips

Procedia PDF Downloads 289
5423 Delineating Floodplain along the Nasia River in Northern Ghana Using HAND Contour

Authors: Benjamin K. Ghansah, Richard K. Appoh, Iliya Nababa, Eric K. Forkuo

Abstract:

The Nasia River is an important source of water for domestic and agricultural purposes for the inhabitants of its catchment. Major farming activities take place within the floodplain of the river and its network of tributaries. The actual inundation extent of the river system is, however, unknown. Reasons for this lack of information include financial constraints and inadequate human resources, as flood modelling is becoming increasingly complex. Knowledge of the inundation extent will help in assessing the risk posed by the annual flooding of the river and in planning flood recession agricultural activities. This study used a simple terrain-based algorithm, Height Above Nearest Drainage (HAND), to delineate the floodplain of the Nasia River and its tributaries. The HAND model is a drainage-normalized digital elevation model whose height reference is based on the local drainage systems rather than the average mean sea level (AMSL). The underlying principle of the HAND model is that hillslope flow paths behave differently when the reference gradient is towards the local drainage network rather than seaward. The new terrain model of the catchment was created using NASA's SRTM Digital Elevation Model (DEM) at 30 m resolution as the only data input. Contours (HAND contours) were then generated from the normalized DEM. Based on a field flood inundation survey, historical information on flooding of the area, and satellite images, a HAND contour of 2 m was found to correlate best with the flood inundation extent of the river and its tributaries. An accuracy of 75% was obtained when the surface area enclosed by the 2 m contour was compared with the floodplain surface area computed from a satellite image captured during the peak flooding season in September 2016. It was estimated that the flooding of the Nasia River and its tributaries creates a floodplain area of 1011 km².
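The core of the HAND computation can be illustrated on a toy grid: each cell's elevation is referenced to a drainage cell rather than to sea level, and cells at or below the chosen contour (here 2 m) form the floodplain. This sketch uses a nearest-drainage-cell shortcut instead of true hillslope flow-path tracing, so it illustrates the idea rather than the full model:

```python
import numpy as np

# Toy 5x5 DEM (m above sea level); a river runs down the middle column.
dem = np.array([
    [12., 10.,  8., 10., 12.],
    [11.,  9.,  7.,  9., 11.],
    [10.,  8.,  6.,  8., 10.],
    [ 9.,  7.,  5.,  7.,  9.],
    [ 8.,  6.,  4.,  6.,  8.],
])
drainage = np.zeros_like(dem, dtype=bool)
drainage[:, 2] = True                      # middle column is the channel

# Simplified HAND: height of each cell above its nearest drainage cell.
drain_cells = np.argwhere(drainage)
hand = np.empty_like(dem)
for r in range(dem.shape[0]):
    for c in range(dem.shape[1]):
        # nearest drainage cell by Euclidean distance (flow-path stand-in)
        d2 = (drain_cells[:, 0] - r) ** 2 + (drain_cells[:, 1] - c) ** 2
        nearest = drain_cells[np.argmin(d2)]
        hand[r, c] = dem[r, c] - dem[nearest[0], nearest[1]]

floodplain = hand <= 2.0                   # the 2 m HAND contour
print(int(floodplain.sum()), "of", dem.size, "cells inside the floodplain")
```

On real data the same thresholding is applied to a HAND raster derived from the flow-direction network of the SRTM DEM.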

Keywords: digital elevation model, floodplain, HAND contour, inundation extent, Nasia River

Procedia PDF Downloads 459
5422 Adaptive Power Control of the City Bus Integrated Photovoltaic System

Authors: Piotr Kacejko, Mariusz Duk, Miroslaw Wendeker

Abstract:

This paper presents an adaptive controller to track the maximum power point of photovoltaic (PV) modules under fast irradiation changes on a city-bus roof. Photovoltaic systems have become a prominent option as an additional energy source for vehicles. The Municipal Transport Company (MPK) in Lublin has installed photovoltaic panels on the roofs of its buses. The solar panels turn solar energy into electric energy used to power the buses' electric equipment. This decreases the load on the buses' alternators, leading to lower fuel consumption and bringing both economic and ecological benefits. A DC-DC boost converter is selected as the power conditioning unit to coordinate the operating point of the system. In addition to the conversion efficiency of a photovoltaic panel, the maximum power point tracking (MPPT) method also plays a major role in harvesting the most energy from the sun. The MPPT unit on a moving vehicle must keep tracking accuracy high in order to compensate for rapid irradiation changes caused by the dynamic motion of the vehicle. Maximum power point tracking controllers should be used to increase the efficiency and power output of solar panels under changing environmental factors. Several control algorithms for maximum power point tracking have been developed in the literature; however, the energy performance of MPPT algorithms has not been clarified for vehicle applications, where environmental factors change rapidly. In this study, an adaptive MPPT algorithm is examined under real ambient conditions. PV modules are mounted on a moving city bus designed to test solar systems on a moving vehicle, and several problems of a PV system associated with a moving vehicle are addressed. The proposed algorithm uses a scanning technique to determine the maximum power delivering capacity of the panel at a given operating condition and controls the PV panel accordingly.
The aim of the control algorithm was to match the impedance of the PV modules by controlling the duty cycle of the internal switch, regardless of changes in the parameters of the controlled object and its environment. The presented algorithm was capable of reaching this aim. The structure of the adaptive controller was deliberately simplified: if such a simple controller, armed only with an ability to learn, achieves the control aim, a more complex algorithm can only improve the result. The presented adaptive control system is a general solution and can be used for other types of PV systems of both high and low power. Experimental results obtained from a comparison of the algorithms over a motion loop are presented and discussed, including results for fast changes in irradiation and partial shading conditions. The results obtained clearly show that the proposed method is simple to implement, with minimum tracking time and high tracking efficiency, proving it superior to conventional approaches. This work has been financed by the Polish National Centre for Research and Development, PBS, under Grant Agreement No. PBS 2/A6/16/2013.
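The scanning technique described above can be sketched as a periodic sweep of the converter duty cycle, keeping the operating point that delivered the most power. The PV power curve below is a made-up single-peak stand-in for a real panel, not the model used in the study:

```python
import numpy as np

def pv_power(duty, irradiance):
    """Hypothetical PV power (W) as a function of converter duty cycle.

    A smooth single-peak curve stands in for a real panel: the maximum
    power point shifts with irradiance, as it does on a moving bus.
    """
    d_mpp = 0.35 + 0.3 * irradiance          # MPP location drifts with sun
    return 250.0 * irradiance * np.exp(-((duty - d_mpp) / 0.18) ** 2)

def scan_mppt(irradiance, n_steps=50):
    """Scanning MPPT: sweep the duty cycle, keep the best operating point."""
    duties = np.linspace(0.05, 0.95, n_steps)
    powers = pv_power(duties, irradiance)
    best = np.argmax(powers)
    return duties[best], powers[best]

# Re-scan whenever irradiance changes abruptly (e.g. driving past a building).
for g in (1.0, 0.4, 0.8):
    d, p = scan_mppt(g)
    print(f"irradiance {g:.1f}: duty {d:.2f}, power {p:.1f} W")
```

An adaptive controller would additionally learn when a re-scan is worthwhile, trading scan time against the energy lost by operating off the peak.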

Keywords: adaptive control, photovoltaic energy, city bus electric load, DC-DC converter

Procedia PDF Downloads 216
5421 Prediction of Physical Properties and Sound Absorption Performance of Automotive Interior Materials

Authors: Un-Hwan Park, Jun-Hyeok Heo, In-Sung Lee, Seong-Jin Cho, Tae-Hyeon Oh, Dae-Kyu Park

Abstract:

The sound absorption coefficient is considered important at the design stage because noise affects the perceived quality of a car. In practice, it is tuned through extensive field experiments, because predicting it for multi-layer materials is unreliable. In this paper, we present the design of sound absorption for automotive interior materials with multiple layers, using software that estimates the sound absorption coefficient in a reverberation chamber. We also introduce a method, based on an inverse algorithm, for estimating the physical properties required to predict the sound absorption coefficient of multi-layer car interior materials. This is very economical, as it yields information about physical properties without expensive equipment. A correlation test is carried out to ensure the reliability of the estimates, using sound absorption coefficients measured in the reverberation chamber as reference data. In this way, automotive interior materials can be designed economically and efficiently, and design optimization of the sound absorption coefficient also becomes easy to implement.

Keywords: sound absorption coefficient, optimization design, inverse algorithm, automotive interior material, multiple layers nonwoven, scaled reverberation chamber, sound impedance tubes

Procedia PDF Downloads 311
5420 Stray Light Reduction Methodology by a Sinusoidal Light Modulation and Three-Parameter Sine Curve Fitting Algorithm for a Reflectance Spectrometer

Authors: Hung Chih Hsieh, Cheng Hao Chang, Yun Hsiang Chang, Yu Lin Chang

Abstract:

In spectrometer applications, stray light from the environment can significantly distort the measurement results. Hence, controlling stray light from both the environment and the instrument is critical for spectral reflectance measurement. In this paper, a simple and practical method has been developed to correct a spectrometer's response for measurement errors arising from environmental and instrumental stray light. A sinusoidally modulated light intensity signal was incident on a test sample, and the reflected light was collected by the spectrometer. Since the incident light was sinusoidally modulated, the reflected light was modulated at the same frequency. Using a three-parameter sine curve fitting algorithm, we can extract the primary reflectance signal from the total measured signal, which contains both the primary reflectance signal and the environmental stray light. The spectra extracted by the proposed method under extreme environmental stray light show 99.98% similarity to the spectra measured without environmental stray light. This result shows that reflectance spectra can be measured unaffected by environmental stray light.
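The three-parameter fit is linear when the modulation frequency is known: the measured signal is modeled as y ≈ A sin(ωt) + B cos(ωt) + C, the amplitude √(A² + B²) recovers the modulated reflectance component, and the offset C absorbs the unmodulated stray light. A minimal sketch with simulated data (the frequency, sampling, and noise levels are illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)
f_mod = 37.0                      # modulation frequency (Hz), known a priori
t = np.arange(0, 0.5, 1e-3)       # 0.5 s of samples at 1 kHz

# Simulated detector signal: modulated reflectance + constant stray light.
reflectance_amp = 0.82
stray = 0.35
y = reflectance_amp * np.sin(2 * np.pi * f_mod * t + 0.4) + stray
y += rng.normal(0, 0.01, t.size)

# Three-parameter sine fit: y ≈ A sin(wt) + B cos(wt) + C (linear in A, B, C).
w = 2 * np.pi * f_mod
M = np.column_stack([np.sin(w * t), np.cos(w * t), np.ones_like(t)])
(A, B, C), *_ = np.linalg.lstsq(M, y, rcond=None)

amplitude = np.hypot(A, B)        # modulated (true) reflectance component
print(f"recovered amplitude {amplitude:.3f}, stray-light offset {C:.3f}")
```

Repeating this fit at every wavelength channel yields the stray-light-corrected reflectance spectrum.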

Keywords: spectrometer, stray light, three-parameter sine curve fitting, spectra extraction

Procedia PDF Downloads 255
5419 Key Transfer Protocol Based on Non-invertible Numbers

Authors: Luis A. Lizama-Perez, Manuel J. Linares, Mauricio Lopez

Abstract:

We introduce a method to perform remote user authentication based on what we call non-invertible cryptography. It exploits the fact that the product of an invertible integer and a non-invertible integer in a ring Zn is a non-invertible integer, making it infeasible to recover the factors. The protocol requires the smallest key size when compared with the main public-key algorithms such as Diffie-Hellman, Rivest-Shamir-Adleman, or Elliptic Curve Cryptography. Since we found that the only opportunity for an eavesdropper is to mount an exhaustive search on the keys, the protocol appears to be post-quantum secure.
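The algebraic fact the scheme rests on is easy to check numerically: an element of Zn is invertible exactly when it is coprime to n, and multiplying an invertible element by a non-invertible one preserves the common factor with n. A small sketch with illustrative numbers (not the parameters of the protocol):

```python
from math import gcd

n = 2021 * 2027           # hypothetical ring modulus for Zn (2021 = 43 * 47)
u = 1234567 % n           # invertible element: gcd(u, n) == 1
v = 2021 * 15 % n         # non-invertible element: shares the factor 2021 with n

assert gcd(u, n) == 1 and gcd(v, n) > 1

# The product of an invertible and a non-invertible element stays
# non-invertible: gcd(u*v, n) inherits the common factor of v and n.
p = (u * v) % n
print(gcd(p, n))   # > 1, so p has no inverse in Zn
```

Recovering v from p would require factoring out the hidden common divisor, which is the hardness assumption the protocol leans on.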

Keywords: invertible, non-invertible, ring, key transfer

Procedia PDF Downloads 183
5418 Continuous Differential Evolution Based Parameter Estimation Framework for Signal Models

Authors: Ammara Mehmood, Aneela Zameer, Muhammad Asif Zahoor Raja, Muhammad Faisal Fateh

Abstract:

In this work, the strength of a bio-inspired computational intelligence technique is exploited for parameter estimation of periodic signals using Continuous Differential Evolution (CDE), with an error function defined in the mean-square sense. The multidimensional and nonlinear nature of the problem arising in sinusoidal signal models with noise makes it a challenging optimization task, which is addressed through the robustness and effectiveness of CDE to ensure convergence and avoid trapping in local minima. In the proposed scheme of Continuous Differential Evolution based Signal Parameter Estimation (CDESPE), the unknown adjustable weights of the signal system identification model are optimized using the CDE algorithm. The performance of the CDESPE model is validated through various statistics-based performance indices over a sufficiently large number of runs, in terms of estimation error, mean squared error, and Theil's inequality coefficient. The efficacy of CDESPE is examined by comparison with the actual system parameters, with Genetic Algorithm based outcomes, and with various deterministic approaches at different signal-to-noise ratio (SNR) levels.
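A compact sketch of the approach: a DE/rand/1/bin loop minimizes the mean-square error between a noisy sinusoid and the model a·sin(2πft + φ). The population size, bounds, and control parameters F and CR below are illustrative, not the settings of the paper:

```python
import numpy as np

rng = np.random.default_rng(2)
t = np.linspace(0, 1, 200)
true = np.array([1.5, 4.0, 0.7])                # amplitude, frequency (Hz), phase
signal = true[0] * np.sin(2 * np.pi * true[1] * t + true[2])
noisy = signal + rng.normal(0, 0.05, t.size)    # noisy observation

def mse(params):
    """Mean-square error between the model and the observed signal."""
    a, f, ph = params
    return np.mean((noisy - a * np.sin(2 * np.pi * f * t + ph)) ** 2)

# Minimal continuous DE (DE/rand/1/bin) over illustrative parameter bounds.
lo, hi = np.array([0.1, 0.5, -np.pi]), np.array([3.0, 8.0, np.pi])
pop = rng.uniform(lo, hi, (40, 3))
cost = np.array([mse(p) for p in pop])
F, CR = 0.7, 0.9
for _ in range(300):
    for i in range(len(pop)):
        a_, b_, c_ = pop[rng.choice(len(pop), 3, replace=False)]
        mutant = np.clip(a_ + F * (b_ - c_), lo, hi)        # mutation
        trial = np.where(rng.random(3) < CR, mutant, pop[i])  # crossover
        c_trial = mse(trial)
        if c_trial < cost[i]:                               # greedy selection
            pop[i], cost[i] = trial, c_trial

best = pop[np.argmin(cost)]
print("estimated [amp, freq, phase]:", np.round(best, 3))
```

The best member converges near the true parameters; repeated runs at different noise levels give the SNR-dependent statistics the abstract reports.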

Keywords: parameter estimation, bio-inspired computing, continuous differential evolution (CDE), periodic signals

Procedia PDF Downloads 304
5417 Measuring Corporate Brand Loyalties in Business Markets: A Case for Caution

Authors: Niklas Bondesson

Abstract:

Purpose: This paper examines how different facets of attitudinal brand loyalty are determined by different brand image elements in business markets. Design/Methodology/Approach: Statistical analysis is applied to data from a web survey covering 226 professional packaging buyers in eight countries. Findings: The results reveal that different brand loyalty facets have different antecedents. Affective brand loyalties (or loyalty 'feelings') are mainly driven by customer associations with service relationships, whereas customers' loyalty intentions (to purchase and recommend a brand) are triggered by associations with the general reputation of the company. The findings also indicate that willingness to pay a price premium is a distinct form of loyalty, with unique determinants. Research implications: Theoretically, the paper suggests that corporate B2B brand loyalty needs to be conceptualised with more refinement than in extant B2B branding work. Methodologically, the paper highlights that single-item approaches can be fruitful when measuring B2B brand loyalty, and that multi-item scales can conceal important nuances in understanding why customers are loyal. Practical implications: A loyalty 'silver metric' is an attractive idea, but this study indicates that firms that rely too heavily on one single type of brand loyalty risk missing important building blocks. Originality/Value/Contribution: The major contribution is a more multi-faceted conceptualisation, and measurement, of corporate B2B brand loyalty and its brand image determinants than extant work has provided.

Keywords: brand equity, business-to-business branding, industrial marketing, buying behaviour

Procedia PDF Downloads 416
5416 Multi-Agent Searching Adaptation Using Levy Flight and Inferential Reasoning

Authors: Sagir M. Yusuf, Chris Baber

Abstract:

In this paper, we describe how to achieve knowledge understanding and prediction (Situation Awareness, SA) for multiple agents conducting a search activity, using Bayesian inferential reasoning and learning. A Bayesian belief network was used to monitor the agents' knowledge about their environment, and cases were recorded for network training using the expectation-maximisation or gradient descent algorithm. The trained network is then used for decision making and environmental situation prediction. Forest-fire search by multiple UAVs was the use case: UAVs are tasked to explore a forest and find a fire so that the fire wardens can take urgent action. The paper focuses on two problems: (i) an effective path-planning strategy for the agents and (ii) knowledge understanding and prediction (SA). The path-planning problem is addressed by imitating the animal mode of foraging, using a Lévy distribution augmented with Bayesian reasoning, and is fully described in this paper. Results show that the Lévy flight strategy performs better than previous fixed-pattern approaches (e.g., parallel sweeps) in terms of energy and time utilisation. We also introduce a waypoint assessment strategy, called k-previous waypoints assessment, which improves on the ordinary Lévy flight by saving agents' resources and mission time through redundant-search avoidance. The agents (UAVs) report their mission knowledge to a central server for interpretation and prediction purposes. Bayesian reasoning and learning were used for the SA, and the results demonstrate effectiveness in different environment scenarios in terms of prediction and effective knowledge representation. The prediction accuracy was measured using the learning error rate, logarithmic loss, and Brier score, and the results indicate that even a small amount of agent mission data can be used for prediction within the same or a different environment. Finally, we describe a situation-based knowledge visualization and prediction technique for heterogeneous multi-UAV missions. While this paper establishes the linkage of Bayesian reasoning and learning with SA and an effective searching strategy, future work will focus on simplifying the architecture.
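A Lévy-flight search trajectory can be generated by drawing heavy-tailed step lengths and uniform random headings. The sketch below samples a Pareto (power-law) tail by inverse-CDF sampling; the exponent and minimum step are illustrative choices, not the values tuned in the study:

```python
import numpy as np

rng = np.random.default_rng(3)

def levy_steps(n, alpha=1.5, step_min=1.0):
    """Draw Lévy-flight step lengths from a power-law (Pareto) tail.

    P(L > l) ~ l**(-alpha); inverse-CDF sampling of a Pareto distribution.
    alpha in (1, 3) gives the heavy-tailed mix of many short moves and
    occasional long relocations that outperforms fixed-pattern sweeps.
    """
    u = rng.random(n)
    return step_min * u ** (-1.0 / alpha)

def levy_walk(n_steps):
    """2-D search trajectory: Lévy step lengths with uniform random headings."""
    step_lengths = levy_steps(n_steps)
    angles = rng.uniform(0, 2 * np.pi, n_steps)
    dxy = np.column_stack([step_lengths * np.cos(angles),
                           step_lengths * np.sin(angles)])
    return np.vstack([[0.0, 0.0], np.cumsum(dxy, axis=0)])

path = levy_walk(500)
lengths = levy_steps(10_000)
print(f"median step {np.median(lengths):.2f}, max step {lengths.max():.1f}")
```

The occasional very long steps are what let a Lévy searcher escape already-covered regions, which a parallel-sweep pattern cannot do without a full replan.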

Keywords: Levy flight, distributed constraint optimization problem, multi-agent system, multi-robot coordination, autonomous system, swarm intelligence

Procedia PDF Downloads 147
5415 A Process of Forming a Single Competitive Factor in the Digital Camera Industry

Authors: Kiyohiro Yamazaki

Abstract:

This paper considers the process by which a single competitive factor forms in the digital camera industry, from the viewpoint of the product platform. To make product development easier for companies and to increase product introduction ratios, development efforts concentrate on improving and strengthening certain product attributes, and in this process a product platform is continuously formed. The formation of this product platform raises the product development efficiency of individual companies but, as a trade-off, causes the unification of competitive factors across the whole industry. This research analyzes product specification data collected from the web pages of digital camera companies. Specifically, it covers all product specifications released in Japan from 1995 to 2003, analyzes the composition of image sensors and optical lenses, identifies product platforms shared by multiple products, and discusses their application. The analysis shows that platform formation originated in the development of standard products for the major market segments. Every major company has built product platforms around image sensors and optical lenses, and as a result, competitive factors became unified across the entire industry through platform formation. In other words, platform formation improved the product development efficiency of individual firms, but it also caused the industry's competitive factors to become unified.

Keywords: digital camera industry, product evolution trajectory, product platform, unification of competitive factors

Procedia PDF Downloads 161
5414 The Data-Driven Localized Wave Solution of the Fokas-Lenells Equation using PINN

Authors: Gautam Kumar Saharia, Sagardeep Talukdar, Riki Dutta, Sudipta Nandy

Abstract:

The physics-informed neural network (PINN) method opens up an approach for numerically solving nonlinear partial differential equations, leveraging the fast computation and high precision of modern computing systems. We construct the PINN based on the universal approximation theorem, apply the initial-boundary value data and residual collocation points to weakly impose the initial and boundary conditions on the neural network, and choose the adaptive moment estimation (ADAM) and limited-memory Broyden-Fletcher-Goldfarb-Shanno (L-BFGS) optimization algorithms to optimize the learnable parameters of the network. Next, we improve the PINN with a weighted loss function to obtain both the bright and dark soliton solutions of the Fokas-Lenells equation (FLE). We find that the proposed scheme of adjustable weight coefficients gives the PINN a better convergence rate and generalizability than the basic PINN algorithm. We believe that the PINN approach to solving the partial differential equations that appear in nonlinear optics will be useful for studying various optical phenomena.
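The weighted-loss idea can be shown in isolation: the composite training loss is a weighted sum of the mean-square PDE residual and the initial/boundary errors, and adjusting the weights changes how strongly each constraint steers training. The weights and mock residuals below are illustrative, not the coefficients used for the FLE:

```python
import numpy as np

def weighted_pinn_loss(pde_residual, ic_error, bc_error,
                       w_pde=1.0, w_ic=10.0, w_bc=10.0):
    """Weighted composite loss of the kind used to steer PINN training.

    Up-weighting the initial/boundary terms (w_ic, w_bc) keeps the network
    anchored to the data while the PDE residual shapes the interior; the
    weights here are illustrative, not the values used in the paper.
    """
    mse = lambda e: np.mean(np.square(e))
    return (w_pde * mse(pde_residual)
            + w_ic * mse(ic_error)
            + w_bc * mse(bc_error))

# Mock residuals, e.g. from evaluating a partially trained network on
# 1000 collocation points and 100 initial/boundary points.
rng = np.random.default_rng(4)
loss = weighted_pinn_loss(rng.normal(0, 0.1, 1000),
                          rng.normal(0, 0.02, 100),
                          rng.normal(0, 0.02, 100))
print(f"composite loss: {loss:.4f}")
```

In a full PINN the residual arrays come from automatic differentiation of the network output through the PDE operator, and ADAM followed by L-BFGS minimizes this scalar.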

Keywords: deep learning, optical soliton, neural network, partial differential equation

Procedia PDF Downloads 133
5413 Adaptive Swarm Balancing Algorithms for Rare-Event Prediction in Imbalanced Healthcare Data

Authors: Jinyan Li, Simon Fong, Raymond Wong, Mohammed Sabah, Fiaidhi Jinan

Abstract:

Clinical data analysis and forecasting have made great contributions to disease control, prevention, and detection. However, such data usually suffer from highly imbalanced class distributions. In this paper, we target binary imbalanced datasets, in which the positive samples constitute only a minority. We investigate two different meta-heuristic algorithms, particle swarm optimization and the bat-inspired algorithm, and combine each of them with the synthetic minority over-sampling technique (SMOTE) for processing the datasets. One approach processes the full dataset as a whole; the other splits the dataset up and adaptively processes it one segment at a time. The experimental results reveal that while the performance improvements obtained by the former method do not scale to larger data sizes, the latter, which we call Adaptive Swarm Balancing Algorithms, leads to significant efficiency and effectiveness improvements on large datasets and is more consistent with the characteristics of typical large imbalanced medical datasets. We further use the meta-heuristic algorithms to optimize two key parameters of SMOTE, leading to more credible classifier performance and shorter running times compared with the brute-force method.
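The SMOTE step at the heart of both approaches generates synthetic minority samples by interpolating between a minority point and one of its k nearest minority-class neighbours. A minimal sketch on toy 2-D data (k and the number of new samples are the two knobs the meta-heuristics tune):

```python
import numpy as np

rng = np.random.default_rng(5)

def smote(X_min, n_new, k=5):
    """Basic SMOTE: interpolate each seed point towards one of its k
    nearest minority-class neighbours to create a synthetic sample."""
    n = len(X_min)
    # pairwise distances within the minority class
    d = np.linalg.norm(X_min[:, None, :] - X_min[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)
    neighbours = np.argsort(d, axis=1)[:, :k]
    seeds = rng.integers(0, n, n_new)
    picks = neighbours[seeds, rng.integers(0, k, n_new)]
    gap = rng.random((n_new, 1))                 # interpolation fraction
    return X_min[seeds] + gap * (X_min[picks] - X_min[seeds])

# Imbalanced toy set: 20 minority points in 2-D, upsampled by 80 synthetics.
X_min = rng.normal([2.0, 2.0], 0.3, (20, 2))
X_syn = smote(X_min, n_new=80, k=5)
print(X_syn.shape)  # (80, 2)
```

The adaptive variant in the abstract applies this per data segment and lets the swarm algorithm choose k and the over-sampling amount for each segment.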

Keywords: imbalanced dataset, meta-heuristic algorithm, SMOTE, big data

Procedia PDF Downloads 446
5412 An Insight into the Probabilistic Assessment of Reserves in Conventional Reservoirs

Authors: Sai Sudarshan, Harsh Vyas, Riddhiman Sherlekar

Abstract:

The oil and gas industry has been reluctant to adopt a stochastic definition of reserves. Nevertheless, Monte Carlo simulation methods have gained acceptance among engineers, geoscientists, and other professionals who want to evaluate prospects or otherwise analyze problems that involve uncertainty. One of the common applications of Monte Carlo simulation is the estimation of recoverable hydrocarbons from a reservoir. Monte Carlo simulation uses random samples of parameters or inputs to explore the behavior of a complex system or process, and finds application whenever one needs to make an estimate, forecast, or decision under significant uncertainty. First, the project performs Monte Carlo simulation on a given data set using the U.S. Department of Energy's MonteCarlo software, a freeware E&P tool. Further, a simulation algorithm was developed in MATLAB; the program performs the simulation by prompting the user for the input distributions and the parameters associated with each distribution (mean, standard deviation, minimum, maximum, most likely, etc.), as well as for the desired probability at which reserves are to be calculated. The algorithm developed and tested in MATLAB was then implemented in Python, where existing statistics and graph-plotting libraries were imported to generate better output, and a simple graphical user interface was written with PyQt Designer. The plotted results are validated against the results available from the U.S. DOE MonteCarlo software.
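The core of such a reserves simulation is sampling volumetric inputs from user-specified distributions and reading percentiles (P90/P50/P10) off the resulting distribution of recoverable volume. The distributions and constants below are illustrative stand-ins, not the data set used in the project:

```python
import numpy as np

rng = np.random.default_rng(6)
n = 100_000                                   # Monte Carlo trials

# Illustrative input distributions (not field data).
area = rng.triangular(800, 1200, 2000, n)     # acres
thickness = rng.triangular(20, 35, 60, n)     # ft
porosity = rng.normal(0.18, 0.02, n).clip(0.05, 0.30)
sw = rng.normal(0.35, 0.05, n).clip(0.1, 0.7)  # water saturation
rf = rng.uniform(0.25, 0.40, n)               # recovery factor
bo = 1.2                                      # formation volume factor, rb/stb

# Volumetric OOIP (stb) and recoverable reserves; 7758 converts acre-ft to bbl.
ooip = 7758 * area * thickness * porosity * (1 - sw) / bo
reserves = ooip * rf

p90, p50, p10 = np.percentile(reserves, [10, 50, 90])
print(f"P90 {p90/1e6:.1f}  P50 {p50/1e6:.1f}  P10 {p10/1e6:.1f}  MMstb")
```

The MATLAB/Python programs in the abstract generalize this by letting the user pick the distribution type and parameters for each input interactively.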

Keywords: simulation, probability, confidence interval, sensitivity analysis

Procedia PDF Downloads 387
5411 Designing and Implementing a Tourist-Guide Web Service Based on Volunteer Geographic Information Using Open-Source Technologies

Authors: Javad Sadidi, Ehsan Babaei, Hani Rezayan

Abstract:

The advent of Web 2.0 makes it possible to scale down the costs of data collection and mapping, especially when the process is carried out by volunteers. Every volunteer can be thought of as a free and ubiquitous sensor collecting spatial, descriptive, and multimedia data for tourist services. The lack of large-scale information, such as real-time climate and weather conditions, population density, and other related data, is an important challenge in developing countries, where tourists need such information to make the best decisions about the time and place of travel. The current research aims to design and implement a spatiotemporal web map service using volunteer-submitted data. The service acts as a tourist guide with which tourists can search places of interest for their requested travel time. The service was designed with a three-tier architecture comprising data, logical processing, and presentation tiers. For the implementation, open-source software, client- and server-side programming technologies (OpenLayers2, AJAX, and PHP), GeoServer as the map server, and the Web Feature Service (WFS) standard were used. The result is two distinct browser-based services: one for submitting spatial, descriptive, and multimedia volunteer data, and another for tourists and local officials. Local officials confirm the veracity of the volunteer-submitted information. In the tourist interface, a spatiotemporal search engine enables tourists to find a tourist place by province, city, and location at a specific time of interest. Implementing the tourist-guide service with this methodology means that current tourists participate in a free data collection and sharing process for future tourists, data are shared and accessed in real time by all, blind selection of a travel destination is avoided, and, significantly, the cost of providing such services decreases.

Keywords: VGI, tourism, spatiotemporal, browser-based, web mapping

Procedia PDF Downloads 102
5410 Application of Deep Learning in Top Pair and Single Top Quark Production at the Large Hadron Collider

Authors: Ijaz Ahmed, Anwar Zada, Muhammad Waqas, M. U. Ashraf

Abstract:

We demonstrate the performance of a very efficient tagger, based on deep neural network algorithms, applied to hadronically decaying top quark pairs as signal and compared against QCD multi-jet background events. A significant enhancement of performance for boosted top quark events is observed with our limited computing resources. We also compare modern machine learning approaches and perform a multivariate analysis of boosted top-pair as well as single top quark production through the weak interaction at a √s = 14 TeV proton-proton collider. The most relevant known background processes are incorporated. Using the Boosted Decision Tree (BDT), likelihood, and Multilayer Perceptron (MLP) techniques, the analysis is trained and its performance compared with the conventional cut-based counting approach.

Keywords: top tagger, multivariate, deep learning, LHC, single top

Procedia PDF Downloads 114
5409 Automatic Generation of CNC Code for Milling Machines

Authors: Chalakorn Chitsaart, Suchada Rianmora, Mann Rattana-Areeyagon, Wutichai Namjaiprasert

Abstract:

G-code is the main mechanism in a computer numerical control (CNC) machine for controlling the tool paths and generating the profile of an object's features. To obtain high accuracy of the surface finish, non-stop operation of the CNC machine is required. Recently, for new product design, a strategy has been introduced that favors changes with low business impact that do not consume a lot of resources. The cost and time of designing minor changes can be reduced, since the traditional geometric details of the existing models are reused. To support this strategy as an alternative channel for machining operations, this research proposes automatic code generation for CNC milling. Using this technique, the manufacturer can easily change the size and geometric shape of the product during the operation, reducing the time spent setting up and running the machine. The algorithm, implemented on the MATLAB platform, works by analyzing and evaluating the geometric information of the part, and codes are created rapidly to control the operations of the machine. Compared with the codes obtained from CAM software, the developed algorithm can quickly generate and simulate the cutting profile of the part.
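The geometry-to-code step can be sketched for the simplest case, a rectangular profile: given the feature's size and position, the generator emits the standard preamble, a plunge, and one G1 move per edge. Coordinates, feeds, and depths below are illustrative, and the sketch is in Python rather than the MATLAB implementation of the paper:

```python
def rectangle_gcode(x0, y0, width, height, depth, feed=200.0, safe_z=5.0):
    """Emit G-code for milling a rectangular profile (a minimal sketch of
    the geometry-to-code step; coordinates and feeds are illustrative)."""
    corners = [(x0, y0), (x0 + width, y0),
               (x0 + width, y0 + height), (x0, y0 + height), (x0, y0)]
    lines = [
        "G21 ; units: mm",
        "G90 ; absolute coordinates",
        f"G0 Z{safe_z:.3f}",
        f"G0 X{corners[0][0]:.3f} Y{corners[0][1]:.3f}",
        f"G1 Z{-depth:.3f} F{feed / 2:.0f} ; plunge",
    ]
    lines += [f"G1 X{x:.3f} Y{y:.3f} F{feed:.0f}" for x, y in corners[1:]]
    lines += [f"G0 Z{safe_z:.3f}", "M2 ; end of program"]
    return "\n".join(lines)

# Changing the size of the feature only means regenerating the code:
print(rectangle_gcode(x0=10, y0=10, width=40, height=25, depth=2))
```

This illustrates why resizing or reshaping a feature becomes a parameter change rather than a manual CAM re-export.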

Keywords: geometric shapes, milling operation, minor changes, CNC Machine, G-code, cutting parameters

Procedia PDF Downloads 354