Search results for: Large Eddy Simulation

294 Exploring Influence Range of Tainan City Using Electronic Toll Collection Big Data

Authors: Chen Chou, Feng-Tyan Lin

Abstract:

Big Data has attracted a lot of attention in many fields for analyzing research issues based on large volumes of data. Electronic Toll Collection (ETC) is one of the Intelligent Transportation System (ITS) applications in Taiwan, used to record the starting point, end point, distance, and travel time of vehicles on the national freeway. This study, taking advantage of ETC big data combined with urban planning theory, attempts to explore various phenomena of inter-city transportation activities. ETC data, part of the government's open data, are voluminous, complete, and frequently updated. One may recall that living areas have been delimited by location, population, area, and subjective consciousness. However, these factors cannot appropriately reflect people's daily movement paths. In this study, the concept of "Living Area" is replaced by "Influence Range" to capture how it varies with time and with the purposes of activities. This study uses data mining with Python and Excel, and visualizes trip counts with GIS, to explore the influence range of Tainan City and the purposes of trips, and to discuss how living areas are currently delimited. It sets up a dialogue between the concepts of "Central Place Theory" and "Living Area", presents a new point of view, and integrates the application of big data, urban planning, and transportation. The findings will be valuable for resource allocation and land apportionment in spatial planning.
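
As an illustration of the trip aggregation this kind of ETC data mining involves, the minimal Python sketch below groups toll records by origin, destination, and hour of day; the column names (entry_station, exit_station, entry_time) are hypothetical placeholders, not the actual ETC schema.

```python
import pandas as pd

# Hypothetical ETC records: each row is one freeway trip.
trips = pd.DataFrame({
    "entry_station": ["Tainan", "Tainan", "Rende", "Tainan"],
    "exit_station":  ["Kaohsiung", "Chiayi", "Tainan", "Rende"],
    "entry_time":    pd.to_datetime(["2017-03-01 07:10", "2017-03-01 08:40",
                                     "2017-03-01 17:55", "2017-03-02 07:05"]),
})

# Count trips per origin-destination pair and hour of day:
# the basic table behind an "influence range" map.
trips["hour"] = trips["entry_time"].dt.hour
od_counts = (trips
             .groupby(["entry_station", "exit_station", "hour"])
             .size()
             .rename("trip_count")
             .reset_index())
print(od_counts)
```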

Keywords: Big Data, ITS, influence range, living area, central place theory, visualization.

293 The Use of the Limit Cycles of Dynamic Systems for Formation of Program Trajectories of Points Feet of the Anthropomorphous Robot

Authors: A. S. Gorobtsov, A. S. Polyanina, A. E. Andreev

Abstract:

The movement of the foot points of an anthropomorphous robot in space occurs along a stable trajectory of known form. The large number of modifications to the control methods of biped robots indicates the fundamental complexity of the problem of stabilizing the program trajectory and, consequently, of controlling deviations from this trajectory. Existing gait generators use piecewise interpolation of program trajectories, which leads to jumps in acceleration at the boundaries of the segments. Another interpolation can be realized using differential equations with fractional derivatives. In this work, an approach to the synthesis of program-trajectory generators is considered. The resulting system of nonlinear differential equations describes a smooth trajectory of movement that contains rectilinear segments. The method is based on the theory of asymptotic stability of invariant sets. The stability of such systems is investigated in the region where oscillatory processes are localized; the boundary of this region is a bounded closed surface. In the corresponding subspaces of the oscillatory circuits, the resulting stable limit cycles are curves containing rectilinear segments. The problem is solved by synthesizing a set of continuous smooth feedback controls. The required geometry of the closed trajectories of movement is obtained by introducing high-order nonlinearities into the control of the stabilization systems. The proposed method was used to generate movement trajectories for the foot points of an anthropomorphous robot. The synthesis of the robot's program movement was carried out by means of the inverse method.
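
A textbook illustration of an asymptotically stable limit cycle (a standard example, not the authors' generator, which additionally shapes rectilinear segments into the cycle) is the planar system

\[
\dot{x} = -y + x\,(1 - x^{2} - y^{2}), \qquad \dot{y} = x + y\,(1 - x^{2} - y^{2}),
\]

which in polar coordinates reduces to \( \dot{r} = r(1 - r^{2}) \), \( \dot{\varphi} = 1 \); the unit circle \( r = 1 \) is therefore an attracting invariant set toward which all nonzero trajectories converge.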

Keywords: Control, limit cycles, robot, stability.

292 A Spatial Hypergraph Based Semi-Supervised Band Selection Method for Hyperspectral Imagery Semantic Interpretation

Authors: Akrem Sellami, Imed Riadh Farah

Abstract:

Hyperspectral imagery (HSI) typically provides a wealth of information captured over a wide range of the electromagnetic spectrum for each pixel in the image. Hence, a pixel in HSI is a high-dimensional vector of intensities with a large spectral range and a high spectral resolution, and semantic interpretation is a challenging task in HSI analysis. In this paper, we focus on object classification as a form of HSI semantic interpretation. HSI classification still faces several issues, among them the spatial variability of spectral signatures, the high number of spectral bands, and the high cost of labeling true samples. The combination of a high number of spectral bands and a low number of training samples poses the problem of the curse of dimensionality. To resolve this problem, we introduce a dimensionality reduction process aimed at improving HSI classification. The presented approach is a semi-supervised band selection method based on a spatial hypergraph embedding model that represents higher-order relationships, with different weights assigned to the spatial neighbors of each centroid pixel. This semi-supervised band selection is developed to select useful bands for object classification. The approach is evaluated on AVIRIS and ROSIS HSIs and compared to other dimensionality reduction methods. The experimental results demonstrate the efficacy of our approach compared to many existing dimensionality reduction methods for HSI classification.

Keywords: Hyperspectral image, spatial hypergraph, dimensionality reduction, semantic interpretation, band selection, feature extraction.

291 Simplified Stress Gradient Method for Stress-Intensity Factor Determination

Authors: Jeries J. Abou-Hanna

Abstract:

Several techniques exist for determining stress-intensity factors in linear elastic fracture mechanics analysis. These techniques are based on analytical, numerical, and empirical approaches that are well documented in the literature and in engineering handbooks. However, not all techniques share the same merit. Overly conservative results, numerical methods that require extensive computational effort, and approaches requiring copious user parameters all hinder practicing engineers from efficiently evaluating stress-intensity factors. This paper investigates the prospects of reducing the complexity and the number of required variables in determining stress-intensity factors through the use of the stress gradient and a weighting function. The heart of this work resides in the understanding that fracture emanating from stress concentration locations cannot be explained by a single maximum-stress approach, but requires the use of a critical volume in which the crack exists. To assess the effectiveness of this technique, the study investigated components with different notch geometries and varying levels of stress gradient. Two forms of weighting function were employed to determine stress-intensity factors, and the results were compared to exact analytical methods. The results indicated that the "exponential" weighting function was superior to the "absolute" weighting function. An error band of +/- 10% was met for cases ranging from the steep stress gradient of a sharp V-notch to the milder stress transitions of a large circular notch. The proposed method has been shown to be a worthwhile consideration.
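
For context, the classical weight-function form of the mode I stress-intensity factor, which the proposed stress-gradient approach seeks to simplify, can be written as

\[
K_{I} = \int_{0}^{a} \sigma(x)\, h(x, a)\, \mathrm{d}x,
\]

where \( \sigma(x) \) is the stress distribution on the prospective crack plane of the uncracked body, \( h(x, a) \) is the weight function, and \( a \) is the crack length; this is a general textbook relation, not the paper's specific "exponential" or "absolute" weighting.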

Keywords: Fracture mechanics, finite element method, stress intensity factor, stress gradient.

290 Characterisation of Fractions Extracted from Sorghum Byproducts

Authors: Prima Luna, Afroditi Chatzifragkou, Dimitris Charalampopoulos

Abstract:

Sorghum byproducts, namely bran, stalk, and panicle, are examples of lignocellulosic biomass. These raw materials contain large amounts of polysaccharides, in particular hemicelluloses, celluloses, and lignins, which, if efficiently extracted, can be utilised for the development of a range of added-value products with potential applications in the agriculture and food-packaging sectors. The aim of this study was to characterise fractions extracted from sorghum bran and stalk with regard to the physicochemical properties that could determine their applicability as food-packaging materials. A sequential alkaline extraction was applied for the isolation of the cellulosic, hemicellulosic, and lignin fractions from sorghum stalk and bran. Lignin content, phenolic content, and antioxidant capacity were also investigated in the case of the lignin fraction. Thermal analysis using differential scanning calorimetry (DSC) and X-ray diffraction (XRD) revealed that the glass transition temperature (Tg) of the cellulose fraction of the stalk was ~78.33 °C, with an amorphous content of ~65% and a water content of ~5%. For hemicellulose, the Tg value of the stalk fraction was slightly lower than that of the bran, with an amorphous content of ~54% and a lower water content (~2%). Hemicelluloses generally showed lower thermal stability than cellulose, probably due to their lack of crystallinity. Additionally, the bran had a higher arabinose-to-xylose ratio (0.82) than the stalk, which indicates its low crystallinity. Furthermore, the lignin fraction had a Tg value of ~93 °C with an amorphous content of ~11%. The stalk-derived lignin fraction contained more phenolic compounds (mainly p-coumaric and ferulic acids) and had a higher lignin content and antioxidant capacity than the bran-derived lignin fraction.

Keywords: Alkaline extraction, bran, cellulose, hemicellulose, lignin, sorghum, stalk.

289 Simulation and Analysis of Passive Parameters of Building in eQuest: A Case Study in Istanbul, Turkey

Authors: Mahdiyeh Zafaranchi

Abstract:

With the rapid development of urbanization and the improvement of living standards around the world, the energy consumption and carbon emissions of the building sector are expected to increase in the near future; consequently, energy-saving issues have become more important to engineers. The building sector is already a major contributor to energy consumption and carbon emissions. The concept of the efficient building appeared in response to the need to reduce energy demand in this sector, with the main purpose of shifting from standard buildings to low-energy buildings. Although energy saving should occur at all stages of a building's life cycle (material production, construction, demolition), the main concept of the energy-efficient building is to save energy during the building's service life by using passive and active systems, without sacrificing comfort and quality. The main aim of this study is to investigate passive strategies (those that require no energy consumption or use renewable energy) to achieve energy-efficient buildings. Energy retrofit measures were explored in the eQuest software using a case study as a base model. The study investigates the influence of major factors such as the thermal transmittance (U-value) of materials and windows, shading devices, thermal insulation, the proportion of exposed envelope, the window-to-wall ratio, and the lighting system on the energy consumption of the building. The base model was located in Istanbul, Turkey. The impact of eight passive parameters on energy consumption was assessed. After analyzing the base model in eQuest, a final scenario with good energy performance was suggested. The results showed that decreasing the U-values of materials and windows and reducing the exposed envelope had a significant effect on energy consumption. Finally, annual savings of about 10.5% in electricity consumption and about 8.37% in gas consumption were achieved in the suggested model.

Keywords: Efficient building, electric and gas consumption, eQuest, passive parameters.

288 Analysis of Seismic Waves Generated by Blasting Operations and their Response on Buildings

Authors: S. Ziaran, M. Musil, M. Cekan, O. Chlebo

Abstract:

The paper analyzes the response of buildings and industrial structures to seismic waves (low-frequency mechanical vibration) generated by blasting operations. The principles of seismic analysis can be applied to different kinds of excitation, such as earthquakes, wind, explosions, random excitation from local transportation, periodic excitation from large rotating machines and/or machines with reciprocating motion, metal forming processes such as forging, shearing, and stamping, chemical reactions, construction and earth moving work, and other strong deterministic and random energy sources caused by human activities. The article deals with the response of a residential home to low-frequency seismic mechanical vibrations generated by nearby blasting operations. The goal was to determine the fundamental natural frequencies of the measured structure; it is important to determine the resonant frequencies in order to design suitable modal damping. The article also analyzes the package of seismic waves generated by blasting (primary P-waves and secondary S-waves) and investigates the transfer regions. For the detection of the seismic waves resulting from an explosion, the Fast Fourier Transform (FFT) and modal analysis are used in the frequency domain, and the signal is also acquired and analyzed in the time domain. In the conclusions, the measured seismic waves caused by blasting in a nearby quarry and their effect on a nearby structure (a house) are analyzed. The response of the house, including its fundamental natural frequency and possible fatigue damage, is also assessed.
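
The frequency-domain step described above can be sketched in Python as follows; the sampling rate and the synthetic two-component signal are illustrative assumptions, not the measured blast record.

```python
import numpy as np

fs = 1000.0                       # assumed sampling rate [Hz]
t = np.arange(0, 2.0, 1.0 / fs)   # 2 s record
# Synthetic ground-vibration signal: two low-frequency components plus noise.
x = 0.5 * np.sin(2 * np.pi * 8 * t) + 0.2 * np.sin(2 * np.pi * 25 * t)
x += 0.05 * np.random.randn(t.size)

# One-sided amplitude spectrum; peaks indicate dominant excitation frequencies
# to compare against the structure's fundamental natural frequencies.
spectrum = np.fft.rfft(x)
freqs = np.fft.rfftfreq(x.size, d=1.0 / fs)
amplitude = 2.0 * np.abs(spectrum) / x.size
print(freqs[np.argmax(amplitude[1:]) + 1], "Hz dominates")
```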

Keywords: Building structure, seismic waves, spectral analysis, structural response.

287 Performance Analysis of New Types of Reference Targets Based on Spaceborne and Airborne SAR Data

Authors: Y. S. Zhou, C. R. Li, L. L. Tang, C. X. Gao, D. J. Wang, Y. Y. Guo

Abstract:

The triangular trihedral corner reflector (CR) has been widely used as a point target for synthetic aperture radar (SAR) calibration and image quality assessment. The additional "tip" of the triangular plate does not contribute to the reflector's theoretical RCS, and if it interacts with a perfectly reflecting ground plane it yields an increase in RCS at the radar boresight and decreases the accuracy of SAR calibration and image quality assessment. To address this problem, two types of CRs were manufactured. One was the hexagonal trihedral CR, a self-illuminating CR with a relatively small plate edge length; a large edge length usually introduces unexpected edge diffraction errors. The other was a triangular trihedral CR with an extended bottom plate, which incorporates the effect of the "tip" into the total RCS. In order to assess the performance of the two new types of CRs, a flight campaign over the National Calibration and Validation Site for High Resolution Remote Sensors was carried out. Six hexagonal trihedral CRs and two bottom-extended trihedral CRs, as well as several traditional triangular trihedral CRs, were deployed. A KOMPSAT-5 X-band SAR image was acquired for the performance analysis of the hexagonal trihedral CRs, and C-band airborne SAR images were acquired for the performance analysis of the bottom-extended trihedral CRs. The analysis showed that the impulse response functions of both the hexagonal trihedral CRs and the bottom-extended trihedral CRs were much closer to the ideal sinc function than those of the traditional triangular trihedral CRs. The flight campaign results validated the advantages of the new types of CRs, which may be useful in future SAR calibration missions.
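
For reference, the peak radar cross-section of an ideal triangular trihedral corner reflector with inner edge length \( a \) at wavelength \( \lambda \) is commonly quoted as

\[
\sigma_{\max} \approx \frac{4\pi a^{4}}{3\lambda^{2}},
\]

a standard textbook value rather than a result taken from the paper.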

Keywords: Synthetic Aperture Radar, calibration, corner reflector, KOMPSAT-5.

286 Using Dynamic Glazing to Eliminate Mechanical Cooling in Multi-family Highrise Buildings

Authors: Ranojoy Dutta, Adam Barker

Abstract:

Multifamily residential buildings are increasingly being built with large glazed areas to provide tenants with greater daylight and outdoor views. However, traditional double-glazed window assemblies can lead to significant thermal discomfort from high radiant temperatures, as well as increased cooling energy use to address solar gains. Dynamic glazing provides an effective solution by actively controlling solar transmission to maintain indoor thermal comfort without compromising the visual connection to the outdoors. This study uses thermal simulations across three Canadian cities (Toronto, Vancouver, and Montreal) to verify whether dynamic glazing, along with operable windows and ceiling fans, can maintain the indoor operative temperature of a prototype southwest-facing high-rise apartment unit within the ASHRAE 55 adaptive comfort range for most of the year, without any mechanical cooling. Since this study proposes the use of natural ventilation for cooling and the typical building life cycle is 30-40 years, the typical weather files have been modified based on accepted global warming projections for increased air temperatures by 2050. Results for the prototype apartment confirm that thermal discomfort with dynamic glazing occurs for less than 0.7% of the year. In the baseline scenario with low-E glass, however, up to 7% of annual hours are uncomfortable despite natural ventilation through operable windows and improved air movement from ceiling fans.
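
A minimal sketch of the ASHRAE 55 adaptive comfort check used to count discomfort hours is given below; the hourly temperature pairs are placeholder values, while the adaptive comfort relation (comfort temperature 0.31·t_out + 17.8 °C with an 80% acceptability band of ±3.5 °C) follows the published standard.

```python
def comfort_band(t_out_running_mean):
    """ASHRAE 55 adaptive comfort temperature and its 80% acceptability limits (degC)."""
    t_comf = 0.31 * t_out_running_mean + 17.8
    return t_comf - 3.5, t_comf + 3.5

# Placeholder hourly data: (indoor operative temperature, prevailing mean outdoor temperature).
hours = [(26.1, 22.0), (29.4, 24.0), (24.8, 18.0)]

discomfort = 0
for t_op, t_out in hours:
    low, high = comfort_band(t_out)
    if not (low <= t_op <= high):
        discomfort += 1

print(f"{discomfort} of {len(hours)} hours fall outside the 80% acceptability band")
```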

Keywords: Electrochromic, operable windows, thermal comfort, natural ventilation, adaptive comfort.

285 A Hybrid Fuzzy AGC in a Competitive Electricity Environment

Authors: H. Shayeghi, A. Jalili

Abstract:

This paper presents a new Hybrid Fuzzy (HF) PID-type controller based on Genetic Algorithms (GAs) for the solution of the Automatic Generation Control (AGC) problem in a deregulated electricity environment. For a fuzzy rule-based control system to perform well, the fuzzy sets must be carefully designed. A major problem plaguing the effective use of this method is the difficulty of accurately constructing the membership functions, because it is a computationally expensive combinatorial optimization problem. On the other hand, GAs are a technique that emulates biological evolutionary theories to solve complex optimization problems by using directed random searches to derive a set of optimal solutions. For this reason, the membership functions are tuned automatically using a modified GA based on the hill-climbing method. The motivation for using the modified GA is to reduce the fuzzy system design effort and to take large parametric uncertainties into account. The proposed method also guarantees the global optimum and greatly improves the convergence speed of the algorithm. This newly developed control strategy combines the advantages of GAs and fuzzy control techniques and leads to a flexible controller with a simple structure that is easy to implement. The proposed GA-based HF (GAHF) controller is tested on a three-area deregulated power system under different operating conditions and contract variations. The results of the proposed GAHF controller are compared with those of a Multi-Stage Fuzzy (MSF) controller, a robust mixed H2/H∞ controller, and classical PID controllers through several performance indices to illustrate its robust performance over a wide range of system parameters and load changes.
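
The idea of tuning membership-function parameters with a GA can be sketched as follows; the quadratic toy cost function and parameter bounds are illustrative stand-ins for the AGC performance index and the fuzzy controller, not the paper's model.

```python
import random

def cost(params):
    # Stand-in for the AGC performance index (e.g., integral of squared error);
    # here just a smooth bowl with its minimum at (2.0, 5.0, 8.0).
    target = (2.0, 5.0, 8.0)
    return sum((p - t) ** 2 for p, t in zip(params, target))

def evolve(pop_size=30, generations=50, bounds=(0.0, 10.0)):
    # Each individual is a set of three membership-function break points.
    pop = [[random.uniform(*bounds) for _ in range(3)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=cost)
        elite = pop[: pop_size // 2]          # keep the better half
        children = []
        while len(elite) + len(children) < pop_size:
            a, b = random.sample(elite, 2)
            child = [(x + y) / 2 + random.gauss(0, 0.1) for x, y in zip(a, b)]
            children.append([min(max(c, bounds[0]), bounds[1]) for c in child])
        pop = elite + children
    return min(pop, key=cost)

print(evolve())   # membership-function break points found by the GA
```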

Keywords: AGC, Hybrid Fuzzy Controller, Deregulated Power System, Power System Control, GAs.

284 LCA and Multi-Criteria Analysis of Fly Ash Concrete Pavements

Authors: M. Ondova, A. Estokova

Abstract:

Rapid industrialization results in the increased use of natural resources and brings serious ecological and environmental imbalance due to the dumping of industrial wastes. Principles of sustainable construction have to be accepted with regard to the consumption of natural resources and the production of harmful emissions. Cement is a raw material of great importance in the building industry, and large amounts of it are used today in the construction of concrete pavements. Considering raw material costs and CO2 emissions, replacing cement in concrete mixtures with more sustainable materials is necessary. To reduce this environmental impact, people all over the world are looking for solutions. Over the last ten years, the image of fly ash has completely changed from a polluting waste into a resource material, and it can solve major problems associated with cement use. Fly ash concretes are proposed as a potential approach for achieving substantial reductions in cement content. Fly ash is known to improve the workability of concrete, extend the life cycle of concrete roads, and reduce energy use and greenhouse gas emissions, as well as the amount of coal combustion products that must be disposed of in landfills.

Life cycle assessment also showed that a concrete pavement with fly ash as a partial cement replacement is considerably more environmentally friendly than standard concrete roads. In addition, fly ash is a cheap raw material, and cost savings are guaranteed. The strength properties and the resistance to frost and de-icing salts, which are important characteristics in the construction of concrete pavements, also reached the required standards. In terms of human health, a concrete cover with fly ash cannot be considered more dangerous than a cover without fly ash. The final multi-criteria analysis also indicated that concrete with fly ash is clearly a proper solution.

Keywords: Life cycle assessment, fly ash, waste, concrete pavements.

283 Foot Anthropometry of Primary School Children in the South of Thailand

Authors: S. Rawangwong, J. Chatthong, W. Boonchouytan

Abstract:

The objective of this research was to study the foot anthropometry of children aged 7-12 years in the South of Thailand. Thirty-three dimensions were measured on 305 male and 295 female subjects in three age ranges (7-12 years old). The instrumentation consisted of four types of instruments: an anthropometer, a digital vernier caliper, a digital height gauge, and a measuring tape. The mean values (and standard deviations) of age, height, and weight were 9.52 (±1.70) years, 137.80 (±11.55) cm, and 37.57 (±11.65) kg for the male subjects, and 9.53 (±1.70) years, 137.88 (±11.55) cm, and 34.90 (±11.57) kg for the female subjects, respectively. The comparison of the 33 measured anthropometric dimensions between male and female subjects showed significant sex differences in size in almost all dimensions (p<0.05). A comparison of the sizes and proportions of 11-12 year old male students in the South of Thailand with those of Thai boys aged 11-12 years in the industrial standard (Ministry of Industry, Phase 4, A.D. 2000-2001) concluded that male students in the South of Thailand differ significantly in size and proportion from that standard (p<0.05). All of the feet studied were classified into 4 categories according to the ratios of diagonal foot breadth to maximum foot length and of heel breadth to foot breadth: short but thick, small but long, small, and large. The numbers of male feet classified in these categories were 86, 64, 40, and 115 persons, or 28.20, 20.98, 13.11, and 37.70%, respectively. For the female feet, the corresponding values were 46, 59, 81, and 109 persons, or 15.59, 20.00, 27.46, and 36.95%, respectively.

Keywords: Ergonomics, foot anthropometry, male and female, primary school children.

282 Optimization Modeling of the Hybrid Antenna Array for the DoA Estimation

Authors: Somayeh Komeylian

Abstract:

Direction of arrival (DoA) estimation is a crucial aspect of radar technologies for detecting and separating several signal sources. In this scenario, modeling the antenna array output involves numerous parameters, including noise samples, signal waveform, signal directions, the number of signals, and the signal-to-noise ratio (SNR), and the DoA estimation methods therefore rely heavily on generalization capability built from a large number of training data sets. Hence, we compare two optimization models for DoA estimation: (1) an implementation of the decision directed acyclic graph (DDAG) for the multiclass least-squares support vector machine (LS-SVM), and (2) an optimization method based on a deep neural network (DNN) with radial basis functions (RBF). We rigorously verified that the LS-SVM DDAG algorithm is capable of accurately classifying DoAs for the three classes. However, the accuracy and robustness of DoA estimation remain highly sensitive to technological imperfections of the antenna arrays, such as non-ideal array design and manufacture, array implementation, mutual coupling effects, and background radiation, so the method may fail to achieve high precision in DoA estimation. This work therefore makes a further contribution by developing the DNN-RBF model for DoA estimation in order to overcome the limitations of non-parametric and data-driven methods with respect to array imperfections and generalization. The numerical results of the DNN-RBF model confirm better DoA estimation performance compared with the LS-SVM algorithm. Finally, we evaluate the performance of the two optimization methods for DoA estimation using the mean squared error (MSE).
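
The MSE criterion used to compare the two estimators is the usual one,

\[
\mathrm{MSE} = \frac{1}{N} \sum_{i=1}^{N} \left( \hat{\theta}_{i} - \theta_{i} \right)^{2},
\]

where \( \theta_{i} \) are the true DoAs and \( \hat{\theta}_{i} \) the estimates over \( N \) trials (a general definition, not a paper-specific variant).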

Keywords: DoA estimation, adaptive antenna array, Deep Neural Network, LS-SVM optimization model, radial basis function, MSE.

281 Freshwater Lens Observation: Case Study of Laura Island, Majuro Atoll, Republic of the Marshall Islands

Authors: Kazuhisa Koda, Tsutomu Kobayashi, Rebecca Lorennji, Alington Robert, Halston DeBrum, Julious Lucky, Paul Paul

Abstract:

Atolls are low-lying small islands with highly permeable ground that does not allow rivers and lakes to develop. As the water resources on these atolls rely essentially on precipitation, groundwater becomes a very important water resource during droughts. Freshwater lenses develop as groundwater on relatively large atoll islands and play a key role in a stable water supply. Atoll islands in the Pacific Ocean sometimes suffer from drought due to El Nino, and the effects of global warming are particularly noticeable on atoll islands. The Republic of the Marshall Islands in Oceania is burdened with the problems common to atoll islands. About half of its population lives in the capital, Majuro, and securing water resources for these people is a crucial issue. There is a freshwater lens on the largest island, Laura Island, which serves as a water source for the downtown area. A serious drought that occurred in 1998 resulted in excessive water intake from the freshwater lens on Laura Island, causing up-coning. Up-coning mixes saltwater into the groundwater pumped from water-intake wells. Because up-coning makes the freshwater lens unusable, there was a need to investigate the freshwater lens on Laura Island. In this study, we observed the electrical conductivity of the groundwater at different depths in existing monitoring wells to determine the total storage volume of the freshwater lens on Laura Island from 2010 to 2013. Our results indicate that most of the groundwater that seeped into the freshwater lens had flowed out into the sea.

Keywords: Atoll islands, drought, El-Nino, freshwater lens, groundwater observation.

280 A Novel Neighborhood Defined Feature Selection on Phase Congruency Images for Recognition of Faces with Extreme Variations

Authors: Satyanadh Gundimada, Vijayan K Asari

Abstract:

A novel feature selection strategy to improve recognition accuracy on faces affected by non-uniform illumination, partial occlusions, and varying expressions is proposed in this paper. This technique is especially applicable in scenarios where the possibility of obtaining a reliable intra-class probability distribution is minimal due to a small number of training samples. Phase congruency features in an image are defined as the points where the Fourier components of that image are maximally in phase. These features are invariant to the brightness and contrast of the image under consideration, a property that makes lighting-invariant face recognition achievable. Phase congruency maps of the training samples are generated, and a novel modular feature selection strategy is implemented. Smaller sub-regions from a predefined neighborhood within the phase congruency images of the training samples are merged to obtain a large set of features. These features are arranged in order of increasing distance between the sub-regions involved in the merging. The assumption behind the proposed region merging and arrangement strategy is that local dependencies among the pixels are more important than global dependencies. The obtained feature sets are then arranged in decreasing order of discriminating capability using a criterion function, namely the ratio of the between-class variance to the within-class variance of the sample set, in the PCA domain. The results indicate a substantial improvement in classification performance compared to baseline algorithms.
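
The ranking criterion, i.e. the ratio of between-class to within-class variance, can be computed as in the following sketch; this is a generic Fisher-style criterion for one-dimensional features, and the PCA projection step of the paper is omitted for brevity.

```python
import numpy as np

def class_separability(features, labels):
    """Ratio of between-class variance to within-class variance for 1-D features."""
    features, labels = np.asarray(features, float), np.asarray(labels)
    overall_mean = features.mean()
    between, within = 0.0, 0.0
    for c in np.unique(labels):
        x = features[labels == c]
        between += x.size * (x.mean() - overall_mean) ** 2
        within += ((x - x.mean()) ** 2).sum()
    return between / within

score = class_separability([1.0, 1.2, 0.9, 3.1, 3.0, 2.8], [0, 0, 0, 1, 1, 1])
print(score)   # larger values indicate more discriminative features
```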

Keywords: Discriminant analysis, intra-class probability distribution, principal component analysis, phase congruency.

279 Opponent Color and Curvelet Transform Based Image Retrieval System Using Genetic Algorithm

Authors: Yesubai Rubavathi Charles, Ravi Ramraj

Abstract:

In order to retrieve images efficiently from a large database, a unique method integrating color and texture features using genetic programming has been proposed. The opponent color histogram, which provides invariance to shadow, shading, and light intensity, is employed in the proposed framework for extracting color features. For texture feature extraction, the fast discrete curvelet transform, which captures more orientation information at different scales, is incorporated to represent curve-like edges. A current concern in image retrieval is reducing the semantic gap between the user's preference and low-level features. To address this concern, a genetic algorithm combined with relevance feedback is embedded to reduce the semantic gap and retrieve images matching the user's preference. Extensive comparative experiments have been conducted to evaluate the proposed framework for content-based image retrieval on two databases, COIL-100 and Corel-1000. The experimental results clearly show that the proposed system surpasses other existing systems in terms of precision and recall. The proposed work achieves its highest performance with an average precision of 88.2% on COIL-100 and 76.3% on Corel, and an average recall of 69.9% on COIL and 76.3% on Corel. Thus, the experimental results confirm that the proposed content-based image retrieval system architecture attains a better solution for image retrieval.
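
The opponent color transform behind the histogram is commonly defined from RGB as in the sketch below (a standard definition; the histogram binning and curvelet stages of the paper are not reproduced here).

```python
import numpy as np

def opponent_channels(rgb):
    """Map an H x W x 3 RGB array to the three opponent color channels."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    o1 = (r - g) / np.sqrt(2.0)             # red-green opponency, intensity-shift invariant
    o2 = (r + g - 2.0 * b) / np.sqrt(6.0)   # yellow-blue opponency
    o3 = (r + g + b) / np.sqrt(3.0)         # intensity channel
    return o1, o2, o3

image = np.random.rand(4, 4, 3)             # placeholder image
hist, _ = np.histogram(opponent_channels(image)[0], bins=16)
print(hist)
```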

Keywords: Content based image retrieval, Curvelet transform, Genetic algorithm, Opponent color histogram, Relevance feedback.

278 A Probabilistic Reinforcement-Based Approach to Conceptualization

Authors: Hadi Firouzi, Majid Nili Ahmadabadi, Babak N. Araabi

Abstract:

Conceptualization strengthens intelligent systems in generalization skill, effective knowledge representation, real-time inference, and the management of uncertain and indefinite situations, in addition to facilitating knowledge communication for learning agents situated in the real world. Concept learning introduces a form of abstraction by which the continuous state space is represented by entities called concepts, which are connected to the action space and thus reflect the complexity of that space. Among computational concept learning approaches, action-based conceptualization is favored because of its simplicity and its mirror neuron foundations in neuroscience. In this paper, a new biologically inspired concept learning approach based on a probabilistic framework is proposed. This approach exploits and extends the mirror neuron's role in conceptualization for a reinforcement learning agent in non-deterministic environments. In the proposed method, instead of building a huge numerical knowledge base, the concepts are learnt gradually from rewards through interaction with the environment. Moreover, the probabilistic formation of the concepts is employed to deal with the uncertain and dynamic nature of real problems, in addition to providing generalization ability. These characteristics as a whole distinguish the proposed learning algorithm from both pure classification algorithms and typical reinforcement learning. Simulation results show the advantages of the proposed framework in terms of convergence speed as well as generalization and asymptotic behavior, because both successful and failed attempts are utilized through the received rewards. Experimental results, on the other hand, show the applicability and effectiveness of the proposed method in continuous and noisy environments for a real robotic task such as maze navigation, as well as the benefits of implementing an incremental learning scenario in artificial agents.

Keywords: Concept learning, probabilistic decision making, reinforcement learning.

277 Assessing the Effect of the Position of the Cavities on the Inner Plate of the Steel Shear Wall under Time History Dynamic Analysis

Authors: Masoud Mahdavi, Mojtaba Farzaneh Moghadam

Abstract:

The seismic forces caused by waves created in the depths of the earth during an earthquake strike the structure and cause the building to vibrate. Large seismic forces cause low-strength sections of the structure to suffer extensive damage. The use of modern steel shear walls in steel structures increases the strength of the building and its main members (columns) by reducing and dissipating the seismic forces during earthquakes. In the present study, an attempt was made to evaluate a type of steel shear wall that has regular holes in the inner plate by building a finite element model in Abaqus software. The steel plate shear wall, measuring 6000 × 3000 mm (one floor) with a thickness of 3 mm, was modeled with four different holes of the same cross-sectional area. The shear wall was subjected to 5-second time history records of three accelerograms: El Centro, Imperial Valley, and Kobe. The results showed that increasing the distance between the geometric center of the hole and the geometric center of the inner plate in the steel shear wall (increasing the RCS index) caused the total maximum acceleration to be transferred from the perimeter of the hole to the horizontal and vertical beams. The results also show that there is no direct relationship between the RCS index and the total acceleration in the steel shear wall, and that the RCS index is independent of the peak ground acceleration of the earthquake.

Keywords: Hollow steel plate shear wall, time history analysis, finite element method, Abaqus software.

276 Very Large Scale Integration Architecture of Finite Impulse Response Filter Implementation Using Retiming Technique

Authors: S. Jalaja, A. M. Vijaya Prakash

Abstract:

Recursive combination of an algorithm based on Karatsuba multiplication is exploited to design a generalized transpose and parallel Finite Impulse Response (FIR) filter. Mid-range Karatsuba multiplication and a carry-save adder based on Karatsuba multiplication reduce the time complexity of higher-order multiplication implemented up to n bits. As a result, we design a modified N-tap transpose and parallel symmetric FIR filter structure using the Karatsuba algorithm. The mathematical formulation of the FFA filter is derived. The proposed architecture has a significantly lower area-delay product (ADP) than the existing block implementation. By adopting the retiming technique, the hardware cost is reduced further. The filter architecture is designed using a 90 nm technology library and is implemented with the Cadence EDA tool. The synthesized results show better performance for different word lengths and block sizes. The design achieves reduced switching activity and low power consumption, evaluated with and without retiming for different circuit combinations. Compared to the earlier design structure, the proposed structure achieves a power reduction of more than half, both with and without the retiming technique. As a proof of concept, for a block size of 16 and a filter length of 64, the CKA method achieves 51% and 70% less power by applying the retiming technique, and the CSA method achieves 57% and 77% less power by applying the retiming technique, compared to the previously proposed design.
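
The Karatsuba recursion on which the multipliers are based replaces one n-bit product with three roughly half-size products; a minimal software sketch of the recursion (not the VLSI architecture itself) is:

```python
def karatsuba(x, y):
    """Multiply two non-negative integers with the Karatsuba recursion."""
    if x < 16 or y < 16:                 # small operands: fall back to direct multiply
        return x * y
    n = max(x.bit_length(), y.bit_length())
    half = n // 2
    xh, xl = x >> half, x & ((1 << half) - 1)
    yh, yl = y >> half, y & ((1 << half) - 1)
    a = karatsuba(xh, yh)                        # product of high parts
    b = karatsuba(xl, yl)                        # product of low parts
    c = karatsuba(xh + xl, yh + yl) - a - b      # cross term from one extra multiply
    return (a << (2 * half)) + (c << half) + b

assert karatsuba(1234567, 7654321) == 1234567 * 7654321
```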

Keywords: Carry save adder Karatsuba multiplication, mid-range Karatsuba multiplication, modified FFA, transposed filter, retiming.

275 Three-Phase High Frequency AC Conversion Circuit with Dual Mode PWM/PDM Control Strategy for High Power IH Applications

Authors: Nabil A. Ahmed

Abstract:

This paper presents a novel three-phase utility-frequency to high-frequency soft-switching power conversion circuit with dual-mode pulse width modulation and pulse density modulation for high-power induction heating applications, such as the melting of steel and non-ferrous metals, annealing of metals, surface hardening of steel and cast iron workpieces, and hot water producers, steamers, and superheated steamers. This high-frequency power conversion circuit can operate from three-phase systems to produce high current for high-power induction heating applications under the principle of zero-voltage switching (ZVS), and it can regulate its AC output power from the rated value down to a low power level. A dual-mode modulation control scheme, based on high-frequency PWM synchronized with the positive and negative half-cycles of the utility frequency and on utility-frequency pulse density modulation, is developed to extend the soft-switching operating range of the proposed high-frequency conversion circuit for wide AC output power regulation. A dual-pack heat exchanger assembly is designed for use in consumer and industrial fluid pipeline systems and is shown to be suitable for hot water, steam, and superheated steam producers. Experimental and simulation results are given to verify the operating principles of the proposed AC conversion circuit and to evaluate its power regulation and conversion efficiency. The paper also presents a mutual coupling model of the induction heating load instead of the equivalent transformer circuit model.

Keywords: Induction heating, three-phase, conversion circuit, pulse width modulation, pulse density modulation, high frequency, soft switching.

274 Human Factors Considerations in New Generation Fighter Planes to Enhance Combat Effectiveness

Authors: Chitra Rajagopal, Indra Deo Kumar, Ruchi Joshi, Binoy Bhargavan

Abstract:

The role of fighter planes in modern network-centric military warfare scenarios has changed significantly in the recent past. New-generation fighter planes have the multirole capability of engaging both air and ground targets with high precision. Multirole aircraft undertake missions such as air-to-air combat, air defense, air-to-surface roles (including air interdiction, close air support, maritime attack, and suppression and destruction of enemy air defense), reconnaissance, electronic warfare missions, etc. Designers have primarily focused on developing technologies to enhance the combat performance of fighter planes, and very little attention has been given to the human factors aspects of these technologies. Unique physical and psychological challenges are imposed on pilots to meet operational requirements during these missions. Newly evolved technologies have enhanced aircraft performance in terms of speed, firepower, stealth, electronic warfare, situational awareness, and vulnerability reduction capabilities. This paper highlights the impact of emerging technologies on human factors for various military operations and missions. Technologies such as cooperative knowledge-based systems to aid pilots' decision making in military conflict scenarios, as well as simulation technologies to enhance human performance, are also studied as part of this research work. Current and emerging pilot protection technologies and systems, which form part of the integrated life support systems in new-generation fighter planes, are discussed. The application of system safety analysis to quantify human reliability in military operations is also studied.

Keywords: Combat effectiveness, emerging technologies, human factors, systems safety analysis.

273 Liquid Chromatography Microfluidics for Detection and Quantification of Urine Albumin Using Linear Regression Method

Authors: Patricia B. Cruz, Catrina Jean G. Valenzuela, Analyn N. Yumang

Abstract:

Nearly a hundred per million of the Filipino population are diagnosed with Chronic Kidney Disease (CKD). The early stage of CKD has no symptoms and can only be discovered once the patient undergoes urinalysis. Over the years, different methods have been developed and used for the quantification of urinary albumin, such as immunochemical assays, most of which require large machinery with high maintenance and resource costs, and the dipstick test, whose reliability in detecting the early stages of microalbuminuria is still debated. This research study uses the liquid chromatography concept in a microfluidic instrument with a biosensor, as the means of separation and detection respectively, and linear regression to quantify human urinary albumin. The researchers' main objective was to create a miniature system that detects and quantifies patients' urinary albumin while reducing the volume used per five test samples. For this study, 30 urine samples of unknown albumin concentration were tested using the VITROS Analyzer and the microfluidic system for comparison. Based on the data from both methods, the actual-versus-predicted regression showed a positive linear relationship with an R2 of 0.9995 and a linear equation of y = 1.09x + 0.07, indicating that the predicted and actual values are approximately equal. Furthermore, the microfluidic instrument uses 75% less total volume (sample and reagents combined) per five test samples compared to the VITROS Analyzer.
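
The reported calibration (y = 1.09x + 0.07, R2 = 0.9995) can be reproduced in form with an ordinary least-squares fit, as in this sketch; the paired readings below are placeholders, not the study's 30 urine samples.

```python
import numpy as np

# Placeholder paired measurements: (microfluidic reading, VITROS reference), mg/L.
device = np.array([9.2, 23.1, 36.8, 73.0, 137.5])
reference = np.array([10.0, 25.0, 40.0, 80.0, 150.0])

# Fit the actual-versus-predicted line y = m*x + c and compute R^2.
m, c = np.polyfit(device, reference, 1)
pred = m * device + c
r2 = 1.0 - ((reference - pred) ** 2).sum() / ((reference - reference.mean()) ** 2).sum()
print(f"y = {m:.2f}x + {c:.2f}, R^2 = {r2:.4f}")
```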

Keywords: Chronic kidney disease, microfluidics, linear regression, VITROS analyzer, urinary albumin.

272 A Study on the Effectiveness of Alternative Commercial Ventilation Inlets That Improve Energy Efficiency of Building Ventilation Systems

Authors: Brian Considine, Aonghus McNabola, John Gallagher, Prashant Kumar

Abstract:

Passive air pollution control devices known as aspiration efficiency reducers (AERs) have been developed using aspiration efficiency (AE) concepts. Their purpose is to reduce the concentration of particulate matter (PM) drawn into a building air handling unit (AHU) through alterations in the inlet design, thereby reducing energy consumption. In this paper, an examination is conducted into the effect of installing a deflector system around an AER-AHU inlet for both forward-facing and rear-facing orientations relative to the wind. The study found that these deflectors are an effective passive control method for reducing AE at various ambient wind speeds over a range of microparticles of varying diameter. The deflector system was found to induce a large wake zone at low ambient wind speeds for a rear-facing AER-AHU, resulting in significantly lower AE in comparison to the case without deflectors. As the wind speed increased, both cases contained a wake zone, but the concentration gradients were much lower with the deflectors. For the forward-facing models, the deflector system was preferable at low ambient wind speed and higher Stokes numbers, but there was a negligible difference as the Stokes number decreased. Similarly, there was no significant difference at higher wind speeds across the Stokes number range tested. The results demonstrate that a deflector system is a viable passive control method for the reduction of ventilation energy consumption.
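
The Stokes number governing the trends reported above is typically defined as

\[
\mathrm{Stk} = \frac{\rho_{p}\, d_{p}^{2}\, U}{18\, \mu\, D},
\]

where \( \rho_{p} \) and \( d_{p} \) are the particle density and diameter, \( U \) the ambient wind speed, \( \mu \) the dynamic viscosity of air, and \( D \) a characteristic inlet dimension; this is the general definition, and the choice of characteristic length for the AER-AHU inlet is an assumption of this note.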

Keywords: Aspiration efficiency, energy, particulate matter, ventilation.

271 Elliptical Features Extraction Using Eigen Values of Covariance Matrices, Hough Transform and Raster Scan Algorithms

Authors: J. Prakash, K. Rajesh

Abstract:

In this paper, we introduce a new method for elliptical object identification. The proposed method adopts a hybrid scheme consisting of the eigenvalues of covariance matrices, the circular Hough transform, and Bresenham's raster scan algorithm. In this approach we use the fact that the large and small eigenvalues of the covariance matrix are associated with the major and minor axial lengths of the ellipse. The centre location of the ellipse can be identified using the circular Hough transform (CHT), and a sparse matrix technique is used to perform the CHT. Since sparse matrices squeeze out zero elements and contain only a small number of nonzero elements, they provide advantages in matrix storage space and computational time. A neighborhood suppression scheme is used to find the valid Hough peaks. The accurate positions of the circumference pixels are identified using the raster scan algorithm, which exploits the geometrical symmetry property. This method does not require the evaluation of tangents or of the curvature of edge contours, which are generally very sensitive to noisy working conditions. The proposed method has the advantages of small storage, high speed, and accuracy in identifying the feature. The new method has been tested on both synthetic and real images, and several experiments have been conducted on various images with considerable background noise to reveal its efficacy and robustness. Experimental results on the accuracy of the proposed method, along with comparisons with the Hough transform, its variants, and other tangent-based methods, are reported.
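
The link between the covariance eigenvalues and the axial lengths can be illustrated with the short Python sketch below; it uses synthetic edge points, and the Hough and raster-scan stages of the paper are not reproduced.

```python
import numpy as np

# Synthetic points on an ellipse with semi-axes 5 (major) and 2 (minor).
t = np.linspace(0, 2 * np.pi, 400, endpoint=False)
pts = np.column_stack((5.0 * np.cos(t), 2.0 * np.sin(t)))

# Eigenvalues of the covariance matrix of the point cloud: the larger one
# corresponds to the major axis, the smaller one to the minor axis.
evals, evecs = np.linalg.eigh(np.cov(pts, rowvar=False))
semi_axes = np.sqrt(2.0 * evals)      # variance of a*cos(t) over a full period is a^2/2
print(sorted(semi_axes))              # approximately [2.0, 5.0]
```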

Keywords: Circular Hough transform, covariance matrix, Eigen values, ellipse detection, raster scan algorithm.

270 A Challenge to Acquire Serious Victims’ Locations during Acute Period of Giant Disasters

Authors: Keiko Shimazu, Yasuhiro Maida, Tetsuya Sugata, Daisuke Tamakoshi, Kenji Makabe, Haruki Suzuki

Abstract:

In this paper, we report how to acquire the locations of seriously injured victims during the acute stage of large-scale disasters, using an Emergency Information Network System designed by us. The background of our concept is the Great East Japan Earthquake that occurred on March 11th, 2011. Through many experiences of national crises caused by earthquakes and tsunamis, Japan has established advanced communication systems and advanced disaster medical response systems. However, Japan was devastated by huge tsunamis that swept a vast area of Tohoku, causing a complete breakdown of all infrastructure, including telecommunications. We therefore recognized the need for interdisciplinary collaboration among experts in disaster medicine, regional administrative sociology, satellite communication technology, and systems engineering. Communication of emergency information was limited, causing a serious delay in the initial rescue and medical operations. For emergency rescue and medical operations, the most important thing is to identify the number of casualties, their locations and status, and to dispatch doctors and rescue workers from multiple organizations. In the case of the Tohoku earthquake, no dispatching mechanism or decision support system existed to allocate the appropriate number of doctors and locate disaster victims. Even though doctors and rescue workers from multiple government organizations have their own dedicated communication systems, these systems are not interoperable.

Keywords: Crisis management, disaster mitigation, messing, MGRS, Satellite communication system.

269 Design and Analysis of a Piezoelectric Linear Motor Based on Rigid Clamping

Authors: Chao Yi, Cunyue Lu, Lingwei Quan

Abstract:

Piezoelectric linear motors have the characteristics of great electromagnetic compatibility, high positioning accuracy, compact structure, and no deceleration mechanism, which make them promising for application in micro-miniature precision drive systems. However, most piezoelectric motors employ flexible clamping, which has insufficient rigidity and is difficult to use for rapid positioning. Another problem is that this clamping method seriously affects the vibration efficiency of the vibrating unit. In order to solve these problems, this paper proposes a piezoelectric stack linear motor based on double-end rigid clamping. First, a piezoelectric linear motor with a length of only 35.5 mm is designed. This motor is mainly composed of a motor stator, a driving foot, a ceramic friction strip, a linear guide, a pre-tightening mechanism, and a base. This structure is much simpler and smaller than most similar motors, and it is easy to assemble and to control precisely. In addition, the properties of the piezoelectric stack are reviewed and, in order to obtain an elliptical motion trajectory of the driving head, a driving scheme based on a longitudinal-shear composite stack is proposed. Finally, impedance analysis and speed performance testing were performed on the piezoelectric linear motor prototype. The motor can reach speeds of up to 25.5 mm/s under excitation by a signal voltage of 120 V at a frequency of 390 Hz. The results show that the proposed piezoelectric stack linear motor achieves good performance. It can run smoothly over a large speed range, which makes it suitable for precision control in medical imaging, aerospace, precision machinery, and many other fields.

Keywords: Elliptical trajectory, linear motor, piezoelectric stack, rigid clamping.

268 Early Depression Detection for Young Adults with a Psychiatric and AI Interdisciplinary Multimodal Framework

Authors: Raymond Xu, Ashley Hua, Andrew Wang, Yuru Lin

Abstract:

During COVID-19, the depression rate has increased dramatically, and young adults are most vulnerable to the mental health effects of the pandemic. Lower-income families have a higher rate of depression diagnoses than the general population, but less access to clinics. This research aims to achieve early depression detection at low cost, at large scale, and with high accuracy through an interdisciplinary approach incorporating clinical practices defined by the American Psychiatric Association (APA) as well as a multimodal AI framework. The proposed approach detects the nine depression symptoms with natural language processing sentiment analysis and a symptom-based lexicon uniquely designed for young adults. The experiments were conducted on multimedia survey results from adolescents and young adults and on unbiased Twitter communications. The results were further aggregated with the facial emotional cues analyzed by a convolutional neural network on the multimedia survey videos. Five experiments, each conducted on 10k data entries, reached consistent results with an average accuracy of 88.31%, higher than existing natural language analysis models. This approach can reach 300+ million daily active Twitter users and is highly accessible to low-income populations, promoting early depression detection, raising awareness among adolescents and young adults, and revealing complementary cues to assist clinical depression diagnosis.
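
A toy version of the symptom-lexicon matching step might look like the following; the lexicon entries and cue words are invented placeholders and do not reflect the clinically curated lexicon described in the abstract.

```python
# Hypothetical mini-lexicon mapping a few depression symptoms to cue phrases.
lexicon = {
    "sleep disturbance": {"insomnia", "sleepless", "awake all night"},
    "loss of interest":  {"no interest", "don't enjoy", "nothing is fun"},
    "fatigue":           {"exhausted", "no energy", "drained"},
}

def symptom_hits(text):
    """Return the symptom categories that have at least one cue phrase in the text."""
    text = text.lower()
    return {symptom for symptom, cues in lexicon.items()
            if any(cue in text for cue in cues)}

post = "I'm exhausted and sleepless, nothing is fun anymore"
print(symptom_hits(post))   # {'fatigue', 'sleep disturbance', 'loss of interest'}
```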

Keywords: Artificial intelligence, depression detection, facial emotion recognition, natural language processing, mental disorder.

267 Feasibility Study of MongoDB and Radio Frequency Identification Technology in Asset Tracking System

Authors: Mohd Noah A. Rahman, Afzaal H. Seyal, Sharul T. Tajuddin, Hartiny Md Azmi

Abstract:

In real-world settings, higher academic institutions, small, medium, and large companies, and both the public and private sectors experience inventory or asset shrinkage due to theft, loss, or inventory tracking errors. This happens because of absent or poor security systems and measures in these organizations. Hence, integrating Radio Frequency Identification (RFID) technology into any manual or existing web-based system or web application can deter and eventually solve certain major issues, and provide better data retrieval and data access. Moreover, such a manual or existing system can be enhanced into a mobile-based system or application, and the availability of internet connections can support better services. The involvement of these technologies offers various benefits to individuals and organizations in terms of accessibility, availability, mobility, efficiency, effectiveness, real-time information, and security. This paper looks deeper into the integration of mobile devices with RFID technologies for the purpose of asset tracking and control. This is followed by the development and utilization of MongoDB as the main database to store data and its association with RFID technology, and finally by the development of a web-based system that can be viewed in a mobile format with the aid of Hypertext Preprocessor (PHP), MongoDB, Hyper-Text Markup Language 5 (HTML5), Android, JavaScript, and AJAX.
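
A minimal sketch of how an RFID read event could be stored and queried in MongoDB is shown below using pymongo; the database name, collection name, and field layout are assumptions made for illustration, not the system's actual schema.

```python
from datetime import datetime, timezone
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")        # assumed local instance
assets = client["asset_tracking"]["rfid_events"]          # hypothetical db/collection

# One read event per document: schema-less storage suits this NoSQL use case.
assets.insert_one({
    "tag_id": "E2000017221101441890",                      # example EPC-style tag id
    "asset_name": "Projector - Lab 3",
    "reader_location": "Block A, Level 2",
    "read_time": datetime.now(timezone.utc),
})

# Latest sighting of a given tag.
last_seen = assets.find_one({"tag_id": "E2000017221101441890"},
                            sort=[("read_time", -1)])
print(last_seen["reader_location"], last_seen["read_time"])
```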

Keywords: RFID, asset tracking system, MongoDB, NoSQL.

266 Evolutionary Origin of the αC Helix in Integrins

Authors: B. Chouhan, A. Denesyuk, J. Heino, M. S. Johnson, K. Denessiouk

Abstract:

Integrins are a large family of multidomain α/β cell signaling receptors. Some integrins contain an additional inserted I domain, whose earliest expression appears to be with the chordates, since it is observed in the urochordates Ciona intestinalis (vase tunicate) and Halocynthia roretzi (sea pineapple), but not in integrins of earlier diverging species. The domain's presence is viewed as a hallmark of integrins of higher metazoans; however, in vertebrates there are clearly three structurally different classes: integrins without I domains, and two groups of integrins with I domains that are separable by the presence or absence of an additional αC helix. For example, the αI domains in collagen-binding integrins from Osteichthyes (bony fish) and all higher vertebrates contain the specific αC helix, whereas the αI domains in non-collagen-binding integrins from vertebrates and the αI domains from earlier diverging urochordate integrins, i.e. tunicates, do not. Unfortunately, within the early chordates there is an evolutionary gap, due to extinctions, between the tunicates and the cartilaginous fish. This, coupled with a knowledge gap caused by the lack of complete genomic data from surviving species, means that the origin of collagen-binding αC-containing αI domains remains unknown. Here, we analyzed two available genomes, from Callorhinchus milii (ghost shark/elephant shark; Chondrichthyes – cartilaginous fish) and Petromyzon marinus (sea lamprey; Agnathostomata), and several available Expressed Sequence Tags from two Chondrichthyes species, Raja erinacea (little skate) and Squalus acanthias (dogfish shark), and from Eptatretus burgeri (inshore hagfish; Agnathostomata), which evolutionarily reside between the urochordates and the Osteichthyes. In P. marinus, we observed several fragments coding for the αC-containing αI domain, allowing us to shed more light on the evolution of the collagen-binding integrins.

Keywords: Integrin αI domain, integrin evolution, collagen binding, structure, αC helix.

265 The Potential Use of Nanofilters to Supply Potable Water in Persian Gulf and Oman Sea Watershed Basin

Authors: Sara Zamani, Mojtaba Fazeli, Abdollah Rashidi Mehrabadi

Abstract:

In a world worried about water resources, with the shadow of drought and famine looming all around, the quality of water is as important as its quantity. The source of all these concerns is the constant reduction in per capita quality water for different uses. With an average annual precipitation of 250 mm, compared to the world average of 800 mm, Iran is considered a water-scarce country, and the disparity in rainfall distribution, the limitations of renewable resources, and the concentration of population on the margins of deserts and water-scarce areas have intensified the problem. The shortage of per capita renewable freshwater and its poor quality in large areas of the country, which have saline, brackish, or hard water resources, together with the profusion of natural and artificial pollutants, have caused the deterioration of water quality. Among the methods for treating and using these waters, one can refer to the application of membrane technologies, which have come into focus in recent years due to their great advantages. This process is quite efficient in eliminating multivalent ions, and due to the possibility of production at different capacities, its applicability as a point-of-use treatment process, and its lower energy requirement compared to reverse osmosis processes, it can revolutionize the water and wastewater sector in years to come. This article studies the different capacities of the water resources in the Persian Gulf and Oman Sea watershed basins and assesses the possibility of using the nanofiltration process to treat brackish and non-conventional waters in these basins.

Keywords: Membrane processes, saline waters, brackish waters, hard waters, zoning water quality in the Persian Gulf and the Oman Sea Watershed area, nanofiltration.
