Search results for: innovation ecosystem components
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 1979

389 Highly Scalable, Reversible and Embedded Image Compression System

Authors: Federico Pérez González, Iñaki Goirizelaia Ordorika, Pedro Iriondo Bengoa

Abstract:

A new method for low-complexity image coding is presented that permits different settings and great scalability in the generation of the final bit stream. This coding presents a continuous-tone still image compression system that combines lossy and lossless compression making use of finite-arithmetic reversible transforms. Both the color-space transformation and the wavelet transformation are reversible. The transformed coefficients are coded by means of a coding system based on a subdivision into smaller components (CFDS), similar to bit-importance codification. The subcomponents so obtained are reordered by means of a highly configurable alignment system that, depending on the application, makes it possible to reconfigure the elements of the image and obtain different levels of importance from which the bit stream will be generated. The subcomponents of each level of importance are coded using a variable-length entropy coding system (VBLm) that permits the generation of an embedded bit stream. This bit stream itself constitutes a coded compressed still image. However, the use of a packing system on the bit stream after the VBLm allows the construction of a final, highly scalable bit stream from a basic image level and one or several enhancement levels.
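
To make the idea of finite-arithmetic reversible transforms concrete, the sketch below applies the JPEG 2000 reversible color transform and an integer Haar lifting step, both exactly invertible in integer arithmetic. It is a minimal illustration of that building block only, not the authors' CFDS/VBLm pipeline, and the image data is synthetic.

```python
import numpy as np

def rct_forward(r, g, b):
    # JPEG 2000 reversible color transform (integer arithmetic, exactly invertible)
    y = (r + 2 * g + b) // 4
    u = b - g
    v = r - g
    return y, u, v

def rct_inverse(y, u, v):
    g = y - (u + v) // 4
    b = u + g
    r = v + g
    return r, g, b

def haar_lift_forward(x):
    # Integer Haar lifting on a 1-D signal of even length:
    # detail d = odd - even, approximation s = even + floor(d / 2)
    even, odd = x[0::2], x[1::2]
    d = odd - even
    s = even + d // 2
    return s, d

def haar_lift_inverse(s, d):
    even = s - d // 2
    odd = d + even
    x = np.empty(even.size + odd.size, dtype=even.dtype)
    x[0::2], x[1::2] = even, odd
    return x

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    r, g, b = [rng.integers(0, 256, 64) for _ in range(3)]
    y, u, v = rct_forward(r, g, b)
    r2, g2, b2 = rct_inverse(y, u, v)
    assert np.array_equal(r2, r) and np.array_equal(g2, g) and np.array_equal(b2, b)
    s, d = haar_lift_forward(y)
    assert np.array_equal(haar_lift_inverse(s, d), y)
    print("color transform and wavelet lifting step are exactly reversible")
```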

Keywords: Image compression, wavelet transform, highly scalable, reversible transform, embedded, subcomponents.

388 Modeling Directional Thermal Radiance Anisotropy for Urban Canopy

Authors: Limin Zhao, Xingfa Gu, C. Tao Yu

Abstract:

One of the significant factors for improving the accuracy of Land Surface Temperature (LST) retrieval is a correct understanding of the directional anisotropy of thermal radiance. In this paper, the multiple scattering effect between heterogeneous non-isothermal surfaces is described rigorously according to the concept of the configuration factor, based on which a directional thermal radiance model is built and the directional radiant character of an urban canopy is analyzed. The model is applied to a simple urban canopy with a row structure to simulate the change of Directional Brightness Temperature (DBT). The results show that the DBT is increased by the multiple scattering effects, whereas its range of variation is smoothed. The temperature difference, spatial distribution and emissivity of the components can all change the DBT. The "hot spot" phenomenon occurs when the proportion of the high-temperature component in the field of view reaches its maximum, whereas the "cool spot" phenomenon occurs when the proportion of the low-temperature component is largest. The "spot" effect disappears only when the proportion of every component remains invariant. The model built in this paper can be used for the study of the directional effect on emissivity, LST retrieval over urban areas and the adjacency effect of thermal remote sensing pixels.
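
For reference, the configuration (view) factor that the model builds on is commonly defined between two diffuse surfaces as (a standard radiative-transfer definition, not a formula quoted from the paper):

$$F_{1\to 2} = \frac{1}{A_1}\int_{A_1}\int_{A_2}\frac{\cos\theta_1\cos\theta_2}{\pi S^2}\,dA_2\,dA_1,$$

where $S$ is the distance between the two differential areas and $\theta_1$, $\theta_2$ are the angles between the line joining them and the respective surface normals; it quantifies the fraction of radiation leaving surface 1 that reaches surface 2, which is what couples the non-isothermal canopy components in the multiple scattering term.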

Keywords: Directional thermal radiance, multiple scattering, configuration factor, urban canopy, hot spot effect

387 Influence of Optical Fluence Distribution on Photoacoustic Imaging

Authors: Mohamed K. Metwally, Sherif H. El-Gohary, Kyung Min Byun, Seung Moo Han, Soo Yeol Lee, Min Hyoung Cho, Gon Khang, Jinsung Cho, Tae-Seong Kim

Abstract:

Photoacoustic imaging (PAI) is a non-invasive and non-ionizing imaging modality that combines the absorption contrast of light with ultrasound resolution. A laser is used to deposit optical energy (i.e., optical fluence) into a target. Consequently, the target temperature rises and thermal expansion occurs, which generates a PA signal. In general, most image reconstruction algorithms for PAI assume uniform fluence within the imaged object. However, it is known that the optical fluence distribution within the object is non-uniform, which can affect the reconstruction of PA images. In this study, we have investigated the influence of the optical fluence distribution on PA back-propagation imaging using the finite element method. The uniform fluence was simulated as a triangular waveform within the object of interest. The non-uniform fluence distribution was estimated by solving light propagation within a tissue model via the Monte Carlo method. The results show that the PA signal in the non-uniform case is 23% wider than in the uniform case. The frequency spectrum of the PA signal due to the non-uniform fluence is missing some high-frequency components present in the uniform case. Consequently, the image reconstructed with the non-uniform fluence exhibits a strong smoothing effect.
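
A compact way to see why the fluence distribution matters is the textbook photoacoustic source relation (a standard expression, not the paper's specific model):

$$p_0(\mathbf{r}) = \Gamma\,\mu_a(\mathbf{r})\,\Phi(\mathbf{r}),$$

where $p_0$ is the initial pressure rise, $\Gamma$ the Grüneisen parameter, $\mu_a$ the optical absorption coefficient and $\Phi$ the optical fluence; a non-uniform $\Phi$ therefore reshapes the PA source term, and hence the reconstructed image, even when the absorption itself is uniform.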

Keywords: Finite Element Method, Fluence Distribution, Monte Carlo Method, Photoacoustic Imaging.

386 Exergetic and Sustainability Evaluation of a Building Heating System in Izmir, Turkey

Authors: Nurdan Yildirim, Arif Hepbasli

Abstract:

Heating, cooling and lighting appliances in buildings account for more than one third of the world's primary energy demand. Therefore, the main components of building heating systems play an essential role in terms of energy consumption. In this context, efficient energy and exergy utilization in HVAC-R systems is essential, especially for developing energy policies aimed at increasing efficiency. The main objective of the present study is to assess the performance of a family house with a volume of 326.7 m3 and a net floor area of 121 m2, located in the city of Izmir, Turkey, in terms of energetic, exergetic and sustainability aspects. The indoor and exterior air temperatures are taken as 20°C and 1°C, respectively. In the analysis and assessment, various metrics (indices or indicators) such as the exergetic efficiency, the exergy flexibility ratio and the sustainability index are utilized. Two heating options (Case 1: condensing boiler and Case 2: air heat pump) are considered for comparison purposes. The total heat loss rate of the family house is determined to be 3770.72 W. The overall energy efficiencies of the studied cases are calculated to be 49.4% for Case 1 and 54.7% for Case 2. The overall exergy efficiencies, the flexibility factor and the sustainability index of Cases 1 and 2 are computed to be around 3.3%, 0.17 and 1.034, respectively.
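
The sustainability index values quoted above are consistent with the relation commonly used in exergy-based sustainability assessments, SI = 1 / (1 - psi), where psi is the overall exergy efficiency; the short check below simply reproduces the reported figure from that relation (it is not the authors' calculation procedure).

```python
# Minimal check of the relation SI = 1 / (1 - psi) between overall exergy
# efficiency (psi) and sustainability index (SI), using the ~3.3% efficiency
# reported in the abstract. Illustrative only; not the authors' full analysis.

def sustainability_index(exergy_efficiency: float) -> float:
    """Sustainability index from the overall exergy efficiency (0 <= psi < 1)."""
    return 1.0 / (1.0 - exergy_efficiency)

psi = 0.033                                   # overall exergy efficiency (about 3.3%)
print(round(sustainability_index(psi), 3))    # prints 1.034, matching the abstract
```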

Keywords: Buildings, exergy, low exergy, sustainability, efficiency, heating, renewable energy.

385 Optimization of Quercus cerris Bark Liquefaction

Authors: Luísa P. Cruz-Lopes, Hugo Costa e Silva, Idalina Domingos, José Ferreira, Luís Teixeira de Lemos, Bruno Esteves

Abstract:

The liquefaction of cork-based tree barks has attracted increasing interest due to its potential for innovation in the lumber and wood industries. In this particular study, the bark of Quercus cerris (Turkish oak) is used due to its appreciable amount of cork tissue, although of inferior quality when compared to the cork provided by other Quercus trees. This study aims to optimize the alkaline-catalyzed liquefaction conditions with regard to several parameters. To better understand the chemical characteristics of Quercus cerris bark, a complete chemical analysis was performed. The liquefaction was carried out in a double-jacketed reactor heated with oil, using glycerol and a mixture of glycerol/ethylene glycol as solvents and potassium hydroxide as a catalyst, and varying the temperature, liquefaction time and granulometry. Due to the low liquefaction efficiency of the first experimental runs, different washing techniques after the filtration step, using methanol and methanol/water, were studied. The chemical analysis showed that Quercus cerris bark is mostly composed of suberin (ca. 30%) and lignin (ca. 24%), as well as hemicelluloses insoluble in hot water (ca. 23%). In the liquefaction stage, the conditions that led to higher yields were the glycerol/ethylene glycol mixture as the solvent and a time and temperature of 120 minutes and 200 ºC, respectively. It is concluded that using a granulometry of <80 mesh leads to better results, although this parameter barely influences the liquefaction efficiency. Regarding the filtration stage, washing the residue with methanol and then distilled water leads to a considerable increase in the final liquefaction percentages, which shows that this procedure is effective at liquefying the suberin and lignocellulosic fractions.

Keywords: Liquefaction, alkaline catalysis, optimization, Quercus cerris bark.

384 Hydrodynamic Simulation of Co-Current and Counter Current of Column Distillation Using Euler Lagrange Approach

Authors: H. Troudi, M. Ghiss, Z. Tourki, M. Ellejmi

Abstract:

Packed columns for liquefied petroleum gas (LPG) separate the liquid mixture of propane and butane into pure components by distillation. The gas and liquid flow inside the columns can be arranged in two ways: co-current and counter-current operation. Heat, mass and species transfer between the phases are the most important factors that influence the choice between these two operations. In this paper, both processes are studied using CFD simulation with the ANSYS Fluent software. Only a 3D half-section of the packed column, with one packed bed, was considered; the packed bed was characterized as a porous medium. The simulations were carried out under transient conditions. A multi-component gas and liquid mixture was used in both processes. We utilized the Euler-Lagrange approach, in which the gas is treated as a continuous phase and the liquid as a group of dispersed particles. Heat and mass transfer were modeled using a multi-component droplet evaporation approach. The results show that the counter-current process performs better than the co-current one, although the limitations of our approach are noted. The comparison gives accurate results for computation times above 2 s, at different gas velocities and at a packed-bed porosity of 0.9.

Keywords: Co-current, counter current, Euler Lagrange model, heat transfer, mass transfer.

383 Reliability Analysis of Press Unit using Vague Set

Authors: S. P. Sharma, Monica Rani

Abstract:

In conventional reliability assessment, the reliability data of system components are treated as crisp values. The collected data contain uncertainties due to errors by humans or machines or from other sources. These uncertainty factors limit the understanding of system component failure because the data are incomplete. In such situations, classical methods need to be generalized to a fuzzy environment for studying and analyzing the systems of interest. Fuzzy set theory has been proposed to handle such vagueness by generalizing the notion of membership in a set. Essentially, in a Fuzzy Set (FS) each element is associated with a point value selected from the unit interval [0, 1], which is termed the grade of membership in the set. A Vague Set (VS), like an Intuitionistic Fuzzy Set (IFS), is a further generalization of an FS. Instead of the point-based membership used in an FS, interval-based membership is used in a VS; interval-based membership is more expressive in capturing the vagueness of data. In the present paper, vague set theory coupled with the conventional Lambda-Tau method is presented for the reliability analysis of repairable systems. The methodology uses Petri nets (PN) to model the system instead of a fault tree because they allow efficient simultaneous generation of minimal cut and path sets. The presented method is illustrated with the press unit of a paper mill.
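
The sketch below illustrates only the interval-valued (vague-set style) membership idea behind the analysis, i.e. how [lower, upper] reliability bounds propagate through series and parallel structures under an independence assumption; it is not the paper's Lambda-Tau/Petri-net procedure, and the component intervals are hypothetical.

```python
# Interval-based (vague-set style) reliability arithmetic: each component has a
# reliability interval [lower, upper] instead of a crisp value. This only shows
# how such intervals propagate through series and parallel structures for
# independent components; the interval values below are hypothetical.

from typing import List, Tuple

Interval = Tuple[float, float]  # (lower bound, upper bound) of component reliability

def series(components: List[Interval]) -> Interval:
    lo, hi = 1.0, 1.0
    for l, h in components:
        lo *= l            # a series system works only if every component works
        hi *= h
    return lo, hi

def parallel(components: List[Interval]) -> Interval:
    q_lo, q_hi = 1.0, 1.0
    for l, h in components:
        q_lo *= 1.0 - l    # product of unreliabilities at the lower bounds
        q_hi *= 1.0 - h    # product of unreliabilities at the upper bounds
    return 1.0 - q_lo, 1.0 - q_hi   # a parallel system fails only if all components fail

feeder = (0.90, 0.95)      # hypothetical press-unit subsystems
press = (0.85, 0.92)
standby = (0.80, 0.90)

print("series  :", series([feeder, press]))
print("parallel:", parallel([press, standby]))
```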

Keywords: Lambda-Tau methodology, Petri nets, repairable system, vague fuzzy set.

382 Development of Piezoelectric Gas Micro Pumps with the PDMS Check Valve Design

Authors: Chiang-Ho Cheng, An-Shik Yang, Hong-Yih Cheng, Ming-Yu Lai

Abstract:

This paper presents the design and fabrication of a novel piezoelectric actuator for a gas micro pump with check valves, offering the advantages of miniature size, light weight and low power consumption. The micro pump is designed with eight major components, namely a stainless steel upper cover layer, a piezoelectric actuator, a stainless steel diaphragm, a PDMS chamber layer, two stainless steel channel layers with two valve seats, a PDMS check valve layer with two cantilever-type check valves and an acrylic substrate. A prototype of the gas micro pump, with a size of 52 mm × 50 mm × 5.0 mm, was fabricated by precision manufacturing. The device is designed to pump gases with self-priming and bubble-tolerant operation by maximizing the stroke volume of the membrane as well as the compression ratio via minimization of the dead volume of the micro pump chamber and channel. The experimental setup allows the flow rate of the micro pump and the displacement of the piezoelectric actuator to be measured simultaneously in real time. The micro pump achieved its highest output performance under a 250 Vpp sinusoidal driving waveform, reaching a maximum pumping rate of 1185 ml/min and a maximum back pressure of 7.14 kPa at driving frequencies of 120 Hz and 50 Hz, respectively.

Keywords: PDMS, Check valve, Micro pump, Piezoelectric.

381 Removal of CO2 and H2S using Aqueous Alkanolamine Solutions

Authors: Zare Aliabad, H., Mirzaei, S.

Abstract:

This work presents a theoretical investigation of the simultaneous absorption of CO2 and H2S into aqueous solutions of MDEA and DEA. In this process, the acid components react with the basic alkanolamine solution via an exothermic, reversible reaction in a gas/liquid absorber. The use of amine solvents for gas sweetening has been investigated using the process simulation programs HYSYS and Aspen. We use the Electrolyte NRTL model, the Amine Package and the Amines (experimental) equation of state. The effects of temperature, circulation rate, amine concentration, packing height and Murphree efficiency on the rate of absorption were studied. When the lean amine flow and concentration increase, CO2 and H2S absorption increase as well. As the inlet amine temperature to the absorber increases, CO2 and H2S penetrate to the upper stages of the absorber and the absorption of acid gases decreases. The CO2 concentration in the clean gas can be greatly influenced by the packing height, whereas for the H2S concentration in the clean gas the packing height plays a minor role. HYSYS cannot estimate the Murphree efficiency correctly, so the same value was applied to all stages in the HYSYS simulations. As the Murphree efficiency improves, the maximum absorber temperature decreases, the reaction zone moves toward the bottom stages of the absorber, and the absorption of acid gases increases.

Keywords: Absorber, DEA, MDEA, Simulation.

380 Stackelberg Security Game for Optimizing Security of Federated Internet of Things Platform Instances

Authors: Violeta Damjanovic-Behrendt

Abstract:

This paper presents an approach for optimal cyber security decisions to protect instances of a federated Internet of Things (IoT) platform in the cloud. The presented solution implements the repeated Stackelberg Security Game (SSG) and a model called the Stochastic Human behaviour model with AttRactiveness and Probability weighting (SHARP). SHARP employs the Subjective Utility Quantal Response (SUQR) for formulating a subjective utility function, which is based on the evaluation of alternative solutions during decision-making. We augment the repeated SSG (including SHARP and SUQR) with a reinforcement learning algorithm called Naïve Q-Learning, which belongs to the category of active and model-free Machine Learning (ML) techniques in which the agent (either the defender or the attacker) attempts to find an optimal security solution. In this way, we combine game-theoretic and ML algorithms for discovering optimal cyber security policies. The proposed security optimization components will be validated in a collaborative cloud platform that is based on the Industrial Internet Reference Architecture (IIRA) and its recently published security model.
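
As a minimal, self-contained illustration of the tabular Q-learning ingredient mentioned above, the sketch below lets a defender learn which of several IoT platform instances to harden against a stochastic attacker; the targets, attack probabilities and payoffs are hypothetical placeholders, not the paper's SSG/SHARP formulation.

```python
import random

# Tabular Q-learning for a toy defender problem: in each round the defender
# chooses which of N IoT platform instances to harden, and a hidden attacker
# preference makes some instances riskier. A single-state (bandit-style)
# setting keeps the sketch short; all values are hypothetical.

N_TARGETS = 4
ATTACK_PROBS = [0.1, 0.2, 0.3, 0.4]      # hypothetical attacker preferences
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1    # learning rate, discount, exploration rate

q = [0.0] * N_TARGETS                    # one Q-value per defensive action

def reward(defended: int) -> float:
    attacked = random.choices(range(N_TARGETS), weights=ATTACK_PROBS)[0]
    return 1.0 if attacked == defended else -1.0    # intercepted vs. missed attack

random.seed(0)
for _ in range(5000):
    if random.random() < EPSILON:                   # epsilon-greedy exploration
        a = random.randrange(N_TARGETS)
    else:
        a = max(range(N_TARGETS), key=lambda i: q[i])
    r = reward(a)
    # Q-learning update: Q(a) <- Q(a) + alpha * (r + gamma * max_a' Q(a') - Q(a))
    q[a] += ALPHA * (r + GAMMA * max(q) - q[a])

print("learned Q-values :", [round(v, 2) for v in q])
print("preferred defense:", max(range(N_TARGETS), key=lambda i: q[i]))
```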

Keywords: Security, internet of things, cloud computing, Stackelberg security game, machine learning, Naïve Q-learning.

379 QSAR Studies of Certain Novel Heterocycles Derived from Bis-1, 2, 4 Triazoles as Anti-Tumor Agents

Authors: Madhusudan Purohit, Stephen Philip, Bharathkumar Inturi

Abstract:

In this paper we report a quantitative structure-activity relationship study of novel bis-triazole derivatives for predicting their activity profile. The full model encompassed a dataset of 46 bis-triazoles. The Tripos Sybyl X 2.0 program was used to conduct the CoMSIA QSAR modeling. The Partial Least-Squares (PLS) method was used for the statistical analysis and to derive a QSAR model based on the field values of the CoMSIA descriptors. The compounds were divided into training and test sets and were evaluated with various CoMSIA parameters to find the best QSAR model. An optimum number of components was first determined by cross-validated regression for the CoMSIA model and then applied in the final analysis. A series of parameters was examined, and the best-fit model was obtained using the donor, partition coefficient and steric parameters. The CoMSIA model demonstrated good statistical results, with a regression coefficient (r2) and a cross-validated coefficient (q2) of 0.575 and 0.830, respectively. The standard error for the predicted model was 0.16322. In the CoMSIA model, the steric descriptors make a marginally larger contribution than the electrostatic descriptors. The finding that the steric descriptor is the largest contributor to the CoMSIA QSAR model is consistent with the observation that more than half of the binding site area is occupied by steric regions.
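
For readers unfamiliar with the PLS/q2 workflow described above, the sketch below shows the generic pattern of choosing the number of components by cross-validation and then fitting the final model; it uses random stand-in data rather than CoMSIA field descriptors, so the dataset size, descriptor count and resulting statistics are purely illustrative.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import LeaveOneOut, cross_val_predict
from sklearn.metrics import r2_score

# Stand-in data: 46 "compounds" with 300 hypothetical field descriptors.
rng = np.random.default_rng(42)
X = rng.normal(size=(46, 300))                                  # CoMSIA-like descriptor matrix
y = X[:, :5] @ rng.normal(size=5) + 0.3 * rng.normal(size=46)   # synthetic activity values

best_q2, best_n = -np.inf, None
for n_comp in range(1, 11):                 # pick the component count by LOO cross-validation
    pls = PLSRegression(n_components=n_comp)
    y_cv = cross_val_predict(pls, X, y, cv=LeaveOneOut()).ravel()
    q2 = r2_score(y, y_cv)                  # cross-validated q^2
    if q2 > best_q2:
        best_q2, best_n = q2, n_comp

final = PLSRegression(n_components=best_n).fit(X, y)
r2 = r2_score(y, final.predict(X).ravel())  # conventional (fitted) r^2
print(f"components={best_n}, q2={best_q2:.3f}, r2={r2:.3f}")
```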

Keywords: 3D QSAR, CoMSIA, Triazoles.

378 The Mechanical and Electrochemical Properties of DC-Electrodeposited Ni-Mn Alloy Coating with Low Internal Stress

Authors: Chun-Ying Lee, Kuan-Hui Cheng, Mei-Wen Wu

Abstract:

The nickel-manganese (Ni-Mn) alloy coating prepared by a DC electrodeposition process in a sulphamate bath was studied. The effects of process parameters, such as current density and electrolyte composition, on the cathodic current efficiency, microstructure, internal stress and mechanical properties were investigated. Because of its crucial role in the electroforming of microelectronic components, the development of a low-internal-stress coating with high leveling power was emphasized. It was found that both the coating's manganese content and the cathodic current efficiency increased with increasing current density. In addition, the internal stress of the deposited coating was compressive at low current densities and became tensile at higher current densities. Moreover, metallographic observation, X-ray diffraction measurement and polarization curve measurement were conducted. It was found that the Ni-Mn coating consisted of nano-sized columnar grains and that the maximum hardness of the coating was associated with a (111) preferred orientation in the microstructure. The grain size was refined as the manganese content of the coating increased, which accordingly raised its hardness and its resistance to annealing softening. In summary, the Ni-Mn coating prepared at a lower current density of 1-2 A/dm2 had low internal stress, high leveling power and better corrosion resistance.

Keywords: DC plating, internal stress, leveling power, Ni-Mn coating.

377 Optimization of Double Wishbone Suspension System with Variable Camber Angle by Hydraulic Mechanism

Authors: Mohammad Iman Mokhlespour Esfahani, Masoud Mosayebi, Mohammad Pourshams, Ahmad Keshavarzi

Abstract:

The accuracy of recent dynamic vehicle simulations has progressed significantly, providing acceptable results not only for passive vehicles but also for active vehicles normally equipped with advanced electronic components. Increasing vehicle safety has recently become an important design consideration, and many efforts have therefore been made to increase vehicle stability, especially in turns. One of the most important is adjusting the camber angle in the vehicle suspension system. Optimal control of the camber angle not only improves vehicle stability but also benefits wheel adhesion on the road, reduces rubber abrasion, and improves acceleration and braking. Since an increase or decrease in the camber angle affects vehicle stability, this paper introduces a suspension mechanism that can adjust the camber angle and is both practical and inexpensive. To this end, a passive double wishbone suspension system with a variable camber angle is introduced, and the variable camber mechanism is designed and analyzed. To study the performance of the designed system, the mechanism is modeled in the Visual Nastran software and a kinematic analysis is presented.

Keywords: Suspension modeling, double wishbone, variable camber, hydraulic mechanism

376 Shock Induced Damage onto Free-Standing Objects in an Earthquake

Authors: Haider AlAbadi, Joe Petrolito, Nelson Lam, Emad Gad

Abstract:

In areas of low to moderate seismicity, many building contents and equipment are not positively fixed to the floor or tied to adjacent walls. Under seismically induced horizontal vibration, such contents and equipment can suffer damage either by overturning or by the impact associated with rocking. This paper focuses on the estimation of the shock imparted to typical contents and equipment due to rocking. A simplified analytical model is outlined that can be used to estimate the maximum acceleration on a rocking object given its basic geometric and mechanical properties. The developed model was validated against experimental results. The experimental results revealed that the maximum shock acceleration can be underestimated if the static stiffness of the materials at the interface between the rocking object and the floor is used rather than the dynamic stiffness. Excellent agreement between the model and the experimental results was found when the dynamic stiffness of the interface material was used; this stiffness was found to be generally much higher than the corresponding static stiffness under the different boundary conditions of the cushion that were investigated. The proposed model can be a beneficial tool for performing a rapid assessment of shock-sensitive components considered for possible seismic rectification.

Keywords: Impact, shock, earthquakes, rocking, building contents, overturning.

375 Fast Wavelet Image Denoising Based on Local Variance and Edge Analysis

Authors: Gaoyong Luo

Abstract:

The approach based on the wavelet transform has been widely used for image denoising due to its multi-resolution nature, its ability to produce high levels of noise reduction and the low level of distortion introduced. However, by removing noise, high-frequency components belonging to edges are also removed, which blurs the signal features. This paper proposes a new method of image noise reduction based on local variance and edge analysis. The analysis is performed by dividing an image into 32 x 32 pixel blocks and transforming the data into the wavelet domain. A fast lifting wavelet spatial-frequency decomposition and reconstruction is developed, which is computationally efficient and minimizes boundary effects. The adaptive thresholding based on local variance estimation and edge strength measurement can effectively reduce image noise while preserving the features of the original image that correspond to object boundaries. Experimental results demonstrate that the method performs well for images contaminated by natural and artificial noise, and that it can be adapted to different classes of images and types of noise. The proposed algorithm offers a potential solution, with parallel computation, for real-time or embedded system applications.
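
A generic illustration of wavelet-domain soft thresholding is sketched below; it uses a single global noise estimate per image rather than the paper's 32 x 32 block-wise local-variance and edge-strength scheme, and the test image is synthetic.

```python
import numpy as np
import pywt

def wavelet_denoise(image: np.ndarray, wavelet: str = "db2", level: int = 2) -> np.ndarray:
    coeffs = pywt.wavedec2(image, wavelet, level=level)
    # Robust noise estimate from the finest diagonal detail band (MAD / 0.6745)
    sigma = np.median(np.abs(coeffs[-1][-1])) / 0.6745
    thr = sigma * np.sqrt(2.0 * np.log(image.size))       # universal threshold
    new_coeffs = [coeffs[0]]                               # keep the approximation band untouched
    for cH, cV, cD in coeffs[1:]:
        new_coeffs.append(tuple(pywt.threshold(c, thr, mode="soft") for c in (cH, cV, cD)))
    return pywt.waverec2(new_coeffs, wavelet)

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    clean = np.zeros((64, 64))
    clean[16:48, 16:48] = 1.0                              # synthetic square "edge" image
    noisy = clean + 0.2 * rng.normal(size=clean.shape)
    denoised = wavelet_denoise(noisy)
    print("noisy RMSE   :", np.sqrt(np.mean((noisy - clean) ** 2)))
    print("denoised RMSE:", np.sqrt(np.mean((denoised - clean) ** 2)))
```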

Keywords: Edge strength, Fast lifting wavelet, Image denoising, Local variance.

374 CRYPTO COPYCAT: A Fashion Centric Blockchain Framework for Eliminating Fashion Infringement

Authors: Magdi Elmessiry, Adel Elmessiry

Abstract:

The fashion industry represents a significant portion of the global gross domestic product; however, it is plagued by cheap imitators that infringe on trademarks, destroying the industry's hard work and investment. While the copycats are eventually found and stopped, the damage has already been done: sales are missed and direct and indirect jobs are lost. The infringer thrives on two main facts: the time it takes to be discovered and the lack of tracking technologies that can help the consumer distinguish genuine products. Blockchain is an emerging technology that provides a distributed, encrypted, immutable and fault-resistant ledger, and it presents a ripe opportunity to resolve the infringement epidemic facing the fashion industry. The significance of the study is that a new approach leveraging state-of-the-art blockchain technology coupled with artificial intelligence is used to create a framework addressing the fashion infringement problem. It shifts the current focus from legal enforcement, which is difficult at best, to consumer awareness, which is far more effective. The framework, Crypto CopyCat, creates an immutable digital asset representing the actual product, empowering the customer with a near-real-time query system. This combination emphasizes the consumer's awareness and appreciation of the product's authenticity, while providing real-time feedback to the producer regarding fake replicas. The main finding of this study is that implementing this approach can delay fake-product penetration of the original product's market, allowing the original product time to take advantage of the market. The resulting shift in fake adoption reduces the copycats' returns, which impedes the copycat market and moves the emphasis to original product innovation.

Keywords: Fashion, infringement, Blockchain, artificial intelligence, textiles supply.

373 In Cognitive Radio the Analysis of Bit-Error-Rate (BER) by using PSO Algorithm

Authors: Shrikrishan Yadav, Akhilesh Saini, Krishna Chandra Roy

Abstract:

The electromagnetic spectrum is a natural resource, and well-organized usage of this limited resource is a necessity for better communication. The present static frequency allocation schemes cannot accommodate the demands of the rapidly increasing number of higher data rate services. Therefore, dynamic usage of the spectrum must be distinguished from static usage to increase the availability of the frequency spectrum. Cognitive radio is not a single piece of apparatus but a technology that can incorporate components spread across a network. It offers great promise for improving system efficiency and spectrum utilization, enabling more effective applications, reducing interference and reducing the complexity of usage for users. A cognitive radio is aware of its environment, internal state and location, and autonomously adjusts its operation to achieve designed objectives. It first senses its spectral environment over a wide frequency band and then adapts its parameters to maximize spectrum efficiency with high performance. This paper focuses on the analysis of the Bit-Error-Rate in cognitive radio by using the Particle Swarm Optimization (PSO) algorithm. The BER is analyzed and interpreted both theoretically and practically, in terms of advantages and drawbacks and of how it affects the efficiency and performance of the communication system.
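
The sketch below shows a generic particle swarm optimization loop applied to a BER-related objective: it tunes two hypothetical transmission parameters (transmit power and bit rate) to minimize a cost combining the BPSK bit-error rate over AWGN with a power penalty. The channel gain, noise density and weights are made-up illustration values, not taken from the paper.

```python
import numpy as np
from scipy.special import erfc

# Hypothetical link constants: channel gain, noise spectral density, power weight.
GAIN, N0, POWER_WEIGHT = 1e-6, 1e-12, 0.02

def cost(params: np.ndarray) -> float:
    power, rate = params
    ebn0 = power * GAIN / (N0 * rate)          # Eb/N0 for the hypothetical link
    ber = 0.5 * erfc(np.sqrt(ebn0))            # BPSK BER over an AWGN channel
    return ber + POWER_WEIGHT * power          # penalize high transmit power

def pso(n_particles=30, n_iter=200, seed=0):
    rng = np.random.default_rng(seed)
    lo, hi = np.array([0.1, 1e4]), np.array([5.0, 1e6])   # bounds: power [W], rate [bit/s]
    x = rng.uniform(lo, hi, size=(n_particles, 2))
    v = np.zeros_like(x)
    pbest, pbest_val = x.copy(), np.array([cost(p) for p in x])
    gbest = pbest[np.argmin(pbest_val)].copy()
    for _ in range(n_iter):
        r1, r2 = rng.random((2, n_particles, 1))
        v = 0.7 * v + 1.5 * r1 * (pbest - x) + 1.5 * r2 * (gbest - x)   # inertia + cognitive + social
        x = np.clip(x + v, lo, hi)
        vals = np.array([cost(p) for p in x])
        improved = vals < pbest_val
        pbest[improved], pbest_val[improved] = x[improved], vals[improved]
        gbest = pbest[np.argmin(pbest_val)].copy()
    return gbest, cost(gbest)

best, best_cost = pso()
print(f"power={best[0]:.2f} W, rate={best[1]:.0f} bit/s, cost={best_cost:.4f}")
```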

Keywords: BER, Cognitive Radio, Environmental Parameters, PSO, Radio spectrum, Transmission Parameters

372 The Role of Velocity Map Quality in Estimation of Intravascular Pressure Distribution

Authors: Ali Pashaee, Parisa Shooshtari, Gholamreza Atae, Nasser Fatouraee

Abstract:

Phase-contrast MR imaging methods are widely used for measuring blood flow velocity components. Other tools, such as CT and ultrasound, are also available for velocity map acquisition in intravascular studies. These data are used to derive flow characteristics, and several clinical applications use the pressure distribution in the diagnosis of intravascular disorders such as vascular stenosis. In this paper, an approach to estimating the intravascular pressure field from the velocity field obtained from flow images is proposed. The method uses an algorithm to solve the nonlinear Navier-Stokes equations, assuming blood to be an incompressible Newtonian fluid. Flow images usually suffer from a lack of spatial resolution; our aim is therefore to consider the effect of spatial resolution on the pressure distribution estimated with this method. To achieve this, the velocity map of a numerical phantom is derived at six different spatial resolutions. To determine the effects of vascular stenoses on the pressure distribution, a stenotic phantom geometry is considered. A comparison between the pressure distribution obtained from the phantom and the pressure resulting from the algorithm is presented. We also compare the effects of collocated and staggered computational grids on the pressure distribution resulting from this algorithm.
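
For context, the relation that makes such an estimate possible is the momentum equation for an incompressible Newtonian fluid rearranged for the pressure gradient (a standard form, not the paper's specific discretization):

$$\nabla p = -\rho\left(\frac{\partial \mathbf{u}}{\partial t} + (\mathbf{u}\cdot\nabla)\mathbf{u}\right) + \mu\nabla^{2}\mathbf{u},$$

so that once the velocity field $\mathbf{u}$ is known from the images, the right-hand side can be evaluated on the computational grid and integrated (or a pressure Poisson equation solved) to recover the pressure field up to a reference constant; errors in $\mathbf{u}$ from limited spatial resolution propagate directly into this estimate.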

Keywords: Flow imaging, pressure distribution estimation, phantom, resolution.

371 Soil Moisture Control System: A Product Development Approach

Authors: Swapneel U. Naphade, Dushyant A. Patil, Satyabodh M. Kulkarni

Abstract:

In this work, we propose the concept and geometrical design of a soil moisture control system (SMCS) module by following a product development approach, with the aim of developing an inexpensive, easy-to-use and quick-to-install product targeted at agriculture practitioners. The module delivers water to agricultural land efficiently by sensing the soil moisture and activating the delivery valve. We start by identifying the general needs of the potential customer. Then, based on customer needs, we establish product specifications and identify important measurable quantities for evaluating our product. Keeping the specifications in mind, we develop various conceptual solutions for the product and select the best solution through concept screening and selection matrices. We then develop the product architecture by integrating the systems into the final product. Finally, the geometric design is carried out using human factors engineering concepts such as heuristic analysis, task analysis and human error reduction analysis. The human factors analysis reveals the remedies that should be applied when designing the geometry and software components of the product. We find that, for a power-type grip, a grip diameter of 35 mm is ideal in terms of comfort and applied force.
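
A minimal sketch of the sense-and-actuate loop implied by the SMCS concept is shown below: read the soil-moisture value, open the delivery valve below a lower threshold and close it above an upper threshold (hysteresis avoids valve chatter). The sensor read, valve driver and threshold values are placeholders, not the actual hardware interface of the proposed module.

```python
import random
import time

LOW, HIGH = 0.25, 0.40          # hypothetical volumetric moisture thresholds

def read_moisture() -> float:
    return random.uniform(0.1, 0.6)                  # placeholder for an ADC/sensor reading

def set_valve(opened: bool) -> None:
    print("valve", "OPEN" if opened else "CLOSED")   # placeholder for a GPIO/relay driver

def control_loop(cycles: int = 5, period_s: float = 1.0) -> None:
    valve_open = False
    for _ in range(cycles):
        m = read_moisture()
        if m < LOW and not valve_open:               # too dry: start watering
            valve_open = True
            set_valve(True)
        elif m > HIGH and valve_open:                # wet enough: stop watering
            valve_open = False
            set_valve(False)
        print(f"moisture={m:.2f}, valve_open={valve_open}")
        time.sleep(period_s)

if __name__ == "__main__":
    random.seed(3)
    control_loop()
```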

Keywords: Agriculture, human factors, product design, soil moisture control.

370 Fatigue Strength of S275 Mild Steel under Cyclic Loading

Authors: T. Aldeeb, M. Abduelmula

Abstract:

This study examines the fatigue life of S275 mild steel at room temperature. Mechanical components can fail under cyclic loading over a period of time, a phenomenon known as fatigue. In order to prevent fatigue-induced failures, material behavior should be investigated to determine the endurance limit of the material for safe design and infinite life, thereby reducing economic costs and the loss of human lives. The fatigue behavior of S275 mild steel was therefore investigated. Specimens were prepared in accordance with ASTM E3-11, and fatigue tests were conducted in accordance with ASTM E466-07 on a smooth plate with a continuous radius between its ends (an hourglass-shaped plate). Fatigue testing was performed with constant load amplitude at a constant frequency of 4 Hz and a fully reversed load ratio (R = -1). The fracture surfaces of the specimens were investigated using a Scanning Electron Microscope (SEM). The experimental results were compared with the results of a Finite Element Analysis (FEA) using simulation software. The experimental results indicated that the fatigue endurance limit of S275 mild steel was 195.47 MPa.

Keywords: Fatigue life, fatigue strength, finite element analysis, S275 mild steel, scanning electron microscope.

369 Automatic Visualization Pipeline Formation for Medical Datasets on Grid Computing Environment

Authors: Aboamama Atahar Ahmed, Muhammad Shafie Abd Latiff, Kamalrulnizam Abu Bakar, Zainul Ahmad Rajion

Abstract:

Remote visualization of large datasets often takes the form of remote viewing and zooming of stored static images. However, the continuous increase in dataset size and in the cost of visualization operations leads to insufficient performance on traditional desktop computers. Additionally, visualization techniques such as isosurface extraction depend on the available resources of the running machine and on the dataset size. Moreover, the continuous demand for powerful computing capacity and the continuous increase in dataset size result in an urgent need for a grid computing infrastructure. However, some issues arise in current grids, such as resource availability at client machines, which is insufficient to process large datasets. On top of that, different output devices and different network bandwidths between the visualization pipeline components often result in output suitable for one machine but not for another. In this paper we investigate how grid services can be used to support remote visualization of large datasets and to break the constraint of physical co-location of resources by applying grid computing technologies. We present our grid-enabled architecture for remote interactive visualization of large medical datasets (circa 5 million polygons) on clients with modest resources.

Keywords: Visualization, Grid computing, Medical datasets, visualization techniques, thin clients, Globus toolkit, VTK.

368 Creation of Greater Mekong Subregion Regional Competitiveness through Cluster Mapping

Authors: Danuvasin Charoen

Abstract:

This research investigates cluster development in the area called the Greater Mekong Subregion (GMS), which consists of Thailand, the People’s Republic of China (PRC), the Yunnan Province and Guangxi Zhuang Autonomous Region, Myanmar, the Lao People’s Democratic Republic (Lao PDR), Cambodia, and Vietnam. The study utilized Porter’s competitiveness theory and the cluster mapping approach to analyze the competitiveness of the region. The data collection consists of interviews, focus groups, and the analysis of secondary data. The findings identify some evidence of cluster development in the GMS; however, there is no clear indication of collaboration among the components in the clusters. GMS clusters tend to be stand-alone. The clusters in Vietnam, Lao PDR, Myanmar, and Cambodia tend to be labor intensive, whereas the clusters in Thailand and the PRC (Yunnan) have the potential to successfully develop into innovative clusters. The collaboration and integration among the clusters in the GMS area are promising, though it could take a long time. The most likely relationship between the GMS countries could be, for example, suppliers of the low-end, labor-intensive products will be located in the low income countries such as Myanmar, Lao PDR, and Cambodia, and these countries will be providing input materials for innovative clusters in the middle income countries such as Thailand and the PRC.

Keywords: Greater Mekong Subregion, competitiveness, cluster, development.

367 Factors Affecting M-Government Deployment and Adoption

Authors: Saif Obaid Alkaabi, Nabil Ayad

Abstract:

Governments constantly seek to offer faster, more secure, efficient and effective services to their citizens. Recent changes and developments in communication services and technologies, mainly due to the Internet, have led to immense improvements in the way governments of advanced countries carry out their internal operations. Therefore, advances in e-government services have been broadly adopted and used in various developed countries, as well as being adapted to developing countries. The implementation of these advances depends on the utilization of the most innovative data techniques, mainly in web-based applications, to enhance the main functions of governments. These functions, in turn, have spread to mobile and wireless techniques, generating a new advanced direction called m-government. This paper discusses a selection of available m-government applications and several business modules and frameworks in various fields. In practice, m-government models, techniques and methods have become an improved version of e-government. M-government offers the potential for applications that work better, providing citizens with services that utilize mobile communication and data models incorporating several government entities. Developing countries can benefit greatly from this innovation because a large percentage of their population is young and can adapt to new technology, and because mobile computing devices are more affordable. The use of mobile transaction models encourages effective participation through the use of mobile portals by businesses, various organizations and individual citizens. Although the application of m-government has great potential, it also has major limitations. These include the implementation of wireless networks and related communications, the encouragement of mobile diffusion, the administration of complicated security tasks (including the ability to offer privacy of information), and the management of the legal issues concerning mobile applications and the utilization of services.

Keywords: E-government, m-government, system dependability, system security, trust.

366 Error Factors in Vertical Positioning System

Authors: Hyun-Gwang Cho, Wan-Seok Yang, Su-Jin Kim, Jeong-Seok Oh, Chun-Hong Park

Abstract:

Machine tool capabilities improved remarkably during the 20th century. The precision of machine tools is directly related to the precision of the products they produce, so accurate machining is always a subject of interest. Many elements determine the precision of a machine, such as the guides, motors, structure and control system. In this paper we focus on the phenomenon that a vertical movement system has worse precision than a horizontal movement system, even when both are made up of the same components. The vertical movement system therefore needs to be studied differently from the horizontal movement system in order to improve its precision. The vertical movement system carries a load along its direction of travel, which makes it weaker in precision than the horizontal one. Some machines have a mechanical, hydraulic or pneumatic counterbalance to compensate for the weight of the machine head, and there are several types of weight compensation: the counterbalance can push the machine head directly, or a chain or wire rope can transfer the compensating force from the counterbalance to the machine head. Depending on the type of compensation, errors can arise from friction, from hydraulic pressure errors or from pressure control errors. In addition, depending on what is used to transfer the compensating force, a transfer error of the compensating force can occur.

Keywords: Chain chordal action, counter balance, setup error, vertical positioning system.

365 Thermographic Tests of Curved GFRP Structures with Delaminations: Numerical Modelling vs. Experimental Validation

Authors: P. D. Pastuszak

Abstract:

The present work is devoted to thermographic studies of curved composite panels (unidirectional GFRP) with subsurface defects. Various artificial defects, created by inserting a PTFE strip between individual layers of the laminate during the manufacturing stage, are studied. The analysis is conducted using both the finite element method and experiments. The ANSYS package is used to simulate transient heat transfer in a 3D model with embedded defects of various sizes. Pulsed thermography combined with an optical excitation source provides good results for flat surfaces. However, composite structures are mostly used in complex components, e.g., pipes, corners and stiffeners. A local decrease of mechanical properties in these regions can significantly reduce the strength of the entire structure. The application of active thermography procedures to defect detection and evaluation in this type of element seems more appropriate than other NDT techniques. Nevertheless, there are various uncertainties connected with the correct interpretation of the acquired data. In this paper, important factors concerning infrared thermography measurements of curved surfaces in the form of cylindrical panels are considered. In addition, temperature effects on the surface resulting from the complex geometry and from embedded and real defects are also presented.

Keywords: Active thermography, finite element analysis, composite, curved structures, defects.

364 The Development of Smart School Condition Assessment Based on Condition Survey Protocol (CSP) 1 Matrix: A Literature Review

Authors: N. Hamzah, M. Mahli, A. I. Che-Ani, M. M. Tahir, N. A. G. Abdullah, N. M. Tawil

Abstract:

Building inspection is one of the key components of building maintenance. The primary purpose of performing a building inspection is to evaluate the building's condition. Without inspection, it is difficult to determine a built asset's current condition, so failure to inspect can contribute to the asset's future failure. Traditionally, a longhand survey description has been widely used for property condition reports. Surveys that employ ratings instead of descriptions are gaining wide acceptance in the industry because they cater to the need for numerical analysis output. These kinds of surveys are also in keeping with the new RICS HomeBuyer Report 2009. In this paper, we propose a new assessment method, derived from current rating systems, for assessing the condition of smart school buildings specifically and for rating the seriousness of each defect identified. These two assessment criteria are then multiplied to find the building's score, which we call the Condition Survey Protocol (CSP) 1 Matrix. Instead of a longhand description of a building's defects, this matrix requires concise explanations of the defects identified, thus saving on-site time during a smart school building inspection. The full score is used to give the building an overall rating: Good, Fair or Dilapidated.

Keywords: Assessment matrix, building condition survey, rating system, smart school and survey protocol.

363 Addressing Oral Sensory Issues and Possible Remediation in Children with Autism Spectrum Disorders: Illustrated with a Case Study

Authors: A. K. Aswathy, Asha Manoharan, Arya Manoharan

Abstract:

The purposes of this study are to define the nature of oral sensory issues in children with autism spectrum disorder (ASD), to identify important components of the assessment and treatment of these issues specific to this population, and to delineate specific therapeutic techniques designed to improve assessment and treatment within therapeutic settings. A literature review and a case example are used to define the predominant nature of the oral sensory issues experienced by some children on the autism spectrum. Characteristics of this complex disorder that can have an impact on feeding skill and behavior are also identified. These factors are then integrated to create assessment and intervention techniques that can be used in conjunction with traditional feeding approaches to facilitate improvements in eating as well as to reduce the oral apraxic component in this unique population. The complex nature of ASD and its many influences on feeding skills and behavior create the need for modification of both assessment and treatment approaches. Additional research is needed to create therapeutic protocols that can be used by speech-language pathologists to effectively assess and treat the feeding and oro-motor apraxic difficulties commonly encountered in children with ASD.

Keywords: Autism, feeding, intervention, oral sensory issues, oral apraxia.

362 RTCoord: A Methodology to Design WSAN Applications

Authors: J. Barbarán, M. Díaz, I. Esteve, D. Garrido, L. Llopis, B. Rubio

Abstract:

Wireless Sensor and Actor Networks (WSANs) constitute an emerging and pervasive technology that is attracting increasing interest in the research community for a wide range of applications. WSANs have two important requirements: coordination interactions and real-time communication to perform correct and timely actions. This paper introduces a methodology to facilitate the task of the application programmer, focusing on the coordination and real-time requirements of WSANs. The proposed methodology uses a real-time component model, UM-RTCOM, which supports the design and implementation of WSAN applications using the component-oriented paradigm. This allows us to develop software components offering features such as reusability and adaptability, which are very suitable for WSANs since they are highly dynamic environments with rapidly changing conditions. In addition, a high-level coordination model based on tuple channels (TC-WSAN) is integrated into the methodology through a component-based specification of this model in UM-RTCOM; this allows us to satisfy both sensor-actor and actor-actor coordination requirements in WSANs. Finally, we present the design and implementation of an application that shows how the methodology can easily be used to develop WSAN applications.

Keywords: Sensor networks, real time and embedded systems.

361 Learning to Recognize Faces by Local Feature Design and Selection

Authors: Yanwei Pang, Lei Zhang, Zhengkai Liu

Abstract:

Studies in neuroscience suggest that both global and local feature information are crucial for the perception and recognition of faces. It is widely believed that local features are less sensitive to variations caused by factors such as illumination and expression. In this paper, we focus on designing and learning local features for face recognition. We design three types of local features: the semi-global feature, the local patch feature and the tangent shape feature. The semi-global feature is designed to take advantage of global-like information while avoiding suppressing the AdaBoost algorithm when boosting weak classifiers built from small local patches. The local patch feature targets the automatic selection of discriminative features, and thus differs from traditional approaches, in which local patches are usually selected manually to cover the salient facial components. A shape feature is also considered for frontal-view face recognition. These features are selected and combined under a boosting framework with a cascade structure. The experimental results demonstrate that the proposed approach outperforms the standard eigenface and Bayesian methods. Moreover, the selected local features and the observations made in the experiments are enlightening for research on local feature design for face recognition.
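
The sketch below illustrates the general idea of boosting-based selection of local patch features: AdaBoost with decision stumps is trained on per-patch mean intensities, and its feature importances indicate which patch is discriminative. The 16 x 16 synthetic "face" images and 4 x 4 patch-mean descriptors are stand-ins, not the paper's semi-global, local patch or tangent shape features.

```python
import numpy as np
from sklearn.ensemble import AdaBoostClassifier

# Synthetic data: the two classes differ only inside one 4x4 patch, so boosting
# is expected to concentrate its feature importance on that patch.
rng = np.random.default_rng(0)
n, size, patch = 400, 16, 4
images = rng.normal(size=(n, size, size))
labels = rng.integers(0, 2, n)
images[labels == 1, 4:8, 4:8] += 1.0          # class difference lives in one local patch

def patch_features(imgs: np.ndarray) -> np.ndarray:
    feats = []
    for r in range(0, size, patch):
        for c in range(0, size, patch):
            feats.append(imgs[:, r:r + patch, c:c + patch].mean(axis=(1, 2)))
    return np.stack(feats, axis=1)            # one mean value per 4x4 patch (16 features)

X = patch_features(images)
clf = AdaBoostClassifier(n_estimators=50, random_state=0).fit(X[:300], labels[:300])
print("test accuracy          :", clf.score(X[300:], labels[300:]))
print("most selected patch idx:", int(np.argmax(clf.feature_importances_)))
```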

Keywords: Face recognition, local feature, AdaBoost, subspace analysis.

360 Reversible, Embedded and Highly Scalable Image Compression System

Authors: Federico Pérez González, Iñaki Goirizelaia Ordorika, Pedro Iriondo Bengoa

Abstract:

In this work a new method for low-complexity image coding is presented that permits different settings and great scalability in the generation of the final bit stream. This coding presents a continuous-tone still image compression system that combines lossy and lossless compression making use of finite-arithmetic reversible transforms. Both the color-space transformation and the wavelet transformation are reversible. The transformed coefficients are coded by means of a coding system based on a subdivision into smaller components (CFDS), similar to bit-importance codification. The subcomponents so obtained are reordered by means of a highly configurable alignment system that, depending on the application, makes it possible to reconfigure the elements of the image and obtain different importance levels from which the bit stream will be generated. The subcomponents of each importance level are coded using a variable-length entropy coding system (VBLm) that permits the generation of an embedded bit stream. This bit stream itself constitutes a coded compressed still image. However, the use of a packing system on the bit stream after the VBLm allows the construction of a final, highly scalable bit stream from a basic image level and one or several improvement levels.

Keywords: Image compression, wavelet transform, highly scalable, reversible transform, embedded, subcomponents.
