Search results for: parametric function
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 2471

281 Significance of Bike-Frame Geometric Factors for Cycling Efficiency and Muscle Activation

Authors: Luen Chow Chan

Abstract:

With the advocacy of green transportation and green traveling, cycling has become increasingly popular. Physiology and bike design are key factors influencing cycling efficiency. This study therefore aimed to investigate the significance of bike-frame geometric factors on cycling efficiency and muscle activation for different body sizes of non-professional Asian male cyclists. Participants representing various body sizes, as measured by leg and back lengths, carried out cycling tests using a tailor-assembled road bike with different ergonomic design configurations, including seat-height adjustments (i.e., 96%, 100%, and 104% of trochanteric height) and bike frame sizes (i.e., small and medium frames), over a distance of 1 km. A power meter and a self-developed adaptable surface electromyography (sEMG) system were used to measure the average pedaling power and cadence generated and the muscle activation, respectively. The results showed that changing the seat height was far more significant than the body and bike frame sizes. The sEMG data clearly provided a better understanding of muscle activation as a function of seat height. The interpretation of this study is therefore that the major bike ergonomic design factor dominating the cycling efficiency of Asian participants with different body sizes was the seat height.

Keywords: Bike frame sizes, cadence rate, pedaling power, seat height.

280 A Review on Medical Image Registration Techniques

Authors: Shadrack Mambo, Karim Djouani, Yskandar Hamam, Barend van Wyk, Patrick Siarry

Abstract:

This paper discusses current trends in medical image registration techniques and addresses the need to provide a solid theoretical foundation for research endeavours in this field. A methodological analysis and synthesis of the quality literature was carried out, providing a platform for developing a sound foundation for research, which is crucial for understanding the existing levels of knowledge. Research on medical image registration techniques assists clinical and medical practitioners in the diagnosis of tumours and lesions in anatomical organs, thereby enabling fast and accurate curative treatment of patients. The aim of this paper is therefore to enhance the scientific community's understanding of the current status of research in medical image registration techniques and to communicate the contribution of this research to the field of image processing. The gaps identified in current techniques can be closed by the use of artificial neural networks, which form learning systems designed to minimise an error function. The paper also suggests several areas of future research in image registration.

Keywords: Image registration techniques, medical images, neural networks, optimisation, transformation.

279 Obtaining High-Dimensional Configuration Space for Robotic Systems Operating in a Common Environment

Authors: U. Yerlikaya, R. T. Balkan

Abstract:

In this research, a method is developed to obtain a high-dimensional configuration space for path planning problems. In typical cases, path planning problems are solved directly in the three-dimensional (3-D) workspace. However, this approach is inefficient for robots with various geometrical and mechanical restrictions. To overcome these difficulties, path planning may be formalized and solved in a new space called the configuration space. The number of dimensions of the configuration space equals the number of degrees of freedom of the system of interest. The method can be applied in two ways. In the first, the point clouds of all the bodies of the system and their interactions are used. The second is performed using the clearance function of simulation software, in which the minimum distances between the surfaces of bodies are measured simultaneously. A double-turret system is considered in this study, and its 4-D configuration space is obtained in both ways. The difference between the two methods is around 1%, depending on the density of the point cloud, and this disparity steadily decreases as the point cloud density increases. At the end of the study, in order to verify the 4-D configuration space obtained, the 4-D path planning problem was realized as 2-D + 2-D and a sample path planning was carried out using the A* algorithm. The accuracy of the configuration space was then verified using the obtained paths on the simulation model of the double-turret system.
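
As a rough illustration of the grid-based search used in the verification step, the following Python sketch runs the A* algorithm on a small 2-D occupancy grid standing in for one 2-D slice of the configuration space; the grid, unit step costs and Manhattan heuristic are illustrative assumptions, not the authors' implementation.

import heapq

def astar(grid, start, goal):
    # A* search on a 2-D occupancy grid; grid[r][c] == 1 marks a blocked cell.
    rows, cols = len(grid), len(grid[0])
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])   # Manhattan heuristic
    open_heap = [(h(start), 0, start)]
    parent, g_cost, closed = {start: None}, {start: 0}, set()
    while open_heap:
        _, g, node = heapq.heappop(open_heap)
        if node in closed:
            continue
        closed.add(node)
        if node == goal:                       # reconstruct the path back to the start
            path = [node]
            while parent[path[-1]] is not None:
                path.append(parent[path[-1]])
            return path[::-1]
        r, c = node
        for nb in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nb[0] < rows and 0 <= nb[1] < cols and grid[nb[0]][nb[1]] == 0:
                new_g = g + 1
                if new_g < g_cost.get(nb, float("inf")):
                    g_cost[nb], parent[nb] = new_g, node
                    heapq.heappush(open_heap, (new_g + h(nb), new_g, nb))
    return None                                # no collision-free path exists

grid = [[0, 0, 0, 0],
        [1, 1, 0, 1],
        [0, 0, 0, 0],
        [0, 1, 1, 0]]
print(astar(grid, (0, 0), (3, 3)))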

Keywords: A* Algorithm, autonomous turrets, high-dimensional C-Space, manifold C-Space, point clouds.

278 Indirect Solar Desalination: Value Engineering and Cost Benefit Analysis

Authors: Grace Rachid, Mutasem El-Fadel, Mahmoud Al-Hindi, Ibrahim Jamali, Daniel Abdel Nour

Abstract:

This study examines the feasibility of indirect solar desalination in oil-producing countries in the Middle East and North Africa (MENA) region. It relies on value engineering (VE) and cost-benefit analysis with sensitivity analyses to identify optimal coupling configurations of desalination and solar energy technologies. A comparative return on investment was assessed as a function of water costs for varied plant capacities (25,000 to 75,000 m3/day), project lifetimes (15 to 25 years), and discount rates (5 to 15%), taking into consideration water and energy subsidies, land cost, as well as environmental externalities in the form of carbon credit related to greenhouse gas (GHG) emissions reduction. The results showed reverse osmosis (RO) coupled with photovoltaic technologies (PVs) to be the most promising configuration, robust across different Brent oil prices, discount rates, and project lifetimes. Analysis of environmental externalities and subsidies revealed that a 16% reduction in the existing subsidy on water tariffs would ensure economic viability. Additionally, while land costs affect investment attractiveness, the viability of RO coupled with PV remains possible for a land purchase cost < $80/m2 or a lease rate < $1/m2/yr. Beyond those rates, further subsidy lifting is required.
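
As a rough sketch of the kind of discounted cash-flow comparison described above, the following computes a net present value and a simple levelized water cost for one plant configuration; all capital, operating and price figures are placeholder assumptions for illustration, not values from the study.

def npv(rate, cashflows):
    # Net present value of yearly cash flows, year 0 first.
    return sum(cf / (1.0 + rate) ** t for t, cf in enumerate(cashflows))

capacity_m3_day = 50_000      # within the 25,000-75,000 m3/day range studied
lifetime_years = 20
discount_rate = 0.10
capex = 50e6                  # placeholder capital cost, USD
opex_per_m3 = 0.55            # placeholder operating cost, USD/m3
water_price = 0.90            # placeholder water tariff, USD/m3

annual_m3 = capacity_m3_day * 365
net_annual = (water_price - opex_per_m3) * annual_m3
project_npv = npv(discount_rate, [-capex] + [net_annual] * lifetime_years)

# Levelized water cost: discounted lifetime costs divided by discounted lifetime production
disc_costs = capex + npv(discount_rate, [0] + [opex_per_m3 * annual_m3] * lifetime_years)
disc_water = npv(discount_rate, [0] + [annual_m3] * lifetime_years)
print(f"NPV = {project_npv / 1e6:.1f} MUSD, levelized water cost = {disc_costs / disc_water:.3f} USD/m3")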

Keywords: Solar energy, desalination, value engineering, CBA, carbon credit, subsidies.

277 An AI-Based Dynamical Resource Allocation Calculation Algorithm for Unmanned Aerial Vehicle

Authors: Zhou Luchen, Wu Yubing, Burra Venkata Durga Kumar

Abstract:

As networks become larger and more complex, the density of user devices is also increasing. Unmanned Aerial Vehicle (UAV) networks are able to collect and transform data in an efficient way by using software-defined networking (SDN) technology. This paper proposes a three-layer, distributed and dynamic cluster architecture that manages UAVs through an AI-based resource allocation calculation algorithm, addressing the network overloading problem. By separating the services of each UAV, the UAV hierarchical cluster system performs the main function of reducing the network load and transferring user requests, with three sub-tasks: data collection, communication channel organization, and data relaying. In each cluster, a head-node and a vice-head-node UAV are selected considering the CPU, RAM, and ROM memory of the devices, battery charge, and capacity. The vice head node acts as a backup that stores all the data held by the head node. The k-means clustering algorithm is used to detect high-load regions and form the layered UAV clusters. The whole process of detecting high-load areas, forming and selecting UAV clusters, and moving the selected UAV cluster to that area is proposed as the traffic offloading algorithm.
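
A minimal sketch of how k-means can flag high-load regions from user-device coordinates is shown below; the synthetic device positions, the cluster count and the overload threshold are illustrative assumptions, not parameters from the paper.

import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
# Synthetic user-device positions: two dense hotspots plus scattered background traffic
hotspot_a = rng.normal(loc=[2.0, 2.0], scale=0.3, size=(300, 2))
hotspot_b = rng.normal(loc=[8.0, 5.0], scale=0.4, size=(250, 2))
background = rng.uniform(low=0.0, high=10.0, size=(100, 2))
devices = np.vstack([hotspot_a, hotspot_b, background])

kmeans = KMeans(n_clusters=4, n_init=10, random_state=0).fit(devices)
counts = np.bincount(kmeans.labels_, minlength=4)

overload_threshold = 150   # assumed number of devices one UAV cluster can serve
for k, (centre, load) in enumerate(zip(kmeans.cluster_centers_, counts)):
    action = "high load -> dispatch UAV cluster" if load > overload_threshold else "ok"
    print(f"region {k}: centre=({centre[0]:.1f}, {centre[1]:.1f}), devices={load}, {action}")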

Keywords: k-means, resource allocation, SDN, UAV network, unmanned aerial vehicles.

276 Integrating Fast Karnaugh Map and Modular Neural Networks for Simplification and Realization of Complex Boolean Functions

Authors: Hazem M. El-Bakry

Abstract:

In this paper, a new fast simplification method is presented. The method realizes the Karnaugh map with a large number of variables. In order to accelerate its operation, a new approach for fast detection of groups of ones is presented. This approach is implemented in the frequency domain: the search operation relies on performing cross correlation in the frequency domain rather than in the time domain. It is proved mathematically and practically that the number of computation steps required by the presented method is less than that needed by conventional cross correlation. Simulation results using MATLAB confirm the theoretical computations. Furthermore, a powerful solution for the realization of complex functions is given. The simplified functions are implemented by using a new design for neural networks. Neural networks are used because they are fault tolerant and, as a result, can recognize signals even with noise or distortion, which is very useful for logic functions used in data and computer communications. Moreover, the implemented functions are realized with a minimum number of components by using modular neural nets (MNNs) that divide the input space into several homogeneous regions. The approach is applied to implement the XOR function, 16 logic functions on the one-bit level, and a 2-bit digital multiplier. Compared to previous non-modular designs, a clear reduction in the order of computations and in hardware requirements is achieved.
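
The frequency-domain search referred to above can be illustrated, in a much simplified one-dimensional form that is not the author's actual formulation, by detecting runs of ones in a Boolean vector through FFT-based cross correlation:

import numpy as np

minterms = np.array([0, 1, 1, 1, 1, 0, 0, 1, 1, 1, 1, 0, 0, 0, 1, 0], dtype=float)
pattern = np.ones(4)                     # looking for groups of four adjacent ones

# Direct (time-domain) cross correlation
direct = np.correlate(minterms, pattern, mode="valid")

# Frequency-domain cross correlation via the FFT, zero-padded to avoid circular wrap-around
n = len(minterms) + len(pattern) - 1
full = np.fft.irfft(np.fft.rfft(minterms, n) * np.conj(np.fft.rfft(pattern, n)), n)
freq = full[: len(minterms) - len(pattern) + 1]

print(np.allclose(direct, freq))                    # True: both methods agree
print(np.flatnonzero(freq > len(pattern) - 0.5))    # start positions of the groups of ones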

Keywords: Boolean functions, simplification, Karnaugh map, implementation of logic functions, modular neural networks.

275 Optimization of Reaction Rate Parameters in Modeling of Heavy Paraffins Dehydrogenation

Authors: Leila Vafajoo, Farhad Khorasheh, Mehrnoosh Hamzezadeh Nakhjavani, Moslem Fattahi

Abstract:

In the present study, a procedure was developed to determine the optimum reaction rate constants in generalized Arrhenius form, optimized through the Nelder-Mead method. For this purpose, a comprehensive mathematical model of a fixed bed reactor for the dehydrogenation of heavy paraffins over a Pt–Sn/Al2O3 catalyst was developed. Utilizing appropriate kinetic rate expressions for the main dehydrogenation reaction as well as side reactions and catalyst deactivation, a detailed model for the radial flow reactor was obtained. The reactor model is composed of a set of partial differential equations (PDE), ordinary differential equations (ODE) and algebraic equations, all of which were solved numerically to determine the variations in the components' concentrations, in terms of mole percent, as a function of time and reactor radius. It was demonstrated that the most significant variations were observed at the entrance of the bed, and the initial olefin production obtained was rather high. The method utilized a direct-search optimization algorithm along with the numerical solution of the governing differential equations. Its usefulness and validity were demonstrated by comparing the kinetic constants predicted by the proposed method with a series of experimental values reported in the literature for different systems.
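
A toy sketch of the optimization step is given below: Arrhenius parameters are fitted to synthetic rate-constant data with the Nelder-Mead simplex from SciPy. The "true" A and Ea values are made up, and the objective is a simple least-squares mismatch rather than the full reactor model used in the study.

import numpy as np
from scipy.optimize import minimize

R = 8.314  # J/(mol K)

def arrhenius(params, T):
    log_A, Ea_kJ = params                     # work with log(A) and Ea in kJ/mol for scaling
    return np.exp(log_A - Ea_kJ * 1000.0 / (R * T))

# Synthetic "measured" rate constants generated from assumed parameters A = 1e7 1/s, Ea = 90 kJ/mol
T_data = np.array([700.0, 750.0, 800.0, 850.0, 900.0])
k_true = 1e7 * np.exp(-90_000.0 / (R * T_data))
k_meas = k_true * (1.0 + 0.03 * np.random.default_rng(1).standard_normal(T_data.size))

def objective(params):
    # Least-squares mismatch in log space between the model and the "measured" constants
    return np.sum((np.log(arrhenius(params, T_data)) - np.log(k_meas)) ** 2)

result = minimize(objective, x0=[np.log(1e5), 50.0], method="Nelder-Mead",
                  options={"xatol": 1e-9, "fatol": 1e-12, "maxiter": 5000})
log_A_fit, Ea_fit = result.x
print(f"A = {np.exp(log_A_fit):.3e} 1/s, Ea = {Ea_fit:.1f} kJ/mol")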

Keywords: Dehydrogenation, Pt-Sn/Al2O3 Catalyst, Modeling, Nelder-Mead, Optimization

274 Modelling Hydrological Time Series Using Wakeby Distribution

Authors: Ilaria Lucrezia Amerise

Abstract:

The statistical modelling of precipitation data for a given portion of territory is fundamental for the monitoring of climatic conditions and for Hydrogeological Management Plans (HMP). This modelling is rendered particularly complex by the changes taking place in the frequency and intensity of precipitation, presumably attributable to global climate change. This paper applies the Wakeby distribution (with five parameters) as a theoretical reference model. The number and the quality of the parameters indicate that this distribution may be an appropriate choice for the interpolation of hydrological variables; moreover, the Wakeby is particularly suitable for describing phenomena producing heavy tails. The proposed estimation methods for determining the values of the Wakeby parameters are the same as those used for density functions with heavy tails. The commonly used procedure is the classic method of moments weighted with probabilities (probability weighted moments, PWM), although this has often shown difficulty of convergence, or rather convergence to a configuration of inappropriate parameters. In this paper, we analyze the problem of likelihood estimation for a random variable expressed through its quantile function. The method of maximum likelihood is, in this case, more demanding than in more usual estimation settings. The motivation lies in the sampling and asymptotic properties of maximum-likelihood estimators, which improve the estimates obtained by providing indications of their variability and, therefore, of their accuracy and reliability. These features are highly appreciated in contexts where poor decisions, attributable to an inefficient or incomplete information base, can cause serious damage.
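
The Wakeby distribution is defined directly through its quantile function, x(F) = xi + (alpha/beta)[1 - (1-F)^beta] - (gamma/delta)[1 - (1-F)^(-delta)]. The short sketch below evaluates this quantile function with made-up parameter values and draws samples by inverse-transform sampling; it only illustrates the heavy right tail, not the estimation procedure discussed in the paper.

import numpy as np

def wakeby_quantile(F, xi, alpha, beta, gamma, delta):
    # Wakeby quantile function x(F); a heavy right tail appears when delta > 0.
    F = np.asarray(F, dtype=float)
    return (xi
            + (alpha / beta) * (1.0 - (1.0 - F) ** beta)
            - (gamma / delta) * (1.0 - (1.0 - F) ** (-delta)))

params = dict(xi=0.0, alpha=5.0, beta=0.6, gamma=1.2, delta=0.15)   # illustrative, not fitted

# Inverse-transform sampling: uniform variates pushed through the quantile function
rng = np.random.default_rng(42)
sample = wakeby_quantile(rng.uniform(size=100_000), **params)

print("median           :", float(wakeby_quantile(0.5, **params)))
print("99.9th percentile:", float(wakeby_quantile(0.999, **params)))
print("sample maximum   :", float(sample.max()))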

Keywords: Generalized extreme values (GEV), likelihood estimation, precipitation data, Wakeby distribution.

273 Integrating Fast Karnaugh Map and Modular Neural Networks for Simplification and Realization of Complex Boolean Functions

Authors: Hazem M. El-Bakry

Abstract:

In this paper, a new fast simplification method is presented. The method realizes the Karnaugh map with a large number of variables. In order to accelerate its operation, a new approach for fast detection of groups of ones is presented. This approach is implemented in the frequency domain: the search operation relies on performing cross correlation in the frequency domain rather than in the time domain. It is proved mathematically and practically that the number of computation steps required by the presented method is less than that needed by conventional cross correlation. Simulation results using MATLAB confirm the theoretical computations. Furthermore, a powerful solution for the realization of complex functions is given. The simplified functions are implemented by using a new design for neural networks. Neural networks are used because they are fault tolerant and, as a result, can recognize signals even with noise or distortion, which is very useful for logic functions used in data and computer communications. Moreover, the implemented functions are realized with a minimum number of components by using modular neural nets (MNNs) that divide the input space into several homogeneous regions. The approach is applied to implement the XOR function, 16 logic functions on the one-bit level, and a 2-bit digital multiplier. Compared to previous non-modular designs, a clear reduction in the order of computations and in hardware requirements is achieved.

Keywords: Boolean functions, simplification, Karnaugh map, implementation of logic functions, modular neural networks.

272 A Proposed Hybrid Color Image Compression Based on Fractal Coding with Quadtree and Discrete Cosine Transform

Authors: Shimal Das, Dibyendu Ghoshal

Abstract:

Fractal-based digital image compression is a specific technique in the field of color image compression. The method is best suited for images with irregular shapes, such as snow blobs, clouds, flames and tree leaves, and relies on the fact that parts of an image often resemble other parts of the same image. The technique has drawn much attention in recent years because of the very high compression ratio that can be achieved. Hybrid schemes incorporating fractal compression and speed-up techniques have achieved higher compression ratios than pure fractal compression. Fractal image compression is a lossy compression method in which the self-similarity of an image is used; it provides a high compression ratio, less encoding time and a fast decoding process. In this paper, fractal compression with a quadtree and the DCT is proposed to compress color images. The proposed hybrid scheme requires four phases. First, the image is segmented and the Discrete Cosine Transform is applied to each block of the segmented image. Second, the block values are scanned in a zigzag manner so that the zero coefficients are grouped together. Third, the resulting image is partitioned into fractals by the quadtree approach. Fourth, the image is compressed using the run length encoding technique.
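
A minimal sketch of the DCT, zigzag scan and run-length stages of the pipeline, applied to a single synthetic 8x8 block, is given below; the quantization step size is an arbitrary assumption and the fractal/quadtree stages are omitted.

import numpy as np
from scipy.fftpack import dct

def dct2(block):
    # 2-D type-II DCT applied separably along columns and rows
    return dct(dct(block, axis=0, norm="ortho"), axis=1, norm="ortho")

def zigzag_indices(n):
    # Traverse anti-diagonals in alternating directions (JPEG-style zigzag order)
    order = []
    for s in range(2 * n - 1):
        diag = [(i, s - i) for i in range(n) if 0 <= s - i < n]
        order.extend(diag if s % 2 else diag[::-1])
    return order

def run_length_encode(values):
    runs, prev, count = [], values[0], 1
    for v in values[1:]:
        if v == prev:
            count += 1
        else:
            runs.append((prev, count))
            prev, count = v, 1
    runs.append((prev, count))
    return runs

block = np.random.default_rng(0).integers(0, 256, size=(8, 8)).astype(float)  # stand-in image block
coeffs = np.round(dct2(block) / 16.0)          # crude uniform quantization (assumed step of 16)
scanned = [int(coeffs[i, j]) for i, j in zigzag_indices(8)]
print(run_length_encode(scanned))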

Keywords: Fractal coding, Discrete Cosine Transform, Iterated Function System (IFS), Affine Transformation, Run length encoding.

271 Tele-Operated Anthropomorphic Arm and Hand Design

Authors: Namal A. Senanayake, Khoo B. How, Quah W. Wai

Abstract:

In this project, a tele-operated anthropomorphic robotic arm and hand is designed and built as a versatile robotic arm system. The robot has the ability to manipulate objects, performing operations such as pick-and-place. It is also able to function by itself, in standalone mode. Firstly, the robotic arm is built to interface with a personal computer via a serial servo controller circuit board. The circuit board enables the user to completely control the robotic arm and, moreover, enables feedback from the user. The control circuit board uses a powerful integrated microcontroller, a PIC (Programmable Interface Controller). The PIC is first programmed using BASIC (Beginner's All-purpose Symbolic Instruction Code) and is used as the 'brain' of the robot. In addition, a user-friendly Graphical User Interface (GUI) is developed as the serial servo interface software using Microsoft's Visual Basic 6. The second part of the project is to add speech recognition control to the robotic arm. A speech recognition circuit board is constructed with onboard components such as a PIC and other integrated circuits, and it replaces the computer's graphical user interface. The robotic arm is able to receive instructions as spoken commands through a microphone and perform the corresponding operations, such as picking and placing.

Keywords: Tele-operated Anthropomorphic Robotic Arm and Hand, Robot Motion System, Serial Servo Controller, Speech Recognition Controller.

270 Comparative Performance of Artificial Bee Colony Based Algorithms for Wind-Thermal Unit Commitment

Authors: P. K. Singhal, R. Naresh, V. Sharma

Abstract:

This paper presents three optimization models, namely the New Binary Artificial Bee Colony (NBABC) algorithm, NBABC with Local Search (NBABC-LS), and NBABC with Genetic Crossover (NBABC-GC), for solving the Wind-Thermal Unit Commitment (WTUC) problem. The uncertain nature of the wind power is incorporated using the Weibull probability density function, which is used to calculate the overestimation and underestimation costs associated with wind power fluctuation. The NBABC algorithm utilizes a mechanism based on the dissimilarity measure between binary strings for generating the binary solutions in the WTUC problem. In the NBABC algorithm, an intelligent scout bee phase is proposed that replaces the abandoned solution with the global best solution. The local search operator exploits the neighboring region of the current solutions, whereas the integration of genetic crossover with the NBABC algorithm increases the diversity in the search space and thus avoids the local trapping encountered with the NBABC algorithm. These models are used to decide the units' on/off status, whereas the lambda iteration method is used to dispatch the hourly load demand among the committed units. The effectiveness of the proposed models is validated on an IEEE 10-unit thermal system combined with a wind farm over a planning period of 24 hours.
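
A rough Monte Carlo sketch of how a Weibull wind model leads to over- and underestimation penalties for a scheduled wind output is shown below; the Weibull parameters, turbine power curve and penalty prices are illustrative assumptions, and sampling replaces the closed-form expectation integrals used in the paper.

import numpy as np

rng = np.random.default_rng(7)

# Weibull wind-speed model (illustrative shape/scale) and a simple linear turbine power curve
shape, scale = 2.0, 8.0                                   # Weibull k and c (m/s)
v_in, v_rated, v_out, p_rated = 3.0, 12.0, 25.0, 50.0     # m/s and MW

def wind_power(v):
    ramp = np.where((v >= v_in) & (v < v_rated), p_rated * (v - v_in) / (v_rated - v_in), 0.0)
    return np.where((v >= v_rated) & (v <= v_out), p_rated, ramp)

v = scale * rng.weibull(shape, size=200_000)              # NumPy's weibull has unit scale
actual = wind_power(v)

scheduled = 20.0                                          # MW committed for the hour
c_under, c_over = 30.0, 70.0                              # assumed penalty prices, $/MWh

under_cost = c_under * np.mean(np.maximum(actual - scheduled, 0.0))   # surplus wind spilled
over_cost = c_over * np.mean(np.maximum(scheduled - actual, 0.0))     # shortfall covered by reserves
print(f"expected underestimation cost: {under_cost:.2f} $/h")
print(f"expected overestimation cost : {over_cost:.2f} $/h")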

Keywords: Artificial bee colony algorithm, economic dispatch, unit commitment, wind power.

269 Comparative Performance of Artificial Bee Colony Based Algorithms for Wind-Thermal Unit Commitment

Authors: P. K. Singhal, R. Naresh, V. Sharma

Abstract:

This paper presents three optimization models, namely the New Binary Artificial Bee Colony (NBABC) algorithm, NBABC with Local Search (NBABC-LS), and NBABC with Genetic Crossover (NBABC-GC), for solving the Wind-Thermal Unit Commitment (WTUC) problem. The uncertain nature of the wind power is incorporated using the Weibull probability density function, which is used to calculate the overestimation and underestimation costs associated with wind power fluctuation. The NBABC algorithm utilizes a mechanism based on the dissimilarity measure between binary strings for generating the binary solutions in the WTUC problem. In the NBABC algorithm, an intelligent scout bee phase is proposed that replaces the abandoned solution with the global best solution. The local search operator exploits the neighboring region of the current solutions, whereas the integration of genetic crossover with the NBABC algorithm increases the diversity in the search space and thus avoids the local trapping encountered with the NBABC algorithm. These models are used to decide the units' on/off status, whereas the lambda iteration method is used to dispatch the hourly load demand among the committed units. The effectiveness of the proposed models is validated on an IEEE 10-unit thermal system combined with a wind farm over a planning period of 24 hours.

Keywords: Artificial bee colony algorithm, economic dispatch, unit commitment, wind power.

268 Response Delay Model: Bridging the Gap in Urban Fire Disaster Response System

Authors: Sulaiman Yunus

Abstract:

The need for modeling the response to urban fire disasters cannot be overemphasized, as recurrent fire outbreaks have gutted most cities of the world. This necessitates a prompt and efficient response system in order to mitigate the impact of the disaster. Promptness, as a function of time, is seen to be the fundamental determinant of the efficiency of a response system and of the magnitude of a fire disaster. Delay, resulting from several factors, is one of the major determinants of the promptness of a response system and also of the magnitude of a fire disaster. The Response Delay Model (RDM) intends to bridge the gap in urban fire disaster response systems by incorporating and synchronizing the delay moments in measuring the overall efficiency of a response system and determining the magnitude of a fire disaster. The model identifies two delay moments (pre-notification and intra-reflex sequence delay) that can be elastic and that collectively play a significant role in influencing the efficiency of a response system. Due to variation in the elasticity of the delay moments, the model provides for measuring the length of delays in order to arrive at a standard average delay moment for different parts of the world, taking into consideration geographic location, level of preparedness and awareness, technological advancement, and socio-economic and environmental factors. It is recommended that participatory research be carried out locally and globally to determine standard average delay moments within each phase of the system, so as to enable determining the efficiency of response systems and predicting fire disaster magnitudes.

Keywords: Delay moment, fire disaster, reflex sequence, response, response delay moment.

267 Buckling of Plates on Foundation with Different Types of Side Support

Authors: Ali N. Suri, Ahmad A. Al-Makhlufi

Abstract:

In this paper, the problem of buckling of plates of finite length on a foundation, with different types of side support, is studied.

The Finite Strip Method is used as the tool for the analysis. This method uses finite strip elastic, foundation, and geometric matrices to build the assembly matrices for the whole structure; then, after introducing the boundary conditions at the supports, the resulting reduced matrices are transformed into a standard eigenvalue-eigenvector problem. The solution of this problem enables the determination of the buckling load, the associated buckling modes and the buckling wavelength.

To carry out the buckling analysis starting from the elastic, foundation, and geometric stiffness matrices for each strip, a FORTRAN computer program is developed.

Since the stiffness matrices are functions of the buckling wavelength, the computer program uses an iteration procedure to find the critical buckling stress for each value of the foundation modulus and for each boundary condition.

The results showed that the use of an elastic medium to support plates subject to axial load increases the buckling load a great deal; the results found are very close to those obtained by other analytical methods and experimental work.

The results also showed that the foundation compensates for the weakness of some types of side-support constraint, with the maximum benefit found for a plate with one side simply supported and the other free.
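
The eigenvalue step can be illustrated with a one-dimensional analogue rather than the finite strip formulation itself: the sketch below computes the critical load factor of a simply supported strut on a Winkler foundation by finite differences, showing the same qualitative effect of the foundation modulus on the buckling load. The discretization and parameter values are illustrative assumptions.

import numpy as np
from scipy.linalg import eigh

# 1-D analogue: simply supported strut on an elastic (Winkler) foundation, EI = 1, L = 1
n, L = 200, 1.0
h = L / (n + 1)
D2 = (np.diag(np.full(n, -2.0)) + np.diag(np.ones(n - 1), 1) + np.diag(np.ones(n - 1), -1)) / h**2

for k_f in (0.0, 100.0, 500.0):                  # foundation modulus values
    A = D2 @ D2 + k_f * np.eye(n)                # bending + foundation stiffness
    B = -D2                                      # geometric stiffness (positive definite)
    eigvals = eigh(A, B, eigvals_only=True)      # generalized eigenvalue problem A x = lambda B x
    print(f"k_f = {k_f:6.1f}  ->  critical load factor = {eigvals[0]:.2f}")

# With k_f = 0 the smallest eigenvalue approaches the Euler load pi^2; increasing the
# foundation modulus raises the critical load, in line with the plate results reported above.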

Keywords: Buckling, Finite Strip, Different Sides Support, Plates on Foundation.

266 Simulation Study of Asphaltene Deposition and Solubility of CO2 in the Brine during Cyclic CO2 Injection Process in Unconventional Tight Reservoirs

Authors: Rashid S. Mohammad, Shicheng Zhang, Sun Lu, Syed Jamal-Ud-Din, Xinzhe Zhao

Abstract:

A compositional reservoir simulation model (CMG-GEM) was used to study the cyclic CO2 injection process in an unconventional tight reservoir. Cyclic CO2 injection is an enhanced oil recovery process consisting of injection, shut-in, and production. Cyclic CO2 injection and hydrocarbon recovery in ultra-low permeability reservoirs are mainly a function of rock, fluid, and operational parameters. CMG-GEM was used to study several design parameters of the cyclic CO2 injection process, to distinguish the parameters with the maximum effect on the oil recovery and to comprehend the behavior of cyclic CO2 injection in tight reservoirs. On the other hand, permeability reduction induced by asphaltene precipitation is one of the major issues in the oil industry, due to plugging of the porous media, which reduces oil productivity. In addition to asphaltene deposition, solubility of CO2 in the aquifer is one of the safest and most permanent trapping techniques when considering CO2 storage mechanisms in geological formations. However, the effects of the above uncertain parameters on the process of CO2 enhanced oil recovery have not been understood systematically. Hence, it is absolutely necessary to study the most significant parameters which dominate the process. The main objective of this study is to improve techniques for designing the cyclic CO2 injection process while considering the effects of asphaltene deposition and of the solubility of CO2 in the brine, in order to prevent asphaltene precipitation, minimize CO2 emission, optimize cyclic CO2 injection, and maximize oil production.

Keywords: Tight reservoirs, cyclic CO2 injection, asphaltene, solubility, reservoir simulation.

265 Location Update Cost Analysis of Mobile IPv6 Protocols

Authors: Brahmjit Singh

Abstract:

Mobile IP has been developed to provide continuous information network access to mobile users. In IP-based mobile networks, location management is an important component of mobility management. It enables the system to track the location of a mobile node between consecutive communications and includes two important tasks: location update and call delivery. Location update is associated with signaling load; frequent updates lead to degradation in the overall performance of the network and underutilization of the resources. It is, therefore, required to devise a mechanism to minimize the update rate. Mobile IPv6 (MIPv6) and Hierarchical MIPv6 (HMIPv6) have been the potential candidates for deployment in mobile IP networks for mobility management. Studies have shown HMIPv6 to perform better than MIPv6, as it reduces the signaling overhead traffic by making the registration process local. In this paper, we present a performance analysis of MIPv6 and HMIPv6 using an analytical model. A location update cost function is formulated based on the fluid flow mobility model. The impact of cell residence time, cell residence probability and the user's mobility is investigated. Numerical results are obtained and presented in graphical form. It is shown that HMIPv6 outperforms MIPv6 only for high-mobility users; for low-mobility users, the performance of the two schemes is almost equivalent.
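
A much simplified sketch of the signaling part of such a comparison is given below. The boundary crossing rate is the standard fluid-flow expression v*L/(pi*S); the cell size, MAP domain size and per-update signaling costs are illustrative assumptions, not the parameters or the full cost function used in the paper (which also accounts for cell residence time and probability).

import math

def crossing_rate(v, perimeter, area):
    # Fluid-flow mobility model: mean boundary crossing rate for speed v
    return v * perimeter / (math.pi * area)

cell_radius = 500.0                      # m, assumed circular cells
cells_per_map_domain = 16                # cells grouped under one MAP in HMIPv6 (assumed)

cell_area = math.pi * cell_radius ** 2
cell_perimeter = 2 * math.pi * cell_radius
domain_area = cells_per_map_domain * cell_area
domain_perimeter = 2 * math.pi * math.sqrt(domain_area / math.pi)

cost_bu_global = 10.0                    # assumed cost of a binding update to the HA/CN
cost_bu_map = 3.0                        # assumed cost of a local binding update to the MAP

print(f"{'speed (m/s)':>12} {'MIPv6 cost/s':>14} {'HMIPv6 cost/s':>14}")
for v in (1.0, 5.0, 15.0, 30.0):
    cell_rate = crossing_rate(v, cell_perimeter, cell_area)
    domain_rate = crossing_rate(v, domain_perimeter, domain_area)
    mipv6 = cell_rate * cost_bu_global                               # every handover updates the HA
    hmipv6 = cell_rate * cost_bu_map + domain_rate * cost_bu_global  # local BUs, global only at domain edge
    print(f"{v:12.1f} {mipv6:14.4f} {hmipv6:14.4f}")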

Keywords: Wireless networks, Mobile IP networks, Mobility management, performance analysis, Handover.

264 The Impact of the Cell-Free Solution of Lactic Acid Bacteria on Cadaverine Production by Listeria monocytogenes and Staphylococcus aureus in Lysine-Decarboxylase Broth

Authors: Fatih Özogul, Nurten Toy, Yesim Özogul

Abstract:

The influences of cell-free solutions (CFSs) of lactic acid bacteria (LAB) on the production of cadaverine and other biogenic amines by Listeria monocytogenes and Staphylococcus aureus were investigated in lysine decarboxylase broth (LDB) using HPLC. Cell-free solutions were prepared from Lactococcus lactis subsp. lactis, Leuconostoc mesenteroides subsp. cremoris, Pediococcus acidilactici and Streptococcus thermophilus. Two different concentrations, 50% and 25% CFS, and a control without CFS were prepared. Significant variations in biogenic amine production were observed in the presence of L. monocytogenes and S. aureus (P < 0.05). The effect of CFS on biogenic amine production by foodborne pathogens varied depending on the strain and the specific amine. Cadaverine formation by L. monocytogenes and S. aureus in the control was 500.9 and 948.1 mg/L, respectively, while the CFSs of LAB induced 4-fold lower cadaverine production by L. monocytogenes and 7-fold lower cadaverine production by S. aureus. The CFSs resulted in strong decreases in cadaverine and putrescine production by L. monocytogenes and S. aureus, although remarkable increases were observed for histamine, spermidine, spermine, serotonin, dopamine, tyramine and agmatine in the presence of LAB in lysine decarboxylase broth.

Keywords: Cell-free solution, lactic acid bacteria, cadaverine, food borne-pathogen.

263 Modeling Aeration of Sharp Crested Weirs by Using Support Vector Machines

Authors: Arun Goel

Abstract:

The present paper investigates the prediction of the air entrainment rate and aeration efficiency of free overfall jets issuing from a triangular sharp-crested weir by using regression-based modelling. Empirical equations, support vector machine models (with polynomial and radial basis function kernels) and linear regression techniques were applied to the triangular sharp-crested weirs, relating the air entrainment rate and the aeration efficiency to the input parameters, namely drop height, discharge, and vertex angle. It was observed that there exists good agreement between the measured values and the values obtained using the empirical equations, the support vector machine (polynomial and RBF) models and the linear regression techniques. The test results demonstrated that the SVM-based (polynomial and RBF) models provided acceptable predictions of the measured values with reasonable accuracy, along with the empirical equations and linear regression techniques, in modelling the air entrainment rate and the aeration efficiency of free overfall jets issuing from a triangular sharp-crested weir. A sensitivity analysis has also been performed to study the impact of the input parameters on the output in terms of air entrainment rate and aeration efficiency.
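
A compact sketch of this kind of regression comparison, using scikit-learn's SVR with polynomial and RBF kernels against ordinary linear regression, is given below; the data are synthetic and the functional form generating them is a made-up stand-in for the weir measurements.

import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.metrics import r2_score
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR

rng = np.random.default_rng(3)
n = 200
drop_height = rng.uniform(0.2, 1.5, n)        # m
discharge = rng.uniform(0.5, 5.0, n)          # l/s
vertex_angle = rng.uniform(30.0, 120.0, n)    # degrees
X = np.column_stack([drop_height, discharge, vertex_angle])

# Synthetic aeration efficiency: an assumed functional form standing in for the measurements
y = 1.0 - np.exp(-1.5 * drop_height - 0.05 * discharge) + 0.002 * vertex_angle
y += 0.02 * rng.standard_normal(n)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
models = {
    "linear regression": LinearRegression(),
    "SVM (polynomial)": make_pipeline(StandardScaler(), SVR(kernel="poly", degree=2, C=10.0, epsilon=0.01)),
    "SVM (rbf)": make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0, epsilon=0.01)),
}
for name, model in models.items():
    model.fit(X_tr, y_tr)
    print(f"{name:18s} R^2 = {r2_score(y_te, model.predict(X_te)):.3f}")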

Keywords: Air entrainment rate, dissolved oxygen, regression, SVM, weir.

262 An Axiomatic Model for Development of the Allocated Architecture in Systems Engineering Process

Authors: A. Sharahi, R. Tehrani, A. Mollajan

Abstract:

The final step to complete the “Analytical Systems Engineering Process” is the “Allocated Architecture”, in which all Functional Requirements (FRs) of an engineering system must be allocated to their corresponding Physical Components (PCs). At this step, any design for developing the system’s allocated architecture in which no clear pattern of assigning the exclusive “responsibility” of each PC for fulfilling the allocated FR(s) can be found is considered a poor design that may cause difficulties in determining the specific PC(s) which has (have) failed to satisfy a given FR successfully. The present study utilizes the principles of the Axiomatic Design method to address this problem mathematically and establishes an “Axiomatic Model” as a solution for reaching good alternatives for developing the allocated architecture. The study proposes a “loss function” as a quantitative criterion to compare non-ideal designs for developing the allocated architecture in monetary terms and to choose the one which imposes relatively lower cost on the system’s stakeholders. For the case study, we use the existing design of the U.S. electricity marketing subsystem, based on data provided by the U.S. Energy Information Administration (EIA). The result for 2012 shows the symptoms of a poor design and ineffectiveness due to coupling among the FRs of this subsystem.

Keywords: Allocated Architecture, Analytical Systems Engineering Process, Functional Requirements (FRs), Physical Components (PCs), Responsibility of a Physical Component, System’s Stakeholders.

261 Hiding Data in Images Using PCP

Authors: Souvik Bhattacharyya, Gautam Sanyal

Abstract:

In recent years, everything is trending toward digitalization, and with the rapid development of Internet technologies, digital media need to be transmitted conveniently over the network. Attacks, misuse or unauthorized access to information are of great concern today, which makes the protection of documents transmitted through digital media a priority problem. This urges us to devise new data hiding techniques to protect and secure data of vital significance. In this respect, steganography often comes to the fore as a tool for hiding information. Steganography is a process that involves hiding a message in an appropriate carrier such as an image or audio. It is of Greek origin and means "covered or hidden writing". The goal of steganography is covert communication: the carrier can be sent to a receiver such that no one except the authenticated receiver knows of the existence of the information. A considerable amount of work has been carried out by different researchers on steganography. In this work, the authors propose a novel steganographic method for hiding information within the spatial domain of a gray-scale image. The proposed approach works by selecting the embedding pixels using a mathematical function, finding the 8-neighborhood of each selected pixel, and mapping each bit of the secret message to a neighbor pixel coordinate position in a specified manner. Before embedding, a check is performed to determine whether the selected pixel or its neighbors lie at the boundary of the image. This solution is independent of the nature of the data to be hidden and produces a stego image with minimum degradation.
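
A simplified stand-in for the embedding idea is sketched below: anchor pixels are selected with a seeded pseudo-random choice (standing in for the paper's unspecified selection function, with a stride that keeps 8-neighborhoods disjoint and off the border), and one message bit is written into the least significant bit of each neighbor. This illustrates the neighborhood mapping concept only, not the authors' exact PCP scheme.

import numpy as np

NEIGHBOURS = [(-1, -1), (-1, 0), (-1, 1), (0, -1), (0, 1), (1, -1), (1, 0), (1, 1)]

def anchor_positions(h, w, seed):
    # Anchors on a stride-3 interior grid so 8-neighbourhoods never overlap or touch the border;
    # the seeded permutation stands in for the paper's pixel-selection function.
    anchors = [(r, c) for r in range(1, h - 1, 3) for c in range(1, w - 1, 3)]
    order = np.random.default_rng(seed).permutation(len(anchors))
    return [anchors[i] for i in order]

def embed(cover, bits, seed=1234):
    stego = cover.copy()
    h, w = cover.shape
    slots = ((r + dr, c + dc) for r, c in anchor_positions(h, w, seed) for dr, dc in NEIGHBOURS)
    for bit, (r, c) in zip(bits, slots):
        stego[r, c] = (stego[r, c] & 0xFE) | bit      # overwrite the least significant bit
    return stego

def extract(stego, n_bits, seed=1234):
    h, w = stego.shape
    slots = ((r + dr, c + dc) for r, c in anchor_positions(h, w, seed) for dr, dc in NEIGHBOURS)
    return [int(stego[r, c] & 1) for (r, c), _ in zip(slots, range(n_bits))]

cover = np.random.default_rng(0).integers(0, 256, size=(64, 64), dtype=np.uint8)
message = [1, 0, 1, 1, 0, 0, 1, 0, 1, 1, 1, 0]
stego = embed(cover, message)
print(extract(stego, len(message)) == message)              # True
print("pixels changed:", int(np.count_nonzero(cover != stego)))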

Keywords: Cover Image, LSB, Pixel Coordinate Position (PCP), Stego Image.

260 Iterative Estimator-Based Nonlinear Backstepping Control of a Robotic Exoskeleton

Authors: Brahmi Brahim, Mohammad Habibur Rahman, Maarouf Saad, Cristóbal Ochoa Luna

Abstract:

Repetitive training movements are an efficient method to improve the ability and movement performance of stroke survivors, helping them to recover their lost motor function and acquire new skills. The ETS-MARSE is a seven-degrees-of-freedom (DOF) exoskeleton robot developed to be worn on the lateral side of the right upper extremity to assist and rehabilitate patients with upper-extremity dysfunction resulting from stroke. In practice, rehabilitation activities are repetitive tasks, which makes assistive/robotic systems suffer from repetitive/periodic uncertainties and external perturbations induced by the high-order dynamic model (seven DOF) and the interaction with human muscle, which impact the tracking performance and even the stability of the exoskeleton. To ensure the robustness and stability of the robot, a new nonlinear backstepping control was implemented, with designed tests performed by healthy subjects. In order to limit and reject the periodic/repetitive disturbances, an iterative estimator was integrated into the control of the system. The estimator does not need the precise dynamic model of the exoskeleton. Experimental results confirm the robustness and accuracy of the controller in dealing with external perturbations, and the effectiveness of the iterative estimator in rejecting repetitive/periodic disturbances.

Keywords: Backstepping control, iterative control, rehabilitation, ETS-MARSE.

259 Application of Mapping and Superimposing Rule for Solution of Parabolic PDE in Porous Medium under Cyclic Loading

Authors: Mohammad M. Toufigh, Ahad Ouria

Abstract:

This paper presents an analytical method to solve the governing consolidation parabolic partial differential equation (PDE) for an inelastic porous medium (soil), taking into account the variation of the equation coefficient under cyclic loading. Since soil skeleton parameters change under cyclic loads, the parabolic PDE has a variable coefficient, and classical theory cannot describe the consolidation phenomenon under such conditions. In this research, a method based on mapping the time space to a virtual time space, along with a superposition rule, is employed to solve the consolidation of inelastic soils under cyclic conditions. Changes of the consolidation coefficient are applied in the solution by modifying the loading and unloading durations through the introduction of virtual time. The mapping function is calculated based on the results of the consolidation partial differential equation. Based on the superposition rule, a set of continuous static loads applied at specified times is used instead of the cyclic load. A set of laboratory consolidation tests under cyclic load, along with numerical calculations, was performed in order to verify the presented method. The numerical solutions and laboratory test results showed the accuracy of the presented method.

Keywords: Mapping, Consolidation, Inelastic porous medium, Cyclic loading, Superimposing rule.

258 Target Detection using Adaptive Progressive Thresholding Based Shifted Phase-Encoded Fringe-Adjusted Joint Transform Correlator

Authors: Inder K. Purohit, M. Nazrul Islam, K. Vijayan Asari, Mohammad A. Karim

Abstract:

A new target detection technique is presented in this paper for the identification of small boats in coastal surveillance. The proposed technique employs an adaptive progressive thresholding (APT) scheme to first process the given input scene and separate any objects present in the scene from the background. The preprocessing step results in an image containing only the foreground objects, such as boats, trees and other cluttered regions, and hence reduces the search region for the correlation step significantly. The processed image is then fed to the shifted phase-encoded fringe-adjusted joint transform correlator (SPFJTC), which produces a single, delta-like correlation peak for a potential target present in the input scene. A post-processing step uses the peak-to-clutter ratio (PCR) to determine whether the boat in the input scene is authorized or unauthorized. Simulation results are presented to show that the proposed technique can successfully determine the presence of an authorized boat and identify any intruding boat present in the given input scene.

Keywords: Adaptive progressive thresholding, fringe adjusted filters, image segmentation, joint transform correlation, synthetic discriminant function

257 Neural Network Implementation Using FPGA: Issues and Application

Authors: A. Muthuramalingam, S. Himavathi, E. Srinivasan

Abstract:

Hardware realization of a Neural Network (NN) depends, to a large extent, on the efficient implementation of a single neuron. FPGA-based reconfigurable computing architectures are suitable for hardware implementation of neural networks, but FPGA realization of ANNs with a large number of neurons is still a challenging task. This paper discusses the issues involved in the implementation of a multi-input neuron with linear/nonlinear excitation functions using an FPGA. An implementation method with a resource/speed tradeoff is proposed to handle signed decimal numbers. The VHDL code developed is tested using a Xilinx XCV50hq240 chip. To improve the speed of operation, a lookup table method is used, and the problems involved in using a lookup table (LUT) for a nonlinear function are discussed. The percentage saving in resources and the improvement in speed obtained with an LUT for a neuron are reported. An attempt is also made to derive a generalized formula for a multi-input neuron that facilitates an approximate estimate of the total resource requirement and achievable speed for a given multilayer neural network, enabling the designer to choose the FPGA capacity for a given application. Using the proposed implementation method, a neural network based application, namely a space vector modulator for a vector-controlled drive, is presented.
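
A software sketch of the lookup-table idea is given below: the sigmoid is precomputed into a fixed-size table and read back by index, as a hardware LUT would be. The table size, covered input range and nearest-entry addressing are illustrative choices, not those of the reported FPGA design.

import numpy as np

LUT_BITS = 8                                   # 256-entry table (assumed size)
X_MIN, X_MAX = -8.0, 8.0                       # input range covered by the table (assumed)
SIGMOID_LUT = 1.0 / (1.0 + np.exp(-np.linspace(X_MIN, X_MAX, 2 ** LUT_BITS)))

def sigmoid_lut(x):
    # Nearest-entry lookup; saturates outside the covered range, as a hardware table would
    idx = np.clip(np.round((x - X_MIN) / (X_MAX - X_MIN) * (2 ** LUT_BITS - 1)),
                  0, 2 ** LUT_BITS - 1).astype(int)
    return SIGMOID_LUT[idx]

def neuron(inputs, weights, bias):
    # Multi-input neuron: weighted sum followed by the LUT-based activation
    return sigmoid_lut(np.dot(inputs, weights) + bias)

x = np.array([0.5, -1.2, 0.8])
w = np.array([0.7, 0.3, -0.5])
exact = 1.0 / (1.0 + np.exp(-(np.dot(x, w) + 0.1)))
approx = neuron(x, w, 0.1)
print(f"exact sigmoid: {exact:.5f}, LUT sigmoid: {approx:.5f}, absolute error: {abs(exact - approx):.2e}")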

Keywords: FPGA implementation, multi-input neuron, neural network, nn based space vector modulator.

256 Seismic Assessment of an Existing Dual System RC Building in Madinah City

Authors: Tarek M. Alguhane, Ayman H. Khalil, M. N. Fayed, Ayman M. Ismail

Abstract:

A 15-storey RC building, studied in this paper, is representative of a modern building type constructed in Madinah City, Saudi Arabia, about 10 years ago. These buildings consist almost entirely of a reinforced concrete skeleton, i.e., columns, beams and flat slabs, as well as shear walls in the stair and elevator areas, arranged so as to provide a resistance system for lateral loads (wind and earthquake loads). In this study, the dynamic properties of the 15-storey RC building were identified using ambient motions recorded at several spatially distributed locations within the building. Three-dimensional pushover analysis (nonlinear static analysis) was carried out using the SAP2000 software, incorporating inelastic material properties for concrete, infill and steel. The effect of modeling the building with and without infill walls on the performance point, as well as on the capacity and demand spectra due to the earthquake design spectrum for the Madinah area, has been investigated. ATC-40 capacity and demand spectra are utilized to obtain the modification factor (R) for the studied building. The purpose of this analysis is to evaluate the expected performance of structural systems by estimating strength and deformation demands in design and comparing these demands to the available capacities at the performance levels of interest. The results are summarized and discussed.

Keywords: Seismic assessment, pushover analysis, ambient vibration, modal update.

255 Removal of Ni(II), Zn(II) and Pb(II) ions from Single Metal Aqueous Solution using Activated Carbon Prepared from Rice Husk

Authors: Mohd F. Taha, Chong F. Kiat, Maizatul S. Shaharun, Anita Ramli

Abstract:

The abundance and availability of rice husk, an agricultural waste, make it a good precursor for activated carbon. In this work, rice husk-based activated carbons were prepared via a base-treated chemical activation process prior to carbonization. The effect of carbonization temperature (400, 600 and 800 °C) on the pore structure was evaluated through morphology analysis using a scanning electron microscope (SEM). The sample carbonized at 800 °C showed better evolution and development of pores compared to those carbonized at 400 and 600 °C. The potential of rice husk-based activated carbon as an alternative adsorbent was investigated for the removal of Ni(II), Zn(II) and Pb(II) from single metal aqueous solutions. The adsorption studies using rice husk-based activated carbon as an adsorbent were carried out as a function of contact time at room temperature, and the metal ions were analyzed using an atomic absorption spectrophotometer (AAS). The ability to remove metal ions from single metal aqueous solutions was found to improve with increasing carbonization temperature. Among the three metal ions tested, Pb(II) gave the highest adsorption on the rice husk-based activated carbon. The results obtained indicate the potential of rice husk as a promising precursor for the preparation of activated carbon for the removal of heavy metals.

Keywords: Activated carbon, metal ion adsorption, rice husk, wastewater treatment.

254 Statistical Analysis for Overdispersed Medical Count Data

Authors: Y. N. Phang, E. F. Loh

Abstract:

Many researchers have suggested the use of zero-inflated Poisson (ZIP) and zero-inflated negative binomial (ZINB) models in modeling overdispersed medical count data with extra variation caused by excess zeros and unobserved heterogeneity. These studies indicate that ZIP and ZINB always provide a better fit than the normal Poisson and negative binomial models in modeling overdispersed medical count data. In this study, we propose the use of zero-inflated inverse trinomial (ZIIT), zero-inflated Poisson inverse Gaussian (ZIPIG) and zero-inflated strict arcsine (ZISA) models in modeling overdispersed medical count data. These proposed models are not widely used by researchers, especially in the medical field. The results show that the three suggested models can serve as alternative models for overdispersed medical count data, which is supported by their application to a real-life medical data set. The inverse trinomial, Poisson inverse Gaussian and strict arcsine are discrete distributions with a cubic variance function of the mean; therefore, ZIIT, ZIPIG and ZISA are able to accommodate data with excess zeros and very heavy tails. They are recommended for modeling overdispersed medical count data when ZIP and ZINB are inadequate.
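
For reference, a small sketch of the baseline zero-inflated Poisson model that the proposed distributions are compared against is given below, fitted by maximum likelihood to synthetic counts; the ZIIT, ZIPIG and ZISA distributions themselves are not implemented here.

import numpy as np
from scipy.optimize import minimize
from scipy.stats import poisson

def zip_logpmf(y, pi, lam):
    # Zero-inflated Poisson: P(0) = pi + (1 - pi) e^{-lam}; P(y > 0) = (1 - pi) Poisson(y; lam)
    return np.where(y == 0,
                    np.log(pi + (1 - pi) * np.exp(-lam)),
                    np.log(1 - pi) + poisson.logpmf(y, lam))

def neg_loglik(params, y):
    pi, lam = 1 / (1 + np.exp(-params[0])), np.exp(params[1])   # keep pi in (0, 1) and lam > 0
    return -np.sum(zip_logpmf(y, pi, lam))

# Synthetic overdispersed counts: 30% structural zeros mixed with Poisson(2.5)
rng = np.random.default_rng(0)
y = np.where(rng.uniform(size=1000) < 0.3, 0, rng.poisson(2.5, size=1000))

res = minimize(neg_loglik, x0=[0.0, 0.0], args=(y,), method="Nelder-Mead")
pi_hat, lam_hat = 1 / (1 + np.exp(-res.x[0])), np.exp(res.x[1])
print(f"estimated zero-inflation pi = {pi_hat:.3f}, Poisson mean = {lam_hat:.3f}")
print(f"sample variance/mean ratio  = {y.var() / y.mean():.2f} (overdispersion, ratio > 1)")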

Keywords: Zero inflated, inverse trinomial distribution, Poisson inverse Gaussian distribution, strict arcsine distribution, Pearson’s goodness of fit.

253 Identifying Autism Spectrum Disorder Using Optimization-Based Clustering

Authors: Sharifah Mousli, Sona Taheri, Jiayuan He

Abstract:

Autism spectrum disorder (ASD) is a complex developmental condition involving persistent difficulties with social communication, restricted interests, and repetitive behavior. The challenges associated with ASD can interfere with an affected individual’s ability to function in social, academic, and employment settings. Although, to the best of our knowledge, there is no effective medication known to treat ASD, early intervention can significantly improve an affected individual’s overall development. Hence, an accurate diagnosis of ASD at an early phase is essential. The use of machine learning approaches improves and speeds up the diagnosis of ASD. In this paper, we focus on the application of unsupervised clustering methods to ASD, as the large volume of ASD data generated through hospitals, therapy centers, and mobile applications has no pre-existing labels. We conduct a comparative analysis using seven clustering approaches, namely K-means, agglomerative hierarchical, model-based, fuzzy C-means, affinity propagation, self-organizing maps, and linear vector quantisation, as well as the recently developed optimization-based clustering (COMSEP-Clust) approach. We evaluate the performance of the clustering methods extensively on real-world ASD datasets encompassing different age groups: toddlers, children, adolescents, and adults. Our experimental results suggest that the COMSEP-Clust approach outperforms the other seven methods in recognizing ASD with well-separated clusters.
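
A minimal sketch of this kind of comparison, run on synthetic two-cluster data rather than the ASD questionnaire datasets, is shown below; only three of the baseline methods are included, and COMSEP-Clust itself is not publicly packaged here.

import numpy as np
from sklearn.cluster import AgglomerativeClustering, KMeans
from sklearn.datasets import make_blobs
from sklearn.metrics import silhouette_score
from sklearn.mixture import GaussianMixture

# Synthetic stand-in for screening-score features (the real datasets are questionnaire responses)
X, _ = make_blobs(n_samples=400, centers=2, cluster_std=[1.0, 2.5], random_state=0)

models = {
    "k-means": KMeans(n_clusters=2, n_init=10, random_state=0),
    "agglomerative": AgglomerativeClustering(n_clusters=2),
    "model-based (GMM)": GaussianMixture(n_components=2, random_state=0),
}
for name, model in models.items():
    labels = model.fit_predict(X)
    print(f"{name:18s} silhouette score = {silhouette_score(X, labels):.3f}")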

Keywords: Autism spectrum disorder, clustering, optimization, unsupervised machine learning.

252 A Security Model of Voice Eavesdropping Protection over Digital Networks

Authors: Supachai Tangwongsan, Sathaporn Kassuvan

Abstract:

The purpose of this research is to develop a security model for protection against voice eavesdropping over digital networks. The proposed model provides an encryption scheme and a personal secret key exchange between communicating parties, a so-called voice data transformation system, resulting in a truly private conversation. The operation of this system comprises two main steps. The first is the personal secret key exchange, so that the keys can be used in the data encryption process during conversation. The key owner can freely make his or her own key selection, so it is recommended that a different key be exchanged with each conversational party and that the key for each case be recorded in the memory provided in the client device. The next step is to set and record another personal encryption option, either taking all frames or just partial frames, the so-called 1:M figure. Using different personal secret keys and different 1:M settings with different parties, without the intervention of the service operator, poses quite a big problem for any eavesdropper who attempts to discover the key used during the conversation, especially within a short period of time. Thus, the approach is safe and effective in protecting against voice eavesdropping. The results of the implementation indicate that the system performs its functions accurately as designed. In this regard, the proposed system is suitable for effective use in voice eavesdropping protection over digital networks, without any requirement to change presently existing network systems, such as mobile phone networks and VoIP.
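
A toy sketch of the 1:M option is shown below, interpreting it as transforming one voice frame in every M with a keystream derived from the personal secret key; the key derivation and XOR keystream are stand-ins only (a real deployment would use a vetted cipher), and the frame size and M value are illustrative assumptions.

import hashlib
from itertools import count

def keystream(secret_key: bytes, frame_index: int, length: int) -> bytes:
    # Derive a per-frame keystream from the shared personal key (toy construction, not a vetted cipher)
    out = b""
    for counter in count():
        out += hashlib.sha256(secret_key + frame_index.to_bytes(4, "big")
                              + counter.to_bytes(4, "big")).digest()
        if len(out) >= length:
            return out[:length]

def transform(frames, secret_key, m):
    # XOR-transform one frame in every m (encryption and decryption are the same operation)
    result = []
    for i, frame in enumerate(frames):
        if i % m == 0:                                   # the 1:M option: only selected frames
            ks = keystream(secret_key, i, len(frame))
            frame = bytes(a ^ b for a, b in zip(frame, ks))
        result.append(frame)
    return result

key = b"personal-secret-exchanged-out-of-band"
voice_frames = [bytes([i % 256] * 160) for i in range(8)]    # eight dummy 20 ms PCM frames

sent = transform(voice_frames, key, m=3)
recovered = transform(sent, key, m=3)
print(recovered == voice_frames)                             # True: the receiver recovers the speech
print(sum(s != v for s, v in zip(sent, voice_frames)), "of", len(voice_frames), "frames transformed")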

Keywords: Computer Security, Encryption, Key Exchange, Security Model, Voice Eavesdropping.
