Search results for: Maximum likelihood detection (MLD).
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 3151

241 Factors Affecting Slot Machine Performance in an Electronic Gaming Machine Facility

Authors: Etienne Provencal, David L. St-Pierre

Abstract:

A facility operating only electronic gambling machines (EGMs) opened in 2007 in Quebec City, Canada under the name of Salons de Jeux du Québec (SdjQ). This facility is one of the first worldwide to rely on that business model. This paper models the performance of such EGMs. The interest from a managerial point of view is to identify the variables that can be controlled or influenced so that a comprehensive model can help improve the overall performance of the business. The EGM individual performance model contains eight variables under study (Game Title, Progressive Jackpot, Bonus Round, Minimum Coin-in, Maximum Coin-in, Denomination, Slant Top and Position). Using data from Quebec City's SdjQ, a linear regression analysis explains 90.80% of the variance in EGM performance. Moreover, results show a behavior slightly different from that of a casino. The addition of GameTitle as a factor to predict EGM performance is one of the main contributions of this paper. The choice of the game (GameTitle) is very important. Games in better positions do not perform significantly better than games located elsewhere on the gaming floor. Progressive jackpots have a positive and significant effect on the individual performance of EGMs. The impact of BonusRound on the dependent variable is significant but negative. The effect of Denomination is significant but weakly negative. As expected, the language of an EGM does not impact its individual performance. This paper highlights some possible improvements by indicating which features are performing well. Recommendations are given to increase the performance of the EGMs.
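
As an illustrative sketch of this modeling step (not the authors' actual dataset or code), the following Python snippet fits an ordinary least squares model with GameTitle as a categorical factor; all column names and values are invented.

```python
# Hypothetical sketch: linear model of EGM performance with categorical
# and binary machine features. Data are invented, not the SdjQ dataset.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.DataFrame({
    "coin_in":      [5200, 4100, 6100, 3900, 5800, 4700, 5000, 4300],  # performance proxy
    "game_title":   ["A", "B", "A", "C", "B", "C", "A", "B"],
    "progressive":  [1, 0, 1, 0, 1, 0, 0, 1],       # progressive jackpot present
    "bonus_round":  [0, 1, 1, 0, 0, 1, 1, 0],
    "denomination": [0.01, 0.05, 0.01, 0.25, 0.05, 0.01, 0.05, 0.25],
})

# OLS with GameTitle entered as a categorical factor, as in the paper
model = smf.ols("coin_in ~ C(game_title) + progressive + bonus_round + denomination",
                df).fit()
print(model.rsquared)   # share of variance explained (0.908 reported in the paper)
print(model.params)     # sign and size of each effect
```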

Keywords: EGM, linear regression, model prediction, slot operations.

240 Investigations of Metals and Metal-Antibrowning Agents Effects on Polyphenol Oxidase Activity from Red Poppy Leaf

Authors: G. Arabaci

Abstract:

Heavy metals are one of the major groups of contaminants in the environment, and many of them are toxic even at very low concentrations in plants and animals. However, some metals play important roles in the biological function of many enzymes in living organisms. Metals such as zinc, iron, and copper are important for the survival and activity of enzymes in plants; heavy metals, however, can inhibit the enzymes responsible for plants' defense systems. Polyphenol oxidase (PPO) is a copper-containing metalloenzyme responsible for the enzymatic browning reaction of plants. Enzymatic browning is a major problem in the handling of vegetables and fruits in the food industry. It can be increased and affected by many different factors, such as metals in the environment and soil. In the present work, PPO was isolated and characterized from green leaves of the red poppy plant (Papaver rhoeas). Then, the effects of metals and of some known antibrowning agents that can form complexes with metals were investigated on red poppy PPO activity. The results showed that glutathione had the most potent inhibitory effect on PPO activity. Cu(II) and Fe(II) increased the enzyme activity, whereas Sn(II) had the maximum inhibitory effect and Zn(II) and Pb(II) had no significant effect on the enzyme activity. In order to reduce the effect of heavy metals, the effects of metal-antibrowning-agent complexes on PPO activity were determined. EDTA-metal complexes had no significant effect on the enzyme. L-ascorbic acid-metal complexes decreased the activity, but the L-ascorbic acid-Cu(II) complex had no effect. Glutathione-metal complexes had the strongest inhibitory effect on red poppy leaf PPO activity.

Keywords: Inhibition, metal, red poppy, Polyphenol oxidase (PPO).

239 A Growing Neural Gas Approach for Evaluating Quality of Software Modules

Authors: Parvinder S. Sandhu, Sandeep Khimta, Kiranpreet Kaur

Abstract:

The prediction of software quality during the development life cycle of a software project helps the development organization make efficient use of available resources to produce a product of the highest quality. A "whether a module is faulty or not" approach can be used to predict the quality of a software module. A number of software quality prediction models described in the literature are based on genetic algorithms, artificial neural networks and other data mining algorithms. One of the promising approaches to quality prediction is based on clustering techniques. Most quality prediction models based on clustering techniques make use of the K-means, Mixture-of-Gaussians, Self-Organizing Map, Neural Gas or fuzzy K-means algorithms for prediction. All these techniques require a predefined structure; that is, the number of neurons or clusters must be known before the clustering process starts. In the case of Growing Neural Gas, by contrast, there is no need to predetermine the number of neurons or the topology of the structure: it starts with a minimal structure of neurons that is incremented during training until it reaches a user-defined maximum number of clusters. Hence, in this work we have used Growing Neural Gas as the underlying clustering algorithm, which produces an initial set of labeled clusters from the training data set; this set of clusters is then used to predict the quality of software modules in the test data set. The best testing results show 80% accuracy in evaluating the quality of software modules. Hence, the proposed technique can be used by programmers to evaluate the quality of modules during software development.
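
A minimal sketch of the growth mechanism, loosely following Fritzke's Growing Neural Gas and simplified (no removal of isolated units); all hyperparameters are illustrative defaults, not the paper's settings:

```python
# Condensed Growing Neural Gas sketch: the network grows during training,
# with no predefined number of clusters. Parameter values are illustrative.
import numpy as np

def gng(data, max_nodes=20, lam=100, eps_b=0.2, eps_n=0.006,
        alpha=0.5, beta=0.995, max_age=50, steps=5000, seed=0):
    rng = np.random.default_rng(seed)
    w = [data[rng.integers(len(data))].copy() for _ in range(2)]  # two initial units
    err = [0.0, 0.0]
    edges = {}                                    # (i, j) with i < j -> age

    for step in range(1, steps + 1):
        x = data[rng.integers(len(data))]
        d = [np.sum((x - wi) ** 2) for wi in w]
        s1, s2 = np.argsort(d)[:2]                # two nearest units
        err[s1] += d[s1]
        w[s1] += eps_b * (x - w[s1])              # move winner toward x
        for (i, j) in list(edges):
            if s1 in (i, j):
                edges[(i, j)] += 1                # age winner's edges
                other = j if i == s1 else i
                w[other] += eps_n * (x - w[other])  # drag topological neighbors
        edges[tuple(sorted((s1, s2)))] = 0        # (re)connect the two winners
        edges = {e: a for e, a in edges.items() if a <= max_age}

        if step % lam == 0 and len(w) < max_nodes:
            q = int(np.argmax(err))               # unit with largest error
            nbrs = [j if i == q else i for (i, j) in edges if q in (i, j)]
            if nbrs:
                f = max(nbrs, key=lambda n: err[n])
                w.append(0.5 * (w[q] + w[f]))     # insert new unit between q and f
                err[q] *= alpha; err[f] *= alpha
                err.append(err[q])
                r = len(w) - 1
                edges.pop(tuple(sorted((q, f))), None)
                edges[tuple(sorted((q, r)))] = 0
                edges[tuple(sorted((f, r)))] = 0
        err = [e * beta for e in err]             # global error decay
    return np.array(w), edges

# Toy usage: two well-separated blobs
pts = np.vstack([np.random.randn(200, 2), np.random.randn(200, 2) + 8])
nodes, graph = gng(pts)
print(len(nodes), "units grown")
```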

Keywords: Growing Neural Gas, data clustering, fault prediction.

238 Optimal Efficiency Control of Pulse Width Modulation - Inverter Fed Motor Pump Drive Using Neural Network

Authors: O. S. Ebrahim, M. A. Badr, A. S. Elgendy, K. O. Shawky, P. K. Jain

Abstract:

This paper demonstrates an improved Loss Model Control (LMC) for a 3-phase induction motor (IM) driving a pump load. Compared with other power loss reduction algorithms for IMs, the presented one has the advantages of fast and smooth flux adaptation, high accuracy, and versatile implementation. The performance of LMC depends mainly on the accuracy of modeling the motor drive and its losses. A loss model for the IM drive has been developed that accounts for the surplus power loss caused by inverter voltage harmonics using closed-form equations and also includes magnetic saturation. Further, an Artificial Neural Network (ANN) controller is synthesized and trained offline to determine the optimal flux level that achieves maximum drive efficiency. The drive's voltage and speed control loops are connected via the stator frequency to avoid the possibility of excessive magnetization. In addition, the resistance change due to temperature is considered by a first-order thermal model. The obtained thermal information enhances motor protection and control. Together, these features have the potential of making the proposed algorithm reliable. Simulation and experimental studies are performed on a 5.5 kW test motor using the proposed control method. The test results are provided and compared with fixed-flux operation to validate the effectiveness.
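
The offline training idea can be sketched as follows, assuming a toy copper-plus-iron loss model in place of the paper's detailed IM loss equations; the coefficients and network size are placeholders:

```python
# Hedged sketch: learn a map from operating point (torque, speed) to the
# loss-minimizing flux level. Loss coefficients are illustrative only.
import numpy as np
from sklearn.neural_network import MLPRegressor

def drive_loss(flux, torque, speed, k_cu=1.0, k_fe=0.5):
    # copper loss falls with flux (lower current for the same torque);
    # iron loss rises with flux and speed -- the classic LMC trade-off
    return k_cu * (torque / flux) ** 2 + k_fe * (speed * flux) ** 2

flux_grid = np.linspace(0.2, 1.2, 200)           # per-unit flux candidates
T, S = np.meshgrid(np.linspace(0.1, 1.0, 20), np.linspace(0.1, 1.0, 20))
X = np.column_stack([T.ravel(), S.ravel()])
y = np.array([flux_grid[np.argmin(drive_loss(flux_grid, t, s))] for t, s in X])

ann = MLPRegressor(hidden_layer_sizes=(10,), max_iter=5000, random_state=0).fit(X, y)
print(ann.predict([[0.3, 0.8]]))                 # optimal flux at a new operating point
```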

Keywords: Artificial neural network, ANN, efficiency optimization, induction motor, IM, Pulse Width Modulated, PWM, harmonic losses.

237 Roundabout Optimal Entry and Circulating Flow Induced by Road Hump

Authors: Amir Hossein Pakshir, A. Hossein Pour, N. Jahandar, Ali Paydar

Abstract:

Roundabouts work on the principle of circulation and entry flows, where the maximum entry flow rates depend largely on the circulating flow, bearing in mind that entry flows must give way to circulating flows. Where an existing roundabout has a road hump installed at the entry arm, it can be hypothesized that the kinematics of vehicles may prevent the entry arm from achieving optimum performance. Road humps are traffic calming devices placed across the road width solely as a speed reduction mechanism. They are the preferred traffic calming option in Malaysia and are often used on single and dual carriageway local routes. The speed limit on local routes is 30 mph (50 km/h). Road humps in their various forms achieve the biggest mean speed reduction (based on a mean speed before traffic calming of 30 mph) of up to 10 mph or 16 km/h, according to the UK Department of Transport. The underlying aim of reduced speed should be to achieve a 'safe' distribution of speeds which reflects the function of the road and the impacts on the local community. Constraining the safe distribution of speeds may lead to poor driver timing and delayed reflex reactions that can cause accidents. Previous studies on road hump impact have focused mainly on speed reduction, traffic volume, noise and vibration, and discomfort and delay from the use of road humps. This paper is aimed at the optimal entry and circulating flow induced by road humps. Results show that roundabout entry and circulating flow perform better where there is no road hump at the entrance.
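
For orientation, a UK-style empirical entry-capacity relation is roughly linear in circulating flow; the sketch below uses placeholder intercept and slope values, not parameters calibrated to the study site:

```python
# Illustrative linear entry-capacity relation: entry capacity falls roughly
# linearly with circulating flow. Intercept and slope are placeholders.
def entry_capacity(circulating_flow_pcu_h, intercept=1200.0, slope=0.6):
    """Approximate maximum entry flow (pcu/h) for a given circulating flow."""
    return max(0.0, intercept - slope * circulating_flow_pcu_h)

for qc in (0, 400, 800):
    print(qc, "->", entry_capacity(qc), "pcu/h")
```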

Keywords: Road hump, Roundabout, Speed Reduction

236 Assessment and Uncertainty Analysis of ROSA/LSTF Test on Pressurized Water Reactor 1.9% Vessel Upper Head Small-Break Loss-of-Coolant Accident

Authors: Takeshi Takeda

Abstract:

An experiment utilizing the ROSA/LSTF (rig of safety assessment/large-scale test facility) simulated a 1.9% vessel upper head small-break loss-of-coolant accident with an accident management (AM) measure under total failure of the high-pressure injection system of the emergency core cooling system in a pressurized water reactor. Steam generator (SG) secondary-side depressurization as the AM measure was started by fully opening the relief valves in both SGs when the maximum core exit temperature rose to 623 K. A large increase took place in the cladding surface temperature of the simulated fuel rods on account of a late and slow response of the core exit thermocouples during core boil-off. The author analyzed the LSTF test by reference to the matrix of an integral effect test for the validation of a thermal-hydraulic system code. Problems remained in predicting the primary coolant distribution and the core exit temperature with the RELAP5/MOD3.3 code. The uncertainty analysis results of the RELAP5 code confirmed that the sample size chosen with respect to the order statistics influences the peak cladding temperature value obtained with 95% probability at a 95% confidence level, as well as the Spearman rank correlation coefficient.
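
The 95%/95% order-statistics sample size in such analyses is conventionally derived from the first-order Wilks formula; a short sketch:

```python
# First-order Wilks formula: smallest number of code runs n such that the
# sample maximum bounds the 95th percentile with 95% confidence,
# i.e. 1 - gamma**n >= beta.
def wilks_sample_size(gamma=0.95, beta=0.95):
    n = 1
    while 1.0 - gamma ** n < beta:
        n += 1
    return n

print(wilks_sample_size())   # 59 runs for first-order 95%/95%
```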

Keywords: LSTF, LOCA, uncertainty analysis, RELAP5.

235 Study on Compressive Strength and Setting Times of Fly Ash Concrete after Slump Recovery Using Superplasticizer

Authors: Chaiyakrit Raoupatham, Ram Hari Dhakal, Chalermchai Wanichlamlert

Abstract:

Fresh concrete has a dynamic property known as slump. The slump of concrete is designed to be compatible with the placing method. Due to the hydration reaction of cement, the slump of concrete is lost over time. Therefore, delayed concrete may be rejected because its slump is unacceptable. In order to recover the slump of delayed concrete, a second dose of superplasticizer (naphthalene-based type F) is added into the system; slump recovery can be done as long as the concrete has not set. Adding superplasticizer to recover otherwise unusable slump-loss concrete may, however, affect other concrete properties. Therefore, this paper observed the setting times and compressive strength of concrete after re-dosing with chemical admixture type F (superplasticizer, naphthalene based) for slump recovery. The concrete used in this study was fly ash concrete with fly ash replacement of 0%, 30% and 50%. The concrete mix designed for the test specimens was prepared with a paste content (ratio of volume of cement to volume of void in the aggregate) of 1.2 and 1.3, a water-to-binder ratio (w/b) ranging from 0.3 to 0.58, and an initial dose of superplasticizer (SP) ranging from 0.5 to 1.6%. The setting times of concrete were tested both before and after re-dosing, with different amounts of the second dose and different times of dosing. The research concluded that the addition of a second dose of superplasticizer increases both initial and final setting times according to the dosage added. For fly ash concrete, the prolongation effect was higher as the fly ash replacement increased, and it can reach a maximum of about 4 hours. As for compressive strength, the re-dosed concrete shows strength fluctuation within the acceptable range of ±10%.

Keywords: Compressive strength, Fly ash concrete, Second dose of superplasticizer, Slump recovery, Setting times.

234 Autonomous Robots' Visual Perception in Underground Terrains Using Statistical Region Merging

Authors: Omowunmi E. Isafiade, Isaac O. Osunmakinde, Antoine B. Bagula

Abstract:

Robots' visual perception is a field that is gaining increasing attention from researchers. This is partly due to emerging trends in the commercial availability of 3D scanning systems or devices that produce a high level of information accuracy for a variety of applications. In the history of mining, the mortality rate of mine workers has been alarming, and robots exhibit a great deal of potential to tackle safety issues in mines. However, an effective vision system is crucial to safe autonomous navigation in underground terrains. This work investigates robots' perception in underground terrains (mines and tunnels) using the statistical region merging (SRM) model. SRM reconstructs the main structural components of an image by a simple but effective statistical analysis. An investigation is conducted on different regions of the mine, such as the shaft, stope and gallery, using publicly available mine frames together with a stream of locally captured mine images. An investigation is also conducted on a stream of underground tunnel image frames, using the XBOX Kinect 3D sensors. The Kinect sensors produce streams of red, green and blue (RGB) and depth images of 640 x 480 resolution at 30 frames per second. Integrating the depth information into drivability gives a strong cue to the analysis, yielding 3D results that augment the drivable and non-drivable regions detected in 2D. The results of the 2D and 3D experiments with different terrains, mines and tunnels, together with the qualitative and quantitative evaluation, reveal that a good drivable region can be detected in dynamic underground terrains.
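
The heart of SRM is its merging predicate; one common variant (following Nock and Nielsen's formulation, with illustrative parameter values) can be sketched as:

```python
# Sketch of an SRM merging predicate: two regions merge when their mean
# intensities are statistically indistinguishable. The exact form of b(R)
# varies between implementations; this is one common variant with g = 256
# grey levels and complexity parameter Q.
import math

def b_squared(region_size, g=256.0, Q=32.0, delta=1e-6):
    return (g * g / (2.0 * Q * region_size)) * math.log(2.0 / delta)

def should_merge(mean1, n1, mean2, n2, **kw):
    return (mean1 - mean2) ** 2 <= b_squared(n1, **kw) + b_squared(n2, **kw)

print(should_merge(120.0, 500, 124.0, 300))   # similar regions -> True
print(should_merge(40.0, 500, 200.0, 300))    # dissimilar regions -> False
```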

Keywords: Drivable Region Detection, Kinect Sensor, Robots' Perception, SRM, Underground Terrains.

233 Evaluation of Dynamic Behavior of a Machine Tool Spindle System through Modal and Unbalance Response Analysis

Authors: Khairul Jauhari, Achmad Widodo, Ismoyo Haryanto

Abstract:

The spindle system is one of the most important components of a machine tool. The dynamic properties of the spindle affect machining productivity and the quality of the workpieces. Thus, it is important to determine the dynamic characteristics of spindles during design and development in order to avoid forced resonance. The finite element method (FEM) has been adopted to obtain the dynamic behavior of the spindle system. For this reason, obtaining the Campbell diagrams and determining the critical speeds are very useful for evaluating the spindle system dynamics. The unbalance response of the system to a center-of-mass unbalance at the cutting tool is also calculated to investigate the dynamic behavior. In this paper, an ANSYS Parametric Design Language (APDL) program based on the finite element method was implemented to perform the full dynamic analysis and evaluation of the results. Results show that the calculated critical speeds are far from the operating speed range of the spindle; thus, the spindle would not experience resonance, and the maximum unbalance response at operating speed is still within the acceptable limit. ANSYS Parametric Design Language (APDL) can be used by spindle designers as a tool to increase product quality and to reduce cost and time in the design and development stages.
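
As a back-of-the-envelope companion to the FEM analysis, a single-mass Jeffcott rotor gives the shape of an unbalance response sweep; all numerical values below are placeholders, not the spindle's actual parameters:

```python
# Minimal Jeffcott-rotor sketch of an unbalance response sweep: steady
# response to mass unbalance me at spin speed w. Numbers are illustrative,
# not the spindle's FEM parameters (which the paper obtains via ANSYS APDL).
import numpy as np

M, k, c = 10.0, 4.0e7, 200.0          # kg, N/m, N.s/m (placeholders)
me = 1e-5                              # unbalance, kg.m

w = np.linspace(10, 5000, 2000)        # rad/s
amp = me * w**2 / np.sqrt((k - M * w**2)**2 + (c * w)**2)

w_crit = np.sqrt(k / M)                # undamped critical speed, rad/s
print(f"critical speed ~ {w_crit:.0f} rad/s, peak response {amp.max():.2e} m")
```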

Keywords: ANSYS parametric design language (APDL), Campbell diagram, Critical speeds, Unbalance response, The Spindle system.

232 The Effects of Three Months of HIIT on Plasma Adiponectin in Overweight College Men

Authors: M. J. Pourvaghar, M. E. Bahram, M. Sayyah, Sh. Khoshemehry

Abstract:

Adiponectin is a cytokine secreted by the adipose tissue that functions as an anti-inflammatory, anti-atherogenic and anti-diabetic substance. Its density is inversely correlated with body mass index. The purpose of this research was to examine the effect of 12 weeks of high intensity interval training (HIIT) on the level of serum adiponectin and some selected adiposity markers in overweight and obese college students. This was a clinical study in which 24 students with BMI between 25 kg/m2 and 30 kg/m2 participated. The sample was purposefully selected and then randomly assigned into an experimental group (age = 22.7±1.5 yr; weight = 85.8±3.18 kg; height = 178.7±3.29 cm) and a control group (age = 23.1±1.1 yr; weight = 79.1±2.4 kg; height = 181.3±4.6 cm). The experimental group participated in an aerobic exercise program for 12 weeks, three sessions per week, at a high intensity between 85% and 95% of maximum heart rate (respecting the overload principle). Before and after the exercise protocol, the level of serum adiponectin, BMI, waist-to-hip ratio, and body fat percentage were measured. The data were analyzed using SPSS 16.0, with ANCOVA among the statistical procedures applied. The results indicated that 12 weeks of intensive interval training led to an increase in the serum adiponectin level and a decrease in body weight, body fat percentage, body mass index and waist-to-hip ratio (P < 0.05). Based on the results of this research, it may be concluded that participation in intensive interval training for 12 weeks is a non-invasive treatment to increase the adiponectin level while decreasing some of the anthropometric indices associated with obesity or being overweight.

Keywords: Adiponectin, interval, intensive, overweight, training.

231 Optimal Duty-Cycle Modulation Scheme for Analog-To-Digital Conversion Systems

Authors: G. Sonfack, J. Mbihi, B. Lonla Moffo

Abstract:

This paper presents an optimal duty-cycle modulation (ODCM) scheme for analog-to-digital conversion (ADC) systems. The overall ODCM-based ADC problem is decoupled into optimal DCM and digital filtering sub-problems, while taking into account the constraints imposed by the mutual design parameters of the two. Using a set of three lemmas and four morphological theorems, the ODCM sub-problem is modelled as a nonlinear cost function with nonlinear constraints. Then, a weighted least pth norm of the error between the ideal and predicted frequency responses is used as the cost function for the digital filtering sub-problem. In addition, the MATLAB fmincon and iirlpnorm tools are used as the optimal DCM and least pth norm solvers, respectively. Furthermore, a virtual simulation of an overall prototype ODCM-based ADC system is implemented and tested with the help of the Simulink tool, according to a relevant set of design data, i.e., 3 kHz of modulating bandwidth, 172 kHz of maximum modulation frequency and 25 MHz of sampling frequency. Finally, the results obtained show that the ODCM-based ADC achieves, over the 3 kHz modulating bandwidth, 57 dBc of SINAD (signal-to-noise and distortion ratio), 58 dB of SFDR (spurious-free dynamic range), -80 dBc of THD (total harmonic distortion), and 10 bits of minimum resolution. These performance levels appear to be a great challenge within the class of oversampling ADC topologies, with a 2nd-order IIR (infinite impulse response) decimation filter.
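
The basic encode/decode idea behind a DCM-based ADC can be sketched with a toy Python model (a natural-sampling modulator plus an IIR low-pass demodulator); rates and filter order here are illustrative, not the optimized design:

```python
# Toy duty-cycle modulation sketch: encode a slow analog signal into the duty
# cycle of a fast square wave, then recover it with an IIR low-pass filter.
import numpy as np
from scipy.signal import butter, lfilter

fs, fm, fsig = 1_000_000, 100_000, 1_000        # sample, modulation, signal rates (Hz)
t = np.arange(0, 0.005, 1 / fs)
x = 0.8 * np.sin(2 * np.pi * fsig * t)          # analog input in [-1, 1]

duty = (x + 1) / 2                              # map to duty ratio [0, 1]
phase = (t * fm) % 1.0                          # position within each carrier period
dcm = np.where(phase < duty, 1.0, -1.0)         # duty-cycle modulated square wave

b, a = butter(4, fsig * 4, fs=fs)               # 4th-order IIR low-pass demodulator
recovered = lfilter(b, a, dcm)
print(np.corrcoef(x[2000:], recovered[2000:])[0, 1])  # close to 1 after settling
```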

Keywords: Digital IIR filter, morphological lemmas and theorems, optimal DCM-based ADC, virtual simulation, weighted least pth norm.

230 A Study of the Planning and Designing of the Built Environment under the Green Transit-Oriented Development

Authors: Wann-Ming Wey

Abstract:

In recent years, the problems of global climate change and natural disasters have drawn public concern and attention to environmental sustainability issues. Alongside the environmental planning efforts made for the human environment, Transit-Oriented Development (TOD) has been widely used as one of the future solutions for sustainable city development. In order to be more consistent with sustainable urban development, the planning of the built environment adopted here is based on the concept of Green TOD, which combines TOD and Green Urbanism. Urban development under green TOD encompasses design for environmental protection, maximum enhancement of resource and energy-use efficiency, and the use of technology to construct green buildings and protected areas and to link natural ecosystems and communities. Green TOD not only provides a solution to urban traffic problems but also directs future urban development planning and design toward more sustainable and greener consideration. In this study, we use both the TOD and Green Urbanism concepts to study the planning and design of the built environment. The Fuzzy Delphi Technique (FDT) is utilized to screen suitable criteria of green TOD. Furthermore, Fuzzy Analytic Network Process (FANP) and Quality Function Deployment (QFD) are then used to evaluate the criteria and prioritize the alternatives. The study results can be regarded as future guidelines for built environment planning and design under green TOD development in Taiwan.
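
One common form of the FDT screening step can be sketched as follows; the expert scores, aggregation and threshold are invented for illustration:

```python
# Compact Fuzzy Delphi screening sketch: expert scores per criterion become a
# triangular fuzzy number (min, geometric mean, max); a criterion is kept if
# its defuzzified value clears a threshold.
import numpy as np

def fuzzy_delphi_keep(scores, threshold=6.0):
    l, u = min(scores), max(scores)
    m = float(np.prod(scores) ** (1.0 / len(scores)))   # geometric mean
    crisp = (l + m + u) / 3.0                           # simple defuzzification
    return crisp, crisp >= threshold

experts = {"transit access": [7, 8, 9, 8], "green roofs": [4, 5, 6, 5]}
for criterion, s in experts.items():
    print(criterion, fuzzy_delphi_keep(s))
```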

Keywords: Green transit-oriented development, built environment, fuzzy Delphi technique, quality function deployment, fuzzy analytic network process.

229 Regional Analysis of Streamflow Drought: A Case Study for Southwestern Iran

Authors: M. Byzedi, B. Saghafian

Abstract:

Droughts are complex natural hazards that, to a varying degree, affect some parts of the world every year. The range of drought impacts is related to drought occurring in different stages of the hydrological cycle, and usually different types of droughts, such as meteorological, agricultural, hydrological, and socioeconomic, are distinguished. Streamflow drought was analyzed by the truncation level method (at the 70% level) on daily discharges measured at 54 hydrometric stations in southwestern Iran. Frequency analysis was carried out for the annual maximum series (AMS) of drought deficit volume and duration. Several factors, including physiographic, climatic, geologic, and vegetation cover characteristics, were studied as influential factors in the regional analysis. According to the results of factor analysis, the six most effective factors were identified as watershed area, rainfall from December to February, the percentage of area with Normalized Difference Vegetation Index (NDVI) < 0.1, the percentage of convex area, drainage density, and the minimum watershed elevation; together these explained 90.9% of the variance. Homogeneous regions were determined by cluster analysis and discriminant function analysis. Suitable multivariate regression models were evaluated for streamflow drought deficit volume with a 2-year return period, with a significance level of 0.01. The results showed that watershed area is the most effective factor, highly correlated with deficit volume. Also, drought duration was not a suitable drought index for regional analysis.
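
A sketch of the truncation-level bookkeeping, assuming the 70% level denotes the flow exceeded 70% of the time (Q70) and using synthetic daily flows:

```python
# Truncation-level method sketch: a drought event is a spell of days with
# discharge below the Q70 threshold; its deficit volume is the shortfall
# integrated over the spell.
import numpy as np

def drought_events(q_daily, exceedance=0.70):
    threshold = np.percentile(q_daily, (1 - exceedance) * 100)  # Q70
    events, deficit, duration = [], 0.0, 0
    for q in q_daily:
        if q < threshold:
            deficit += (threshold - q) * 86400     # m3 of shortfall per day
            duration += 1
        elif duration:
            events.append((deficit, duration))
            deficit, duration = 0.0, 0
    if duration:
        events.append((deficit, duration))
    return events                                   # (volume m3, duration days)

q = np.random.default_rng(1).gamma(2.0, 5.0, 365)   # synthetic daily flows, m3/s
ams = max(v for v, d in drought_events(q))           # annual maximum deficit
print(f"annual max deficit volume: {ams:.0f} m3")
```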

Keywords: Iran, Streamflow drought, truncation level method, regional analysis.

228 A Study on the Performance Characteristics of Variable Valve for Reverse Continuous Damper

Authors: Se Kyung Oh, Young Hwan Yoon, Ary Bachtiar Krishna

Abstract:

Nowadays, a passenger car suspension must meet high performance criteria with light weight, low cost, and low energy consumption. A pilot-controlled proportional valve is designed and analyzed to obtain a small rate of pressure change after blow-off, and, to obtain a fast damper response, a reverse damping mechanism is adopted. The reverse continuous variable damper is designed as an HS-SH damper, which offers good body control with reduced force transferred from the tire, compared with any other type of suspension system. The damper structure is designed so that rebound and compression damping forces can be tuned independently, with the variable valve placed externally. The rate of pressure change with respect to the flow rate after blow-off becomes smooth when the fixed orifice size increases, which means that the blow-off slope is controllable via the fixed orifice size. Damping forces are measured as the solenoid current is changed at different piston velocities to confirm a maximum hysteresis of 20 N, linearity, and the range of the damping force. The damping force range is wide and continuous, and is controlled by the spool opening, a scheme usually adopted in proportional valves. The reverse continuous variable damper developed in this study is expected to be utilized in semi-active suspension systems in passenger cars once its performance and the simplicity of its design are confirmed through a real car test.

Keywords: Blow-off, damping force, pilot-controlled proportional valve, reverse continuous damper.

227 Effect of Jatropha curcas Leaf Extract on Castor Oil Induced Diarrhea in Albino Rats

Authors: Fatima U. Maigari, Musa Halilu, M. Maryam Umar, Rabiu Zainab

Abstract:

Plants are used as therapeutic agents and drugs in many parts of the world. Medicinal plants are mostly used in developing countries due to cultural acceptability and beliefs, or due to a lack of easy access to primary health care services. Jatropha curcas is a plant from the Euphorbiaceae family which is widely used in Northern Nigeria as an anti-diarrheal agent. This study was conducted to determine the anti-diarrheal effect of the leaf extract on castor oil induced diarrhea in albino rats. The leaves of J. curcas were collected from Balanga Local Government in Gombe State, north-eastern Nigeria, owing to their local availability. The leaves were air-dried at room temperature and ground to powder. Phytochemical screening was done, and different concentrations of the extract were prepared and administered to the different categories of experimental animals. From the results, the aqueous leaf extract of Jatropha curcas at doses of 200 mg/kg and 400 mg/kg was found to reduce the mean stool score compared to control rats; however, maximum reduction was achieved with the standard drug loperamide (5 mg/kg). Treatment of diarrhea with 200 mg/kg of the extract did not produce any significant decrease in stool fluid content, but the decrease was significant in rats treated with 400 mg/kg of the extract at 2 hours (0.05±0.02) and 4 hours (0.01±0.01). The significant reduction of diarrhea in the experimental animals indicates that the extract possesses some anti-diarrheal activity.

Keywords: Anti-diarrhea, Diarrhea, Jatropha curcas, Loperamide.

226 On the Design of Shape Memory Alloy Locking Mechanism: A Novel Solution for Laparoscopic Ligation Process

Authors: Reza Yousefian, Michael A. Kia, Mehrdad Hosseini Zadeh

Abstract:

Blood vessels must be occluded to avoid loss of blood during laparoscopic surgeries. This paper presents a locking mechanism to be used in a ligation laparoscopic procedure (LigLAP I) as an alternative to a stapling procedure. Currently, stapling devices are used to occlude vessels. Using these devices may result in some problems, including injury to the bile duct, taking up a great deal of space behind the vessel, and bile leakage. In this new procedure, a two-layer suture occludes a vessel. A locking mechanism is also required to hold the suture. Since there is limited space at the device tip, a Shape Memory Alloy (SMA) actuator is used in this mechanism. Suitability for cleanroom applications, small size, and silent performance are among the advantages of SMA actuators in biomedical applications. An experimental study was conducted to examine the function of the locking mechanism. To set up the experiment, a prototype of the locking mechanism was built using nitinol, a nickel-titanium shape memory alloy. The locking mechanism successfully locked a polymer suture in all runs of the experiment. In addition, the effects of various surface materials on the applied pulling forces were studied. Various materials were mounted at the mechanism tip to compare the maximum pulling forces applied to the suture for each material. The results show that the various surface materials on the device tip produce large differences in the applied pulling forces.

Keywords: Laparoscopic surgery, ligation process, locking mechanism, Shape Memory Alloy (SMA) actuator.

225 Development of an Ensemble Classification Model Based on Hybrid Filter-Wrapper Feature Selection for Email Phishing Detection

Authors: R. B. Ibrahim, M. S. Argungu, I. M. Mungadi

Abstract:

It is obvious that, at the present time, the Internet has become an indispensable part of human life. The Internet has provided diverse opportunities to make life easy for human beings through the adoption of various channels, among them email, internet banking, video conferencing, and the like. Email is one of the easiest means of communication, widely accepted among individuals and organizations globally. But over the decades the security integrity of this platform has been challenged by malicious activities like phishing. Email phishing is designed by phishers to fool the recipient into handing over sensitive personal information such as passwords, credit card numbers, account credentials, social security numbers, etc. This activity has caused a great deal of financial damage to email users globally, resulting in bankruptcy, sudden death of victims, and other health-related harm. Although many methods have been proposed to detect email phishing, in this research the results of multiple machine-learning methods for predicting email phishing are compared, with the use of filter-wrapper feature selection. It is worth noting that all three models performed substantially well, but one outperformed the others. The dataset used for these models was obtained from the Kaggle online data repository, and three classifiers (decision tree, Naïve Bayes, and logistic regression) were each ensembled by bagging. Results from the study show that the decision tree (CART) bagging ensemble recorded the highest accuracy of 98.13% using PEF (Phishing Essential Features). This result further demonstrates the dependability of the proposed model.
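
A sketch of the compared bagging ensembles using scikit-learn; the synthetic features below are stand-ins for the Kaggle phishing dataset:

```python
# Bagging ensembles over three base classifiers, as compared in the paper.
from sklearn.datasets import make_classification
from sklearn.ensemble import BaggingClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

X, y = make_classification(n_samples=1000, n_features=20, n_informative=8,
                           random_state=0)

for name, base in [("CART", DecisionTreeClassifier()),
                   ("Naive Bayes", GaussianNB()),
                   ("Logistic Regression", LogisticRegression(max_iter=1000))]:
    # base learner passed positionally for compatibility across sklearn versions
    model = BaggingClassifier(base, n_estimators=50, random_state=0)
    print(f"{name}: {cross_val_score(model, X, y, cv=5).mean():.3f}")
```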

Keywords: Ensemble, hybrid, filter-wrapper, phishing.

224 Stereo Motion Tracking

Authors: Yudhajit Datta, Jonathan Bandi, Ankit Sethia, Hamsi Iyer

Abstract:

Motion tracking and stereo vision are complicated, albeit well-understood problems in computer vision. Existing software that combines the two approaches to perform stereo motion tracking typically employs complicated and computationally expensive procedures. The purpose of this study is to create a simple and effective solution capable of combining the two approaches. The study aims to explore a strategy for combining two techniques: two-dimensional motion tracking using a Kalman filter, and object depth detection using stereo vision. In conventional approaches, objects in the scene of interest are observed using a single camera. For stereo motion tracking, however, the scene of interest is observed using video feeds from two calibrated cameras. Using two simultaneous measurements from the two cameras, the depth of the object from the plane containing the cameras is calculated. The approach attempts to capture the entire three-dimensional spatial information of each object in the scene and represent it through a software estimator object. In discrete intervals, the estimator tracks object motion in the plane parallel to the plane containing the cameras and updates the perpendicular distance of the object from that plane as depth. The ability to efficiently track the motion of objects in three-dimensional space using a simplified approach could prove to be an indispensable tool in a variety of surveillance scenarios. The approach may find applications ranging from high-security surveillance scenes, such as the premises of bank vaults, prisons or other detention facilities, to low-cost applications in supermarkets and car parking lots.
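
The two ingredients can be sketched compactly: depth from disparity (Z = fB/d) and a constant-velocity Kalman filter; camera parameters and noise settings below are placeholders for calibrated values:

```python
# Sketch of the two ingredients: depth from stereo disparity and a
# constant-velocity Kalman filter for the in-plane motion.
import numpy as np

def depth_from_disparity(disparity_px, focal_px=800.0, baseline_m=0.12):
    return focal_px * baseline_m / disparity_px   # metres from the camera plane

# state [x, y, vx, vy]; measure (x, y) each frame
dt = 1 / 30
F = np.array([[1, 0, dt, 0], [0, 1, 0, dt], [0, 0, 1, 0], [0, 0, 0, 1]], float)
H = np.array([[1., 0, 0, 0], [0, 1, 0, 0]])
Q, R = np.eye(4) * 1e-4, np.eye(2) * 1e-2
x, P = np.zeros(4), np.eye(4)

def kalman_step(x, P, z):
    x, P = F @ x, F @ P @ F.T + Q                 # predict
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)                # gain
    x = x + K @ (z - H @ x)                       # update with measurement z
    P = (np.eye(4) - K @ H) @ P
    return x, P

for z in ([0.0, 0.0], [0.1, 0.05], [0.2, 0.1]):   # toy pixel-plane track
    x, P = kalman_step(x, P, np.array(z))
print("state:", x, "| depth at 32 px disparity:", depth_from_disparity(32))
```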

Keywords: Kalman Filter, Stereo Vision, Motion Tracking, Matlab, Object Tracking, Camera Calibration, Computer Vision System Toolbox.

223 Analysis of Driver Point of Regard Determinations with Eye-Gesture Templates Using Receiver Operating Characteristic

Authors: Siti Nor Hafizah binti Mohd Zaid, Mohamed Abdel-Maguid, Abdel-Hamid Soliman

Abstract:

An Advanced Driver Assistance System (ADAS) is a computer system on board a vehicle which is used to reduce the risk of vehicular accidents by monitoring factors relating to the driver, vehicle and environment and taking some action when a risk is identified. Much work has been done on assessing vehicle and environmental state, but there is still comparatively little published work that tackles the problem of driver state. Visual attention is one such driver state. In fact, some researchers claim that lack of attention is the main cause of accidents, as factors such as fatigue, alcohol or drug use, distraction and speeding all impair the driver's capacity to pay attention to the vehicle and road conditions [1]. This seems to imply that the main cause of accidents is inappropriate driver behaviour in cases where the driver is not giving full attention while driving. The work presented in this paper proposes an ADAS which uses an image-based template matching algorithm to detect whether a driver is failing to observe particular windscreen cells. This is achieved by dividing the windscreen into 24 uniform cells (4 rows of 6 columns) and matching video images of the driver's left eye with eye-gesture templates drawn from images of the driver looking at the centre of each windscreen cell. The main contribution of this paper is to assess the accuracy of this approach using Receiver Operating Characteristic analysis. The results of our evaluation give a sensitivity value of 84.3% and a specificity value of 85.0% for the eye-gesture template approach, indicating that it may be useful for driver point of regard determinations.
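
The two reported ROC summary statistics reduce to simple ratios of confusion-matrix counts; the counts below are invented to mirror the reported values:

```python
# Sensitivity and specificity from raw confusion counts (counts are invented).
def sensitivity(tp, fn):
    return tp / (tp + fn)      # true-positive rate: off-cell gaze correctly flagged

def specificity(tn, fp):
    return tn / (tn + fp)      # true-negative rate: on-cell gaze correctly passed

print(sensitivity(tp=843, fn=157))   # ~0.843, like the reported 84.3%
print(specificity(tn=850, fp=150))   # ~0.850, like the reported 85.0%
```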

Keywords: Advanced Driver Assistance Systems, Eye-Tracking, Hazard Detection.

222 Space Telemetry Anomaly Detection Based on Statistical PCA Algorithm

Authors: B. Nassar, W. Hussein, M. Mokhtar

Abstract:

The critical concern of satellite operations is to ensure the health and safety of satellites. The worst case in this perspective is probably the loss of a mission, but the more common interruption of satellite functionality can also compromise mission objectives. All the data acquired from the spacecraft are known as telemetry (TM), which contains a wealth of information related to the health of all its subsystems. Each single item of information is contained in a telemetry parameter, which represents a time-variant property (i.e., a status or a measurement) to be checked. As a consequence, TM monitoring systems are continuously improved to reduce the time required to respond to changes in a satellite's state of health. A rapid grasp of the current state of the satellite is thus very important for responding to occurring failures. Statistical multivariate latent techniques are among the vital learning tools used to tackle the problem above coherently. Information extraction from such rich data sources using advanced statistical methodologies is a challenging task due to the massive volume of data. To solve this problem, in this paper we present a proposed unsupervised learning algorithm based on the Principal Component Analysis (PCA) technique. The algorithm is applied to data from an actual remote sensing spacecraft. Data from the Attitude Determination and Control System (ADCS) were acquired under two operating conditions: normal and faulty states. The models were built and tested under these conditions, and the results show that the algorithm could successfully differentiate between the two operating conditions. Furthermore, the algorithm provides competent information for prediction, as well as adding more insight and physical interpretation to the ADCS operation.
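
A hedged sketch of this monitoring style: fit PCA on normal-state telemetry, then flag samples whose Hotelling T2 or residual (SPE/Q) statistic is large; the data here are synthetic stand-ins for ADCS parameters:

```python
# PCA-based telemetry monitoring sketch with Hotelling T2 and SPE/Q statistics.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
normal = rng.normal(size=(500, 10))              # training telemetry (normal state)
faulty = normal[:50] + np.r_[np.zeros(5), 4 * np.ones(5)]  # shifted channels

pca = PCA(n_components=3).fit(normal)

def t2_and_spe(X):
    scores = pca.transform(X)
    t2 = np.sum(scores**2 / pca.explained_variance_, axis=1)   # Hotelling T2
    resid = X - pca.inverse_transform(scores)
    spe = np.sum(resid**2, axis=1)                             # Q statistic
    return t2, spe

t2_lim = np.percentile(t2_and_spe(normal)[0], 99)              # empirical limit
t2_fault, _ = t2_and_spe(faulty)
print(f"{np.mean(t2_fault > t2_lim):.0%} of faulty samples exceed the T2 limit")
```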

Keywords: Space telemetry monitoring, multivariate analysis, PCA algorithm, space operations.

221 Three Dimensional Large Eddy Simulation of Blood Flow and Deformation in an Elastic Constricted Artery

Authors: Xi Gu, Guan Heng Yeoh, Victoria Timchenko

Abstract:

In the current work, a three-dimensional geometry of a 75% stenosed blood vessel is analyzed. Large eddy simulation (LES) with a dynamic subgrid-scale Smagorinsky model is applied to model the turbulent pulsatile flow. The geometry, the transmural pressure and the properties of the blood and the elastic boundary were based on clinical measurement data. For the flexible wall model, a thin solid region is constructed around the 75% stenosed blood vessel. The deformation of this solid region was modelled as a deforming boundary to reduce the computational cost of the solid model. Fluid-structure interaction is realized via a two-way coupling between the blood flow modelled via LES and the deforming vessel. Information on the flow pressure and the wall motion was exchanged continually during the cycle by an arbitrary Lagrangian-Eulerian method, with the boundary condition of the current time step depending on the previous solutions. The fluctuation of the velocity in the post-stenotic region was analyzed in the study. The axial velocity at normalized position Z = 0.5 shows a negative value near the vessel wall. The displacement of the elastic boundary was also examined; in particular, the wall displacements at systole and diastole were compared. The negative displacement at the stenosis indicates a collapse during the maximum-velocity and deceleration phases.
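
For reference, the static Smagorinsky closure on which the dynamic variant builds computes the subgrid eddy viscosity from the resolved strain rate (the dynamic procedure then determines C_s from the resolved scales instead of fixing it a priori):

```latex
\nu_t = (C_s \Delta)^2\,\lvert\bar{S}\rvert,
\qquad \lvert\bar{S}\rvert = \sqrt{2\,\bar{S}_{ij}\bar{S}_{ij}},
\qquad \bar{S}_{ij} = \frac{1}{2}\left(
  \frac{\partial \bar{u}_i}{\partial x_j} +
  \frac{\partial \bar{u}_j}{\partial x_i}\right)
```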

Keywords: Large Eddy Simulation, Fluid-Structure Interaction, Constricted Artery, Computational Fluid Dynamics.

220 Linguistic, Pragmatic and Evolutionary Factors in Wason Selection Task

Authors: Olimpia Matarazzo, Fabrizio Ferrara

Abstract:

In two studies we tested the hypothesis that the appropriate linguistic formulation of a deontic rule (i.e., the formulation which clarifies the monadic nature of deontic operators) should produce more correct responses than the conditional formulation in the Wason selection task. We tested this assumption by presenting a prescription rule and a prohibition rule in conditional vs. proper deontic formulation. We contrasted this hypothesis with two other hypotheses derived from social contract theory and relevance theory. According to the first theory, a deontic rule expressed in terms of cost-benefit should elicit a cheater-detection module, sensitive to mental-state attributions and thus able to discriminate intentional rule violations from accidental rule violations. We tested this prediction by distinguishing the two types of violations. According to relevance theory, performance in the selection task should improve as cognitive effect increases and cognitive effort decreases. We tested this prediction by focusing the experimental instructions on the rule vs. the action covered by the rule. In study 1, in which 480 undergraduates participated, we tested these predictions through a 2 x 2 x 2 x 2 (type of rule x rule formulation x type of violation x experimental instructions) between-subjects design. In study 2, carried out by means of a 2 x 2 (rule formulation x type of violation) between-subjects design, we retested the rule formulation hypothesis against the cheater-detection hypothesis through a new version of the selection task in which intentional vs. accidental rule violations were better discriminated. 240 undergraduates participated in this study. Results corroborate our hypothesis and challenge the contrasting assumptions. However, they show that the conditional formulation of deontic rules produces lower performance than what is reported in the literature.

Keywords: Deontic reasoning; Evolutionary, linguistic, logical, pragmatic factors; Wason selection task

219 Application of Interferometric Techniques for Quality Control of Oils Used in the Food Industry

Authors: Andres Piña, Amy Meléndez, Pablo Cano, Tomas Cahuich

Abstract:

The purpose of this project is to propose a quick and environmentally friendly alternative for measuring the quality of oils used in the food industry. There is evidence that repeated and indiscriminate use of oils in food processing causes physicochemical changes, with the formation of potentially toxic compounds that can affect the health of consumers and cause organoleptic changes. In order to assess the quality of oils, non-destructive optical techniques such as interferometry offer a rapid alternative to the use of reagents, relying only on the interaction of light with the oil. In this project, we used interferograms of oil samples placed under different heating conditions to establish the changes in their quality. These interferograms were obtained by means of a Mach-Zehnder interferometer using a beam from a 10 mW HeNe laser at 632.8 nm. Each interferogram was captured and analyzed, and the full width at half maximum (FWHM) was measured using the Amcap and ImageJ software. The FWHM values were organized into three groups. The average of the FWHMs of group A shows nearly linear behavior; therefore, exposure time is probably not relevant when the oil is kept at constant temperature. Group B exhibits a slightly exponential trend when the temperature rises between 373 K and 393 K. Results of the Student's t-test show, with 95% probability (0.05), the existence of variation in the molecular composition of both samples. Furthermore, we found a correlation between the iodine indices (physicochemical analysis) and the interferograms (optical analysis) of group C. Based on these results, this project highlights the importance of the quality of the oils used in the food industry and shows how interferometry can be a useful tool for this purpose.
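
A generic FWHM measurement on a sampled intensity profile can be sketched as follows, using a synthetic Gaussian in place of an interferogram cross-section:

```python
# FWHM sketch: locate the half-maximum crossings of a sampled intensity
# profile by linear interpolation. The Gaussian profile is synthetic.
import numpy as np

def fwhm(x, y):
    half = y.max() / 2.0
    above = np.where(y >= half)[0]
    i, j = above[0], above[-1]                     # first/last sample above half-max
    # linear interpolation at both edges (y is rising at i, falling at j)
    x_left = np.interp(half, [y[i - 1], y[i]], [x[i - 1], x[i]])
    x_right = np.interp(half, [y[j + 1], y[j]], [x[j + 1], x[j]])
    return x_right - x_left

x = np.linspace(-5, 5, 1001)
y = np.exp(-x**2 / (2 * 1.2**2))                   # sigma = 1.2
print(fwhm(x, y), "vs theory", 2.3548 * 1.2)       # FWHM = 2*sqrt(2 ln 2)*sigma
```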

Keywords: Food industry, interferometric, oils, quality control.

218 A Study of RSCMAC Enhanced GPS Dynamic Positioning

Authors: Ching-Tsan Chiang, Sheng-Jie Yang, Jing-Kai Huang

Abstract:

The purpose of this research is to develop and apply the RSCMAC to enhance the dynamic accuracy of the Global Positioning System (GPS). GPS devices provide accurate positioning, speed detection and a highly precise time standard for over 98% of the area of the Earth. The overall operation of the Global Positioning System includes 24 GPS satellites in space; signal transmission comprising two carrier frequencies (Link 1 and Link 2) and two sets of random telegraphic codes (C/A code and P code); and ground monitoring stations or client GPS receivers. Using only four satellites, the client's position and elevation can be detected rapidly; the more satellites that are receivable, the more accurately the position can be decoded. Currently, the standard positioning accuracy of simplified GPS receivers has greatly increased, but because of satellite clock error, tropospheric delay and ionospheric delay, current measurement accuracy is at the level of 5-15 m. To increase dynamic GPS positioning accuracy, most researchers mainly use an inertial navigation system (INS) and the installation of other sensors or maps for assistance. This research utilizes the RSCMAC advantages of fast learning, assured learning convergence, and the capability of solving time-related dynamic system problems, combined with a static positioning calibration structure, to improve GPS dynamic accuracy. The increase in GPS dynamic positioning accuracy is achieved by using the RSCMAC system with GPS receivers to collect dynamic error data for error prediction, followed by using the predicted error to correct the GPS dynamic positioning data. The ultimate purpose of this research is to improve the dynamic positioning error of cheap GPS receivers, so that the economic benefits are enhanced as the accuracy increases.

Keywords: Dynamic Error, GPS, Prediction, RSCMAC.

217 The DAQ Debugger for iFDAQ of the COMPASS Experiment

Authors: Y. Bai, M. Bodlak, V. Frolov, S. Huber, V. Jary, I. Konorov, D. Levit, J. Novy, D. Steffen, O. Subrt, M. Virius

Abstract:

In general, state-of-the-art Data Acquisition Systems (DAQ) in high energy physics experiments must satisfy high requirements in terms of reliability, efficiency and data rate capability. This paper presents the development and deployment of a debugging tool named DAQ Debugger for the intelligent, FPGA-based Data Acquisition System (iFDAQ) of the COMPASS experiment at CERN. Utilizing a hardware event builder, the iFDAQ is designed to be able to read out data at the experiment's average maximum rate of 1.5 GB/s. In complex software such as the iFDAQ, comprising thousands of lines of code, the debugging process is absolutely essential to reveal all software issues. Unfortunately, conventional debugging of the iFDAQ is not possible during real data taking. The DAQ Debugger is a tool for identifying a problem, isolating the source of the problem, and then either correcting the problem or determining a way to work around it. It provides a layer that is easy to integrate into any process and has no impact on process performance. Based on the handling of system signals, the DAQ Debugger represents an alternative to the conventional debuggers provided by most integrated development environments. Whenever a problem occurs, it generates reports containing all the information needed for deeper investigation and analysis. The DAQ Debugger was fully incorporated into all processes in the iFDAQ during the 2016 run. It helped to reveal remaining software issues and significantly improved the stability of the system in comparison with the previous run. In the paper, we present the DAQ Debugger from several perspectives and discuss it in detail.
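
The signal-handling idea can be sketched in a few lines; this Python sketch only mirrors the concept (the real tool is integrated into the iFDAQ C++/Qt processes) and the report format is invented:

```python
# Concept sketch: install handlers that dump a stack report when the process
# receives a fatal signal, without affecting normal operation.
import faulthandler
import signal
import sys
import traceback

faulthandler.enable()                      # native tracebacks on SIGSEGV etc.

def report(signum, frame):
    with open("daq_debug_report.txt", "w") as f:
        f.write(f"signal {signum} received\n")
        traceback.print_stack(frame, file=f)   # where the process was
    sys.exit(1)

for sig in (signal.SIGTERM, signal.SIGABRT):
    signal.signal(sig, report)

print("process running; send SIGTERM to trigger a report")
```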

Keywords: DAQ debugger, data acquisition system, FPGA, system signals, Qt framework.

216 Progressive AAM Based Robust Face Alignment

Authors: Daehwan Kim, Jaemin Kim, Seongwon Cho, Yongsuk Jang, Sun-Tae Chung, Boo-Gyoun Kim

Abstract:

AAM has been successfully applied to face alignment, but its performance is very sensitive to initial values. If the initial values are somewhat distant from the global optimum values, there is a good chance that AAM-based face alignment will converge to a local minimum. In this paper, we propose a progressive AAM-based face alignment algorithm which first finds the feature parameter vector fitting the inner facial feature points of the face, and later localizes the feature points of the whole face using this information. The proposed algorithm exploits the fact that the feature points of the inner part of the face are less variant and less affected by the background surrounding the face than those of the outer part (like the chin contour). The proposed algorithm consists of two stages: a modeling and relation derivation stage and a fitting stage. The modeling and relation derivation stage first constructs two AAM models, the inner-face AAM model and the whole-face AAM model, and then derives a relation matrix between the inner-face AAM parameter vector and the whole-face AAM parameter vector. In the fitting stage, the proposed algorithm aligns the face progressively through two phases. In the first phase, the algorithm finds the feature parameter vector fitting the inner-face AAM model to a new input face image; in the second phase, it localizes the whole facial feature points of the new input face image based on the whole-face AAM model, using an initial parameter vector estimated from the inner feature parameter vector obtained in the first phase and the relation matrix obtained in the first stage. Through experiments, it is verified that the proposed progressive AAM-based face alignment algorithm is more robust with respect to pose, illumination, and face background than the conventional basic AAM-based face alignment algorithm.
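
The relation-derivation step admits a compact least-squares sketch; dimensions and data below are invented for illustration:

```python
# Hedged sketch: estimate a linear map R that predicts whole-face AAM
# parameters from inner-face parameters via least squares over training shapes.
import numpy as np

rng = np.random.default_rng(0)
P_inner = rng.normal(size=(300, 12))            # inner-face parameter vectors
true_R = rng.normal(size=(12, 20))
P_whole = P_inner @ true_R + 0.01 * rng.normal(size=(300, 20))

R, *_ = np.linalg.lstsq(P_inner, P_whole, rcond=None)   # relation matrix

p_init = P_inner[0] @ R         # initial whole-face parameters for fitting phase
print(np.allclose(p_init, P_whole[0], atol=0.1))
```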

Keywords: Face Alignment, AAM, facial feature detection, model matching.

215 FACTS Based Stabilization for Smart Grid Applications

Authors: Adel M. Sharaf, Foad H. Gandoman

Abstract:

Nowadays, photovoltaic (PV) farms/parks and large PV smart grid interface schemes are emerging and commonly utilized in renewable energy distributed generation. However, PV hybrid DC-AC schemes using interface power electronic converters usually have a negative impact on power quality and on the stabilization of the modern electrical network under load excursions and network fault conditions in the smart grid. Consequently, robust FACTS-based interface schemes are required to ensure efficient energy utilization and stabilization of bus voltages, as well as to limit switching/fault inrush current conditions. FACTS devices are also used in smart grid battery interface and storage schemes with PV-battery storage hybrid systems, as an elegant alternative for renewable energy utilization with backup battery storage for electric utility energy and demand-side management, providing the needed energy and power capacity under heavy load conditions. The paper presents a robust PV-Li-Ion battery storage interface scheme for low-voltage distribution/utilization networks using FACTS stabilization enhancement and dynamic maximum PV power tracking controllers. Digital simulation and validation of the proposed scheme are carried out in the MATLAB/Simulink software environment for a low-voltage distribution/utilization system feeding hybrid linear, motorized-inrush and nonlinear loads from a DC-AC interface VSC 6-pulse inverter fed from the PV park/farm with a backup Li-Ion storage battery.
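
As one common way to realize dynamic maximum PV power tracking, a perturb-and-observe loop can be sketched as follows; the P-V curve is a toy stand-in for a real panel model, and the paper's controller may differ:

```python
# Perturb-and-observe MPPT sketch on a toy P-V curve with a single maximum.
import numpy as np

def pv_power(v):                         # toy P-V curve (placeholder model)
    return np.maximum(0.0, v * (8.0 - 0.4 * v**1.5))

def perturb_and_observe(v0=2.0, dv=0.05, steps=200):
    v, p_prev, direction = v0, pv_power(v0), +1.0
    for _ in range(steps):
        v += direction * dv              # perturb operating voltage
        p = pv_power(v)
        if p < p_prev:                   # power fell -> reverse direction
            direction = -direction
        p_prev = p
    return v, p_prev

v_mpp, p_mpp = perturb_and_observe()
print(f"tracked MPP near V={v_mpp:.2f}, P={p_mpp:.2f}")
```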

Keywords: AC FACTS, Smart grid, Stabilization, PV-Battery Storage, Switched Filter-Compensation (SFC).

214 Response Surface Methodology Approach to Defining Ultrafiltration of Steepwater from Corn Starch Industry

Authors: Zita I. Šereš, Ljubica P. Dokić, Dragana M. Šoronja Simović, Cecilia Hodur, Zsuzsanna Laszlo, Ivana Nikolić, Nikola Maravić

Abstract:

In this work the concentration of steepwater from the corn starch industry is monitored using an ultrafiltration membrane. The aim was to examine the conditions of ultrafiltration of steepwater applying a membrane with a 2.5 nm pore size. The parameters varied during the course of ultrafiltration were the transmembrane pressure and the flow rate, while the permeate flux and the dry matter content of the permeate and retentate were the dependent parameters constantly monitored during the process. The ultrafiltration experiments were conducted on samples of steepwater obtained from the starch wet milling plant "Jabuka" in Pancevo. Ultrafiltration was carried out on a single-channel membrane, 250 mm in length, with an inner diameter of 6.8 mm and an outer diameter of 10 mm. The membrane is made of α-Al2O3 with a TiO2 layer, obtained from GEA (Germany). The experiments were carried out at flow rates ranging from 100 to 200 l/h and transmembrane pressures of 1-3 bar. During the steepwater ultrafiltration experiments, the change of the permeate flux and of the dry matter content of the permeate and retentate, as well as the absorbance changes of the permeate and retentate, were monitored. The experimental results showed that the flux reaches a maximum of about 40 l m-2 h-1. For the responses obtained from the experiments, a second-degree polynomial model was established to evaluate and quantify the influence of the variables. The quadratic equation fits the experimental values, with a coefficient of determination for flux of 0.96. The dry matter content of the retentate increased by about 6%, while the dry matter content of the permeate was reduced by about 35-40%. After steepwater ultrafiltration, the permeate contains about 40% less dry matter than the feed.
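
The second-degree response-surface fit can be sketched as follows; the data points are invented placeholders within the reported experimental ranges:

```python
# Quadratic response-surface sketch: permeate flux as a second-degree
# polynomial in transmembrane pressure and flow rate. Data are illustrative.
import numpy as np
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import LinearRegression
from sklearn.pipeline import make_pipeline

X = np.array([[1, 100], [1, 200], [2, 100], [2, 150], [2, 200], [3, 100], [3, 200]],
             dtype=float)                        # (pressure bar, flow l/h)
flux = np.array([18.0, 22.0, 27.0, 31.0, 33.0, 35.0, 40.0])  # l m-2 h-1, invented

model = make_pipeline(PolynomialFeatures(degree=2), LinearRegression()).fit(X, flux)
print("R^2 =", model.score(X, flux))             # paper reports 0.96 for flux
print("predicted flux at 2.5 bar, 180 l/h:", model.predict([[2.5, 180]])[0])
```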

Keywords: Ultrafiltration, steepwater, starch industry, ceramic membrane.

213 Gender Perspective Considerations in Disasters like Earthquakes and Floods of Pakistan

Authors: Muhammad Naseem Baig, Razia Sharif

Abstract:

For many decades, human beings have suffered from a plethora of natural disasters. Disasters occur frequently, and conceptions of them change as more and more advances are made. Although we live in a technological era, in developing countries like Pakistan disaster impacts are still shaped by socially constructed roles. The need is to understand the most vulnerable group of society, i.e., females; their issues are complex in nature because of their undermined gender status in society. There is a need to identify as many issues regarding females as possible and to enhance the achievement of the Millennium Development Goals (MDGs). Gender issues are of great concern all around the globe, including in Pakistan. Here, female visibility in society is low, both in general and during disasters, and there is a failure to understand the double burden women carry, comprising productive work and reproductive care. Women contribute a great deal to society, so we need to make them more disaster resilient. For this, non-structural measures like awareness campaigns, training and education must be carried out. In rural and urban settings alike, in any disaster such as an earthquake or flood, elements like gender, age, physical health and demographic conditions contribute to vulnerability. In Pakistan, gender issues in disasters received little attention before the 2005 earthquake and the 2010 floods. Significant achievements were made after the 2010 floods, when a gender and child cell was created to provide all facilities to women and girls. The aim of the study is to highlight all the facilities necessary in a disaster to build coping mechanisms in females, from basic rights up to an advanced level including education.

Keywords: Disaster resilient, Gender cell, Millennium development.

212 Profile Calculation in Water Phantom of Symmetric and Asymmetric Photon Beam

Authors: N. Chegeni, M. J. Tahmasebi Birgani

Abstract:

Nowadays, in most radiotherapy departments, the commercial treatment planning systems (TPS) used to calculate dose distributions need to be verified; therefore, quick, easy-to-use and low-cost dose distribution algorithms are desirable to test and verify the performance of the TPS. In this paper, we put forth an analytical method to calculate the phantom scatter contribution and the depth dose on the central axis based on the equivalent square concept. This method was then generalized to calculate the profiles at any depth and for several field shapes, regular or irregular, under symmetric and asymmetric photon beam conditions. Varian 2100 C/D and Siemens Primus Plus linacs with 6 and 18 MV photon beams were used for the irradiations. Percentage depth doses (PDDs) were measured for a large number of square fields for both energies, and 45º wedges were employed to obtain the profiles at any depth. To assess the accuracy of the calculated profiles, several profile measurements were carried out for some treatment fields. The calculated and measured profiles were compared by gamma-index calculation. All γ-index calculations were based on a 3% dose criterion and a 3 mm distance-to-agreement (DTA) acceptance criterion. The γ values were less than 1 at most points. However, the maximum γ observed was about 1.10, in the penumbra region in most fields and in the central area for the asymmetric fields. This analytical approach provides a generally quick and fairly accurate algorithm to calculate the dose distribution for some treatment fields in conventional radiotherapy.
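
A simple one-dimensional global gamma-index sketch with the 3%/3 mm criteria; the profiles are synthetic, and the implementation is a simplification of a full 2D/3D gamma computation:

```python
# 1D global gamma index (3% dose, 3 mm DTA): for each measured point, take the
# minimum combined dose/distance metric over the calculated profile;
# gamma <= 1 means the point passes.
import numpy as np

def gamma_1d(x_meas, d_meas, x_calc, d_calc, dose_tol=0.03, dta_mm=3.0):
    d_ref = d_meas.max()                          # global normalization
    gammas = []
    for xm, dm in zip(x_meas, d_meas):
        dist2 = ((x_calc - xm) / dta_mm) ** 2
        dose2 = ((d_calc - dm) / (dose_tol * d_ref)) ** 2
        gammas.append(np.sqrt(np.min(dist2 + dose2)))
    return np.array(gammas)

x = np.linspace(-50, 50, 201)                     # mm
calc = np.exp(-(x / 30.0) ** 8)                   # flat-ish calculated profile
meas = calc * 1.01 + 0.005                        # slightly offset "measurement"
g = gamma_1d(x, meas, x, calc)
print(f"max gamma = {g.max():.2f}, pass rate = {np.mean(g <= 1):.0%}")
```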

Keywords: Dose distribution, equivalent field, asymmetric field, irregular field.
