Search results for: testing techniques
8268 Study on Mitigation Measures of Gumti Hydro Power Plant Using Analytic Hierarchy Process and Concordance Analysis Techniques
Authors: K. Majumdar, S. Datta
Abstract:
Electricity is recognized as fundamental to industrialization and to improving people's quality of life. Harnessing the immense untapped hydropower potential of the Tripura region opens avenues for growth and provides an opportunity to improve the well-being of the people of the region, while making a substantial contribution to the national economy. The Gumti hydro power plant generates power to mitigate the power crisis in Tripura, India. The first unit of the plant (5 MW) was commissioned in June 1976, and another two units of 5 MW each were commissioned simultaneously. However, out of the 15 MW installed capacity, only 8-9 MW is currently produced during the rainy season, and during the lean season production drops to 0.5 MW due to shortage of water. It is therefore essential to implement mitigation measures so that further deterioration can be prevented and the original capacity restored. The decision-making abilities of the Analytic Hierarchy Process (AHP) and Concordance Analysis Techniques (CAT) are utilized to identify the better decision or solution to the present problem. Related attributes were identified through expert surveys and from the available reports and literature. Similar criteria were removed, and ultimately seven relevant ones were identified. All the attributes are compared with each other and rated according to their importance over the others with the help of a pairwise comparison matrix. In the present investigation, different mitigation measures are identified and compared to find the most suitable alternative, one that can resolve the present uncertainties surrounding the existence of the Gumti hydro power plant.
Keywords: concordance analysis techniques, analytic hierarchy process, hydro power
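The pairwise comparison step described in this abstract can be sketched in code. The following is a minimal illustration, not the authors' implementation: the 3x3 matrix and its Saaty-scale ratings are invented for demonstration, and the priority vector is derived with the common geometric-mean approximation of the principal eigenvector.

```python
import math

def ahp_weights(pairwise):
    """Priority weights from an AHP pairwise comparison matrix,
    using the geometric-mean approximation of the principal eigenvector."""
    n = len(pairwise)
    gm = [math.prod(row) ** (1.0 / n) for row in pairwise]  # row geometric means
    total = sum(gm)
    return [g / total for g in gm]                          # normalise to sum to 1

def consistency_ratio(pairwise):
    """Saaty consistency ratio; values below 0.1 are conventionally acceptable."""
    ri = {3: 0.58, 4: 0.90, 5: 1.12, 6: 1.24, 7: 1.32}      # random indices
    n = len(pairwise)
    w = ahp_weights(pairwise)
    # Estimate the principal eigenvalue from the product A @ w.
    aw = [sum(a * b for a, b in zip(row, w)) for row in pairwise]
    lam_max = sum(x / y for x, y in zip(aw, w)) / n
    ci = (lam_max - n) / (n - 1)
    return ci / ri[n]

# Illustrative matrix for three hypothetical criteria (a_ij = importance of i over j).
A = [[1.0, 3.0, 5.0],
     [1/3, 1.0, 2.0],
     [1/5, 1/2, 1.0]]
weights = ahp_weights(A)  # roughly [0.65, 0.23, 0.12]
```

In a full AHP study, each candidate mitigation measure would then be scored against each criterion and ranked by its weighted sum.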
Procedia PDF Downloads 354
8267 A Refrigerated Condition for the Storage of Glucose Test Strips at Health Promoting Hospitals: An Implication for Hospitals with Limited Air Conditioners
Authors: Wanutchaya Duanginta, Napaporn Apiratmateekul, Tippawan Sangkaew, Sunaree Wekinhirun, Kunchit Kongros, Wanvisa Treebuphachatsakul
Abstract:
Thailand has a tropical climate with an average outdoor ambient air temperature of over 30°C, which can exceed manufacturer recommendations for the storage of glucose test strips. This study monitored temperature and humidity at the actual glucose test strip storage sites of five sub-district health promoting hospitals (HPHs) in Phitsanulok Province and evaluated refrigerated storage as an alternative. Five calibrated data loggers were placed at the actual storage sites at the five HPHs for 8 weeks between April and June. For the stress test, two lot numbers of glucose test strips, each with two glucose meters, were kept in a plastic box with desiccants and placed in a refrigerator calibrated to 4°C and at room temperature (RT). Temperature and humidity in the refrigerator and at RT were measured every hour for 30 days. The mean temperature for storing test strips at the five HPHs ranged from 29°C to 33°C, and three of the five HPHs (60%) had a mean temperature above 30°C. The refrigerator temperature was 3.8 ± 2.0°C (2.0°C to 6.5°C), and the relative humidity was 51 ± 2% (42 to 54%). The maximum blood glucose values measured with refrigerator-stored test strips were not significantly different (p > 0.05) from those of unstressed test strips for both glucose meters, which used the amperometry-GDH-PQQ and amperometry-GDH-FAD principles. Opening the test strip vial daily resulted in higher variation than returning it to the refrigerator after a single use; however, the variation was still within an acceptable range. This study concludes that glucose test strips can be stored in plastic boxes in a refrigerator if the temperature and humidity are well controlled. Refrigerated storage of glucose test strips during hot and humid weather may be useful for HPHs with limited air conditioners.
Keywords: environmental stressed test, thermal stressed test, quality control, point-of-care testing
Procedia PDF Downloads 194
8266 Hydrogen Purity: Developing Low-Level Sulphur Speciation Measurement Capability
Authors: Sam Bartlett, Thomas Bacquart, Arul Murugan, Abigail Morris
Abstract:
Fuel cell electric vehicles provide the potential to decarbonise road transport, create new economic opportunities, diversify national energy supply, and significantly reduce the environmental impacts of road transport. A potential issue, however, is that the catalyst used at the fuel cell cathode is susceptible to degradation by impurities, especially sulphur-containing compounds. A recent European Directive (2014/94/EU) stipulates that, from November 2017, all hydrogen provided to fuel cell vehicles in Europe must comply with the hydrogen purity specifications listed in ISO 14687-2; this includes reactive and toxic chemicals such as ammonia and total sulphur-containing compounds. This requirement poses great analytical challenges due to the instability of some of these compounds in calibration gas standards at relatively low amount fractions and the difficulty associated with undertaking measurements of groups of compounds rather than individual compounds. Without the necessary reference materials and analytical infrastructure, hydrogen refuelling stations will not be able to demonstrate compliance with the ISO 14687 specifications. The hydrogen purity laboratory at NPL provides world-leading, accredited purity measurements that allow hydrogen refuelling stations to evidence compliance with ISO 14687. Utilising state-of-the-art methods developed by NPL's hydrogen purity laboratory, including a novel method for measuring total sulphur compounds at 4 nmol/mol and a hydrogen impurity enrichment device, we provide the capabilities necessary to achieve these goals. An overview of these capabilities will be given in this paper.
As part of the EMPIR Hydrogen co-normative project 'Metrology for sustainable hydrogen energy applications', NPL are developing a validated analytical methodology for the measurement of speciated sulphur-containing compounds in hydrogen at low amount fractions (pmol/mol to nmol/mol) to allow identification and measurement of individual sulphur-containing impurities in real samples of hydrogen (as opposed to a 'total sulphur' measurement). This is achieved by producing a suite of stable, gravimetrically-prepared primary reference gas standards containing low amount fractions of sulphur-containing compounds (hydrogen sulphide, carbonyl sulphide, carbon disulphide, 2-methyl-2-propanethiol and tetrahydrothiophene have been selected for use in this study) to be used in conjunction with novel dynamic dilution facilities to enable generation of pmol/mol to nmol/mol level gas mixtures (a dynamic method is required because compounds at these levels would be unstable in gas cylinder mixtures). Method development and optimisation are performed using gas chromatographic techniques assisted by cryo-trapping technologies and coupled with sulphur chemiluminescence detection to allow improved qualitative and quantitative analyses of sulphur-containing impurities in hydrogen. The paper will review state-of-the-art gas standard preparation techniques, including the use and testing of dynamic dilution technologies for reactive chemical components in hydrogen. Method development will also be presented, highlighting the advances in the measurement of speciated sulphur compounds in hydrogen at low amount fractions.
Keywords: gas chromatography, hydrogen purity, ISO 14687, sulphur chemiluminescence detector
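The dynamic dilution principle mentioned above, generating pmol/mol mixtures on demand because they would be unstable in cylinders, rests on a simple flow calculation. As a hedged sketch (the flow rates and parent amount fraction below are illustrative, not NPL's actual operating values), the amount fraction after mixing a parent standard with pure diluent hydrogen, assuming ideal mixing of molar flow rates, is:

```python
def diluted_fraction(x_parent, flow_parent, flow_diluent):
    """Amount fraction after dynamic dilution of a parent gas standard
    with pure diluent, assuming ideal mixing of molar flow rates
    (same units for both flows, e.g. mL/min)."""
    return x_parent * flow_parent / (flow_parent + flow_diluent)

# Two cascaded 1:100 dilution stages take a 100 nmol/mol gravimetric
# standard down to the 10 pmol/mol level.
stage1 = diluted_fraction(100e-9, 10.0, 990.0)   # 1 nmol/mol
stage2 = diluted_fraction(stage1, 10.0, 990.0)   # 10 pmol/mol
```

In practice the uncertainty of such a mixture is dominated by the calibration of the two flow controllers and by any losses of the reactive sulphur species on wetted surfaces, which is why validation against independent standards is needed.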
Procedia PDF Downloads 225
8265 Location Privacy Preservation of Vehicle Data In Internet of Vehicles
Authors: Ying Ying Liu, Austin Cooke, Parimala Thulasiraman
Abstract:
The Internet of Things (IoT) has sparked recent research interest in the Internet of Vehicles (IoV). In this paper, we focus on one research area in IoV: preserving the location privacy of vehicle data. We discuss existing location privacy preserving techniques and provide a scheme for evaluating these techniques under IoV traffic conditions. We propose a different strategy for applying Differential Privacy, using a k-d tree data structure to preserve location privacy, and experiment on the real-world Gowalla dataset. We show that our strategy produces differentially private data with good preservation of utility, achieving regression accuracy similar to that of the original dataset on an LSTM (Long Short-Term Memory) neural network traffic predictor.
Keywords: differential privacy, internet of things, internet of vehicles, location privacy, privacy preservation scheme
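The differential privacy guarantee referenced in this abstract is typically obtained by adding calibrated Laplace noise. The sketch below is not the paper's k-d tree scheme; it only illustrates, under the standard assumption of unit sensitivity for histogram queries, how per-cell location counts (such as those held in the leaves of a k-d tree partition) can be released with epsilon-differential privacy:

```python
import math
import random

def laplace_noise(scale):
    """Draw Laplace(0, scale) noise by inverse-CDF sampling."""
    u = random.random() - 0.5
    return -scale * math.copysign(math.log(1 - 2 * abs(u)), u)

def private_counts(counts, epsilon):
    """Laplace mechanism for a histogram of location counts.
    Adding or removing one record changes one cell count by 1, so the
    L1 sensitivity is 1 and the noise scale is 1/epsilon."""
    return [c + laplace_noise(1.0 / epsilon) for c in counts]
```

Smaller epsilon means stronger privacy but noisier counts; spatial partitions such as k-d trees are used to keep cells large enough that the noise does not swamp the signal.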
Procedia PDF Downloads 180
8264 Application of Lean Six Sigma Tools to Minimize Time and Cost in Furniture Packaging
Authors: Suleiman Obeidat, Nabeel Mandahawi
Abstract:
In this work, the packaging process for a household move is improved. The customers of such a move need their household goods to be moved from their current house to the new one with minimum damage, in an organized manner, on time, and at minimum cost. Our goal was to improve process time efficiency by 10% to 20%, achieve a 90% reduction in damaged parts, and obtain an acceptable improvement in the total cost of the move. The expected ROI was 833%. Many improvement techniques have been applied to the way the boxes are prepared, their preparation cost, packing the goods, labeling them, and moving them to a staging area for the move-out. The DMAIC technique is used in this work: a SIPOC diagram, a value stream map of the "As Is" process, Root Cause Analysis, maps of the "Future State" and "Ideal State", and an Improvement Plan. A value of ROI = 624% is obtained, which is lower than the expected value of 833%. The work explains the improvement techniques and the deficiencies of the old process.
Keywords: packaging, lean tools, six sigma, DMAIC methodology, SIPOC
Procedia PDF Downloads 428
8263 Evaluating the Relationship between Overconfidence of Senior Managers and Abnormal Cash Fluctuations with Respect to Financial Flexibility in Companies Listed in Tehran Stock Exchange
Authors: Hadi Mousavi, Majid Davoudi Nasr
Abstract:
Executives can maximize profits by recognizing the factors that affect investment and using them to obtain the optimal level of investment. Inefficient markets have shortcomings that can impact the optimal level of investment, leading to over-investment or under-investment. In the present study, the relationship between the overconfidence of senior managers and abnormal cash fluctuations with respect to financial flexibility was evaluated for companies listed on the Tehran Stock Exchange from 2009 to 2013. The sample consists of 84 companies, selected by a systematic elimination method, for a total of 420 company-years. EVIEWS software was used to test the research hypotheses using linear regression and correlation coefficients. After designing and testing the hypotheses, it was concluded that there was a significant relationship between the overconfidence of senior managers and abnormal cash fluctuations, and that this relationship was not significant at any level of financial flexibility. Moreover, the findings showed that there was a significant relationship between senior managers' overconfidence and positive abnormal cash flow fluctuations in firms, and that this relationship is significant only for companies with high financial flexibility. Finally, the results indicate that there is no significant relationship between senior managers' overconfidence and negative cash flow abnormalities overall, while the relationship between senior managers' overconfidence and negative cash flow fluctuations for companies with high financial flexibility was confirmed.
Keywords: abnormal cash fluctuations, overconfidence of senior managers, financial flexibility, accounting
Procedia PDF Downloads 131
8262 Effectiveness and Efficiency of Unified Philippines Accident Reporting and Database System in Optimizing Road Crash Data Usage with Various Stakeholders
Authors: Farhad Arian Far, Anjanette Q. Eleazar, Francis Aldrine A. Uy, Mary Joyce Anne V. Uy
Abstract:
The Unified Philippine Accident Reporting and Database System (UPARDS) is a system newly developed by Dr. Francis Aldrine Uy of the Mapua Institute of Technology. Its main purpose is to provide an advanced road accident investigation tool and a record keeping and analysis system for stakeholders such as the Philippine National Police (PNP), Metro Manila Development Authority (MMDA), Department of Public Works and Highways (DPWH), Department of Health (DOH), and insurance companies. The system is composed of two components: a mobile application for road accident investigators that takes advantage of available technology to advance data gathering, and a web application that integrates all accident data for the use of all stakeholders. The researchers, with the cooperation of the PNP's Vehicle Traffic Investigation Sector of the City of Manila, conducted field testing of the application in fifteen (15) accident cases. Simultaneously, the researchers also distributed surveys to the PNP, Manila Doctors Hospital, and Charter Ping An Insurance Company to gather their insights regarding the web application. The survey was designed on the basis of an information systems theory called the Technology Acceptance Model. The results of the surveys revealed that the respondents were greatly satisfied with the visualization and functions of the applications, which proved to be effective and far more efficient than the conventional pen-and-paper method. In conclusion, the pilot study was able to address the need for improvement of the current system.
Keywords: accident, database, investigation, mobile application, pilot testing
Procedia PDF Downloads 442
8261 Development of the Religious Out-Group Aggression Scale
Authors: Rylle Evan Gabriel Zamora, Micah Dennise Malia, Abygail Deniese Villabona
Abstract:
When examining studies on aggression, studies of individual aggression vastly outnumber those of group aggression. Given that aggression is violent and cyclical in nature, and given the number of violent events that have occurred in recent years, the study of group aggression is relevant now more than ever. This discrepancy is paralleled in the number of valid and reliable psychological tests that measure group aggression. Throughout history, one of the biggest causes of group-based violence and aggression has been religion. This is particularly true within the context of the Philippines, which has a large number of religious groups. Thus, this study aimed to develop a standardized test that measures an individual's tendency to be aggressive toward those who are in conflict with his or her religious beliefs. The study employs a test development design with a qualitative phase to ensure the validity of the scale, and was divided into three phases. First, a pilot test was conducted in which an instrument designed from the existing literature was administered to 173 respondents from the four largest religious groups in the Philippines. After extensive factor analysis and reliability testing, new items were formed from qualitative data collected from eight participants, two per religious group. The final testing integrated all statistically significant items from the first phase and the newly formed items from the second phase, which were then administered to 200 respondents. The results were tested further for reliability using Cronbach's alpha and for validity through factor analysis. The items that proved significant were then combined to create a final instrument that may be used in future studies.
Keywords: religious aggression, group aggression, test development, psychological assessment, social psychology
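The reliability step mentioned in this abstract relies on Cronbach's alpha. A minimal sketch of the computation is given below; the item scores are invented for illustration and the study's actual data are not reproduced here:

```python
def cronbach_alpha(items):
    """Cronbach's alpha for internal consistency.
    items[i][j] = score of respondent j on item i.
    alpha = k/(k-1) * (1 - sum(item variances) / variance(total scores))."""
    k = len(items)

    def var(xs):  # sample variance (n-1 denominator)
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

    totals = [sum(col) for col in zip(*items)]  # each respondent's total score
    return k / (k - 1) * (1 - sum(var(it) for it in items) / var(totals))
```

Values of alpha near 1 indicate that the items measure the same underlying construct; 0.7 is a common acceptability threshold in scale development.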
Procedia PDF Downloads 295
8260 Empirical Decomposition of Time Series of Power Consumption
Authors: Noura Al Akkari, Aurélie Foucquier, Sylvain Lespinats
Abstract:
Load monitoring is a management process for energy consumption aimed at energy savings and energy efficiency. Non-Intrusive Load Monitoring (NILM) is one load monitoring method used for disaggregation purposes. NILM is a technique for identifying individual appliances based on the analysis of whole-residence data retrieved from the main power meter of the house. Our NILM framework starts with data acquisition, followed by data preprocessing, then event detection and feature extraction, and finally general appliance modeling and identification. The event detection stage is a core component of the NILM process, since event detection techniques lead to the extraction of appliance features, which are required for the accurate identification of household devices. In this research work, we aim at developing a new event detection methodology with accurate load disaggregation to extract appliance features. The extracted time-domain features are used to tune general appliance models for the appliance identification and classification steps. We use unsupervised algorithms such as Dynamic Time Warping (DTW). The proposed method relies on detecting the areas of operation of each residential appliance based on the power demand, and then detecting the times at which each selected appliance changes its state. In order to fit the capabilities of existing smart meters, we work on low sampling rate data with a frequency of (1/60) Hz. The data is simulated with the Load Profile Generator (LPG) software, which has not previously been used for NILM purposes in the literature. LPG is a numerical tool that simulates the behaviour of people inside the house to generate residential energy consumption data. The proposed event detection method targets low consumption loads that are difficult to detect, and it facilitates the extraction of the specific features used for general appliance modeling.
In addition, the identification process includes unsupervised techniques such as DTW. To the best of our knowledge, few unsupervised techniques have been employed with low sampling rate data, in comparison to the many supervised techniques used for such cases. We extract a power interval within which the operation of the selected appliance falls, along with a time vector of the values delimiting the state transitions of the appliance. Appliance signatures are then formed from the extracted power, geometrical, and statistical features. Afterwards, these signatures are used to tune general model types for appliance identification using unsupervised algorithms. The method is evaluated using both data simulated with LPG and the real-world Reference Energy Disaggregation Dataset (REDD). For this, we compute performance metrics based on the confusion matrix, considering accuracy, precision, recall, and error rate. The performance of our methodology is then compared with detection techniques previously used in the literature, such as techniques based on statistical variations and abrupt changes (Variance Sliding Window and Cumulative Sum).
Keywords: general appliance model, non intrusive load monitoring, events detection, unsupervised techniques
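Dynamic Time Warping, used here for unsupervised matching of appliance signatures, can be sketched with the textbook dynamic-programming recursion. This is a generic illustration, not the authors' implementation:

```python
def dtw_distance(a, b):
    """Dynamic Time Warping distance between two 1-D sequences,
    using the standard O(len(a) * len(b)) dynamic-programming recursion."""
    n, m = len(a), len(b)
    INF = float("inf")
    D = [[INF] * (m + 1) for _ in range(n + 1)]
    D[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i][j] = cost + min(D[i - 1][j],      # insertion
                                 D[i][j - 1],      # deletion
                                 D[i - 1][j - 1])  # match
    return D[n][m]
```

Unlike the Euclidean distance, DTW aligns sequences that are shifted in time, which is why a time-shifted copy of the same switching event still matches its signature with zero cost.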
Procedia PDF Downloads 82
8259 Intelligent Grading System of Apple Using Neural Network Arbitration
Authors: Ebenezer Obaloluwa Olaniyi
Abstract:
In this paper, an intelligent system has been designed to grade apples as either defective or healthy for use in food processing. The paper is divided into two phases. In the first phase, image processing techniques were employed to extract the necessary features from the apple images. These techniques include grayscale conversion and segmentation, where a threshold value is chosen to separate the foreground of the images from the background; edge detection was then employed to bring out the features in the images. The extracted features were fed into a neural network in the second phase, a classification phase in which the neural network is employed to distinguish defective apples from healthy ones. In this phase, the network was trained with backpropagation and tested as a feed-forward network. The recognition rate obtained shows that our system is more accurate and faster than previous work.
Keywords: image processing, neural network, apple, intelligent system
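The first-phase pipeline described here, grayscale conversion, threshold segmentation, and edge detection, can be sketched as follows. This is a generic illustration on nested lists rather than the authors' implementation: the luminosity weights are the common ITU-R BT.601 coefficients, and the edge detector is a deliberately crude neighbour-difference pass rather than a full Sobel operator:

```python
def to_grayscale(rgb):
    """Luminosity grayscale conversion of an RGB image (H x W nested lists of (r, g, b))."""
    return [[0.299 * r + 0.587 * g + 0.114 * b for (r, g, b) in row] for row in rgb]

def threshold(gray, t):
    """Binary segmentation: foreground (1) where intensity exceeds t."""
    return [[1 if px > t else 0 for px in row] for row in gray]

def edges(binary):
    """Mark pixels whose right or lower neighbour differs (a crude edge map)."""
    h, w = len(binary), len(binary[0])
    return [[1 if ((x + 1 < w and binary[y][x] != binary[y][x + 1]) or
                   (y + 1 < h and binary[y][x] != binary[y + 1][x])) else 0
             for x in range(w)] for y in range(h)]
```

The per-pixel statistics of such binary and edge maps (foreground area, edge counts, and similar) are the kind of features that could then be fed to a classifier.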
Procedia PDF Downloads 398
8258 Reinforced Concrete, Problems and Solutions: A Literature Review
Authors: Omar Alhamad, Waleed Eid
Abstract:
Reinforced concrete is concrete embedded with steel so that the two materials act together in resisting forces. Reinforcement rods or mesh carry tensile, shear, and sometimes high compressive stresses in a concrete structure. Reinforced concrete is subject to many natural problems and construction errors, which reduce its efficiency or usefulness. These problems include cracks, earthquakes, high temperatures or fires, as well as corrosion of the reinforcing steel inside the concrete. Historic buildings and monuments also call for specific techniques for their preservation. This research presents general information about reinforced concrete and its pros and cons, and then presents a review of recently published research on reinforced concrete: how to preserve it, proposed solutions and treatments for its problems, and how to raise its efficiency and quality for a longer service life. These studies have provided advanced, modern methods and techniques in the field of reinforced concrete.
Keywords: reinforced concrete, treatment, concrete, corrosion, seismic, cracks
Procedia PDF Downloads 152
8257 An Integrated Cognitive Performance Evaluation Framework for Urban Search and Rescue Applications
Authors: Antonio D. Lee, Steven X. Jiang
Abstract:
A variety of techniques and methods are available to evaluate cognitive performance in Urban Search and Rescue (USAR) applications. However, traditional cognitive performance evaluation techniques typically incorporate either the conscious or systematic aspect, failing to take into consideration the subconscious or intuitive aspect. This leads to incomplete measures and produces ineffective designs. In order to fill the gaps in past research, this study developed a theoretical framework to facilitate the integration of situation awareness (SA) and intuitive pattern recognition (IPR) to enhance the cognitive performance representation in USAR applications. This framework provides guidance to integrate both SA and IPR in order to evaluate the cognitive performance of the USAR responders. The application of this framework will help improve the system design.
Keywords: cognitive performance, intuitive pattern recognition, situation awareness, urban search and rescue
Procedia PDF Downloads 328
8256 Quantitative Characterization of Single Orifice Hydraulic Flat Spray Nozzle
Authors: Y. C. Khoo, W. T. Lai
Abstract:
The single orifice hydraulic flat spray nozzle was evaluated with two global imaging techniques to characterize various aspects of the resulting spray: high resolution flow visualization and Particle Image Velocimetry (PIV). A CCD camera with 29 million pixels was used to capture shadowgraph images to resolve ligament formation and collapse as well as droplet interaction. Quantitative analysis was performed to obtain sizing information for the droplets and ligaments. The camera was then used with a PIV system to evaluate the overall velocity field of the spray, from nozzle exit to droplet discharge. PIV images were further post-processed to determine the inclusion angle of the spray. The results of these investigations provided significant quantitative understanding of the spray structure, on the basis of which a detailed understanding of the spray behavior was achieved.
Keywords: spray, flow visualization, PIV, shadowgraph, quantitative sizing, velocity field
Procedia PDF Downloads 382
8255 Short-Term Physiological Evaluation of Augmented Reality System for Thanatophobia Psychotherapy
Authors: Kais Siala, Mohamed Kharrat, Mohamed Abid
Abstract:
Exposure therapies encourage patients to gradually begin facing painful memories of their trauma in order to reduce fear and anxiety. In this context, virtual reality techniques are widely used for the treatment of different kinds of phobia. The particular case of fear of death (thanatophobia) is addressed in this paper. For this purpose, we propose to simulate a Near-Death Experience (NDE) using augmented reality techniques, and in particular the Out-of-Body Experience (OBE), which is the first stage of an NDE. In this paper, we present the technical aspects of this simulation as well as its short-term impact in terms of physiological measures. The non-linear Poincaré plot is used to describe the difference in Heart Rate Variability between the In-Body and Out-of-Body conditions.
Keywords: Out-of-Body simulation, physiological measure, augmented reality, phobia psychotherapy, HRV, Poincaré plot
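The Poincaré plot analysis mentioned above maps each RR interval against the next one and summarizes the resulting cloud with the SD1 (short-term) and SD2 (long-term) descriptors. A minimal sketch with synthetic RR data follows; the paper's actual recordings are of course not reproduced here:

```python
import math

def poincare_sd(rr):
    """SD1/SD2 descriptors of the Poincaré plot of RR intervals (ms).
    SD1 is the spread perpendicular to the identity line (short-term HRV);
    SD2 is the spread along it (long-term HRV)."""
    x = rr[:-1]  # RR(n)
    y = rr[1:]   # RR(n+1)
    d1 = [(b - a) / math.sqrt(2) for a, b in zip(x, y)]  # perpendicular axis
    d2 = [(b + a) / math.sqrt(2) for a, b in zip(x, y)]  # identity-line axis

    def sd(v):  # sample standard deviation
        m = sum(v) / len(v)
        return math.sqrt(sum((u - m) ** 2 for u in v) / (len(v) - 1))

    return sd(d1), sd(d2)
```

A condition that suppresses beat-to-beat variability, for instance, would show up as a reduced SD1 with a comparatively preserved SD2.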
Procedia PDF Downloads 308
8254 Simulating the Dynamics of E-waste Production from Mobile Phone: Model Development and Case Study of Rwanda
Authors: Rutebuka Evariste, Zhang Lixiao
Abstract:
Mobile phone sales and stocks have shown exponential growth globally in recent years, with the number of mobile phones produced each year surpassing one billion in 2007. This soaring growth deserves sufficient attention to the related e-waste, regionally and globally, since about 40% of a phone's total weight is metallic, of which 12 elements are identified as highly hazardous and 12 as less harmful. Different studies and methods have attempted to estimate the number of obsolete mobile phones, but none has developed a dynamic model or handled the discrepancies resulting from improper approaches and errors in the input data. The aim of this study was to develop a comprehensive dynamic system model for simulating the dynamics of e-waste production from mobile phones, regardless of country or region, and to overcome the previous errors. The logistic model method combined with the STELLA program was used to carry out this study. A simulation for Rwanda was then conducted and compared with other countries' results for model testing and validation. Rwanda had about 1.5 million obsolete mobile phones, corresponding to 125 tons of waste, in 2014, with e-waste production expected to peak in 2017. By 2020 it is expected to reach 4.17 million obsolete phones and 351.97 tons of waste, with an environmental impact intensity 21 times that of 2005. It is concluded through model testing and validation that the present dynamic model is competent and able to deal with mobile phone e-waste production, as it has addressed the questions raised by previous studies from the Czech Republic, Iran, and China.
Keywords: carrying capacity, dematerialization, logistic model, mobile phone, obsolescence, similarity, Stella, system dynamics
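The logistic model at the core of this abstract can be sketched outside STELLA in a few lines. The carrying capacity, initial stock, growth rate, and lifespan below are illustrative placeholders, not the fitted Rwandan values; the obsolete flow is a crude delayed-inflow approximation rather than the study's full stock-and-flow structure:

```python
import math

def logistic_stock(t, K, N0, r):
    """Logistic growth of the in-use mobile phone stock:
    carrying capacity K, initial stock N0, intrinsic growth rate r."""
    return K / (1 + (K - N0) / N0 * math.exp(-r * t))

def obsolete_flow(t, lifespan, K, N0, r, dt=1.0):
    """Rough estimate of phones becoming obsolete at time t:
    the units that entered the stock one average lifespan earlier."""
    if t < lifespan:
        return 0.0
    s = t - lifespan
    return logistic_stock(s + dt, K, N0, r) - logistic_stock(s, K, N0, r)
```

Under this sketch, the e-waste flow peaks one lifespan after the inflection point of the stock curve, which is the qualitative shape behind a statement such as "production peak in 2017".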
Procedia PDF Downloads 344
8253 Crashworthiness Optimization of an Automotive Front Bumper in Composite Material
Authors: S. Boria
Abstract:
In recent years, the crashworthiness of an automotive body structure can be improved from the very beginning of the design stage, thanks to the development of specific optimization tools. It is well known that finite element codes can help the designer investigate the crash performance of structures under dynamic impact. Therefore, by coupling a nonlinear mathematical programming procedure and statistical techniques with FE simulations, it is possible to optimize the design with a reduced number of analytical evaluations. In engineering applications, optimization methods that are based on statistical techniques and utilize estimated models, called meta-models, are quickly spreading. A meta-model is an approximation of a detailed simulation model built from a dataset of inputs identified by the design of experiments (DOE); the number of simulations needed to build it depends on the number of variables. Among the various meta-modeling techniques, the Kriging method appears excellent in accuracy, robustness, and efficiency compared to the others when applied to crashworthiness optimization. Such a meta-model was therefore applied in this work to improve the structural optimization of a composite bumper for a racing car subjected to frontal impact. The specific energy absorption is the objective function to maximize, and the geometrical parameters, subject to some design constraints, are the design variables. LS-DYNA was interfaced with the LS-OPT tool to find the optimized solution through the use of a domain reduction strategy. With the Kriging meta-model, the crashworthiness characteristics of the composite bumper were improved.
Keywords: composite material, crashworthiness, finite element analysis, optimization
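The Kriging surrogate at the heart of this workflow can be sketched in its simplest form. The snippet below is not the LS-OPT implementation: it is a zero-mean simple-Kriging (Gaussian process) predictor with a squared-exponential correlation model and an assumed, fixed correlation length, whereas production tools also estimate the trend and hyperparameters from the DOE data:

```python
import numpy as np

def kriging_fit_predict(X, y, Xq, length=1.0, noise=1e-9):
    """Simple Kriging (zero-mean GP regression): predict the mean response
    at query points Xq from DOE samples (X, y), using a squared-exponential
    correlation model with correlation length `length`."""
    X, y, Xq = np.atleast_2d(X), np.asarray(y, float), np.atleast_2d(Xq)

    def k(A, B):  # correlation matrix between two point sets
        d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return np.exp(-0.5 * d2 / length**2)

    K = k(X, X) + noise * np.eye(len(X))      # small nugget for conditioning
    return k(Xq, X) @ np.linalg.solve(K, y)   # predicted mean at Xq
```

In an optimization loop, an optimizer queries this cheap predictor instead of running a full FE crash simulation, and new DOE points are added where the surrogate is most promising or most uncertain.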
Procedia PDF Downloads 256
8252 A Data-Driven Compartmental Model for Dengue Forecasting and Covariate Inference
Authors: Yichao Liu, Peter Fransson, Julian Heidecke, Jonas Wallin, Joacim Rockloev
Abstract:
Dengue, a mosquito-borne viral disease, poses a significant public health challenge in endemic tropical and subtropical countries, including Sri Lanka. To reveal insights into the complex dynamics of this disease and study its drivers, a model capable of both robust forecasting and insightful inference of drivers, while capturing the co-circulation of several virus strains, is essential. However, existing studies mostly focus on only one aspect at a time and do not integrate or carry insights across these siloed approaches. While mechanistic models are developed to capture immunity dynamics, they are often oversimplified and lack integration of all the diverse drivers of disease transmission. On the other hand, purely data-driven methods lack the constraints imposed by immuno-epidemiological processes, making them prone to overfitting and inference bias. This research presents a hybrid model that combines machine learning techniques with mechanistic modelling to overcome the limitations of existing approaches. Leveraging eight years of newly reported dengue case data, along with socioeconomic factors such as human mobility, weekly climate data from 2011 to 2018, genetic data detecting the introduction and presence of new strains, and estimates of seropositivity for different districts in Sri Lanka, we derive a data-driven vector (SEI) to human (SEIR) model across 16 regions of Sri Lanka at the weekly time scale. By conducting ablation studies, the lag effects of time-varying climate factors, allowing delays of up to 12 weeks, were determined. The model demonstrates superior predictive performance over a pure machine learning approach at lead times of 5 and 10 weeks on data withheld from model fitting. It further reveals several interesting, interpretable findings on drivers while adjusting for the dynamics and influences of immunity and the introduction of a new strain.
The study uncovers strong influences of socioeconomic variables: population density, mobility, household income, and rural vs. urban population. It reveals substantial sensitivity to the diurnal temperature range and precipitation, while mean temperature and humidity appear less important in the study location. Additionally, the model indicates sensitivity to the vegetation index, both maximum and average. Predictions on the test data reveal high model accuracy. Overall, this study advances the knowledge of dengue transmission in Sri Lanka and demonstrates the importance of hybrid modelling techniques that use biologically informed model structures with flexible, data-driven estimates of model parameters. The findings show the potential both for inference of drivers in situations of complex disease dynamics and for robust forecasting.
Keywords: compartmental model, climate, dengue, machine learning, social-economic
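The human-side SEIR backbone of the model described above can be sketched with an explicit-Euler update. This is a deliberately minimal sketch: the rates below are illustrative placeholders, and the study's actual model additionally couples a vector (SEI) component and lets covariates drive the transmission rate:

```python
def seir_step(s, e, i, r, beta, sigma, gamma, dt=1.0):
    """One explicit-Euler step of a well-mixed SEIR model:
    S -> E at rate beta*S*I/N, E -> I at rate sigma*E, I -> R at rate gamma*I."""
    n = s + e + i + r
    new_inf = beta * s * i / n * dt   # new exposures
    new_sym = sigma * e * dt          # exposed becoming infectious
    new_rec = gamma * i * dt          # infectious recovering
    return (s - new_inf,
            e + new_inf - new_sym,
            i + new_sym - new_rec,
            r + new_rec)

def simulate(days, s, e, i, r, beta, sigma, gamma):
    """Simulate the SEIR trajectory for a number of daily steps."""
    traj = [(s, e, i, r)]
    for _ in range(days):
        s, e, i, r = seir_step(s, e, i, r, beta, sigma, gamma)
        traj.append((s, e, i, r))
    return traj
```

In a hybrid model of the kind the abstract describes, `beta` would not be a constant but a learned function of climate lags, mobility, and the other covariates, while `sigma` and `gamma` stay anchored to dengue's known latent and infectious periods.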
Procedia PDF Downloads 84
8251 The Effect of Satisfaction with the Internet on Online Shopping Attitude with TAM Approach Controlled by Gender
Authors: Velly Anatasia
Abstract:
In the last few decades, extensive research has been conducted into information technology (IT) adoption, testing a series of factors considered essential for improved diffusion. Some studies analyze IT characteristics such as usefulness, ease of use, and/or security; others focus on the emotions and experiences of users; and a third group attempts to determine the importance of socioeconomic user characteristics such as gender, educational level, and income. The situation is similar regarding e-commerce, where the majority of studies have taken for granted the importance of including these variables when studying e-commerce adoption, as they were believed to explain or forecast who buys, or who will buy, on the internet. Nowadays, the internet has become a marketplace suitable for all ages, incomes, and both genders, and thus the prejudices linked to the advisability of selling certain products should be revised. The objective of this study is to test whether socioeconomic characteristics of experienced e-shoppers, such as gender, really moderate the effect of their perceptions on online shopping behavior. Current development of the online environment and the experience individuals have acquired from previous e-purchases can attenuate or even nullify the effect of these characteristics. The individuals analyzed are experienced e-shoppers, i.e., individuals who often make purchases on the internet. The Technology Acceptance Model (TAM) was broadened to include previous use of the internet and perceived self-efficacy. The perceptions and behavior of e-shoppers are based on their own experiences. The information was obtained through self-administered questionnaires distributed to respondents accustomed to using the internet. The causal model is estimated using structural equation modeling (SEM) techniques, followed by tests of the moderating effect of socioeconomic variables on perceptions and online shopping behavior.
The expected findings of this study indicate that gender moderates neither the influence of previous use of the internet nor the perceptions of e-commerce. In short, it does not condition the behavior of the experienced e-shopper.
Keywords: internet shopping, age groups, gender, income, electronic commerce
Procedia PDF Downloads 337
8250 An Information-Based Approach for Preference Method in Multi-Attribute Decision Making
Authors: Serhat Tuzun, Tufan Demirel
Abstract:
Multi-Criteria Decision Making (MCDM) is the modelling of real-life problems to support decision makers who are faced with conflicting alternatives in making an optimal decision. MCDM problems can be classified into two main categories, based on their purposes and data types: Multi-Attribute Decision Making (MADM) and Multi-Objective Decision Making (MODM). Although various MADM techniques have been developed for the problems encountered, their methodology is limited in modelling real life. Moreover, objective results are hard to obtain, and findings are generally derived from subjective data. New and modified techniques have been developed using approaches such as fuzzy logic, but these more comprehensive techniques, even though they model real life better, have not found a place in real-world applications because their complex structure makes them hard to apply. These constraints restrict the development of MADM. This study aims to conduct a comprehensive analysis of preference methods in MADM and to propose an information-based approach. For this purpose, a detailed literature review has been conducted, and current approaches, with their advantages and disadvantages, have been analyzed. Then the approach is introduced. In this approach, performance values of the criteria are calculated in two steps: first by determining the distribution of each attribute and standardizing the values, then by calculating the information content of each attribute as informational energy.
Keywords: literature review, multi-attribute decision making, operations research, preference method, informational energy
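The two-step computation described above can be sketched as follows, reading "informational energy" in the Onicescu sense, E = Σ pᵢ². The binning scheme and the sample attributes are illustrative assumptions, not the paper's exact procedure:

```python
# Sketch: standardize an attribute, discretize it, and score it by its
# informational energy E = sum(p_i^2). Binning choice is an assumption.
from collections import Counter

def informational_energy(values, bins=4):
    """Discretize a standardized attribute and return sum(p_i^2)."""
    mean = sum(values) / len(values)
    sd = (sum((x - mean) ** 2 for x in values) / len(values)) ** 0.5 or 1.0
    z = [(x - mean) / sd for x in values]          # standardization step
    lo, hi = min(z), max(z)
    width = (hi - lo) / bins or 1.0
    labels = [min(int((x - lo) / width), bins - 1) for x in z]
    counts = Counter(labels)
    n = len(values)
    return sum((c / n) ** 2 for c in counts.values())

uniform_attr = [1, 2, 3, 4, 5, 6, 7, 8]   # spread-out attribute: low energy
concentrated = [5, 5, 5, 5, 5, 5, 5, 9]   # concentrated attribute: high energy
```

Higher informational energy indicates a more concentrated (less informative) attribute distribution, which is the quantity the proposed preference method weighs.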
Procedia PDF Downloads 224
8249 Effect of Relaxation Techniques on Immunological Properties of Breast Milk
Authors: Ahmed Ali Torad
Abstract:
Background: Breast feeding maintains the maternal-fetal immunological link, favours the transmission of immune competence from the mother to her infant, and is considered an important contributory factor to the neonatal immune defense system. Purpose: This study was conducted to investigate the effect of relaxation techniques on the immunological properties of breast milk. Subjects and Methods: Thirty breast-feeding mothers, each with a single, mature infant without any complications, participated in the study. Subjects were recruited from the outpatient clinic of the obstetric department of El Kasr El-Aini university hospital in Cairo. Mothers were randomly divided into two equal groups by coin toss. Group A (relaxation training group, experimental): 15 women who received a relaxation training program in addition to breast feeding and nutritional advice. Group B (control group): 15 women who received breast feeding and nutritional advice only. Results: Mean maternal age was 28.4 ± 3.68 and 28.07 ± 4.09 years for groups A and B, respectively. There were statistically significant differences between pre and post values for cortisol level, IgA level, leucocyte count, and infant's weight and height, and statistically significant differences between the two groups only in the post values of the immunological variables (cortisol, IgA, leucocyte count). Conclusion: We conclude that relaxation techniques have a statistically significant effect on the immunological properties of breast milk.
Keywords: relaxation, breast, milk, immunology, lactation
Procedia PDF Downloads 118
8248 Classifying Turbomachinery Blade Mode Shapes Using Artificial Neural Networks
Authors: Ismail Abubakar, Hamid Mehrabi, Reg Morton
Abstract:
Currently, extensive signal analysis is performed in order to evaluate the structural health of turbomachinery blades. This approach is constrained by time and the availability of qualified personnel. Thus, new approaches to blade dynamics identification that provide faster and more accurate results are sought after. Generally, modal analysis is employed to acquire the dynamic properties of a vibrating turbomachinery blade and is widely adopted in condition monitoring of blades. The analysis provides useful information on the different modes of vibration and natural frequencies by exploring the different shapes the blade can take up during vibration, since every mode shape has a corresponding natural frequency. Experimental modal testing and finite element analysis are the traditional methods used to evaluate mode shapes, but they have limited applicability to real-life scenarios and so cannot easily support a robust condition monitoring scheme. Real-time mode shape evaluation requires rapid evaluation at low computational cost, for which the traditional techniques are unsuitable. In this study, an artificial neural network is developed to evaluate the mode shape of a lab-scale rotating blade assembly, using results from finite element modal analysis as training data. The network performance evaluation shows that an artificial neural network (ANN) is capable of mapping the correlation between natural frequencies and mode shapes. This is achieved without the need for extensive signal analysis. The approach offers the advantages that the network can classify mode shapes in real time, is simple to implement, and predicts accurately. The work paves the way for further development of a robust condition monitoring system that incorporates real-time mode shape evaluation.
Keywords: modal analysis, artificial neural network, mode shape, natural frequencies, pattern recognition
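The frequency-to-mode-shape mapping idea can be illustrated with a single sigmoid neuron trained by gradient descent — a drastic simplification of the multilayer network used in the study. The synthetic "FE modal analysis" frequency signatures, the feature scaling constants, and the two-class setup (e.g. bending vs. torsion) are all assumptions for illustration only:

```python
# Toy classifier: map two natural frequencies to a mode-shape class with a
# single sigmoid neuron trained by SGD. Data and scaling are assumptions.
import math
import random

random.seed(0)

def sample(mode):
    """Hypothetical first two natural frequencies (Hz) for each mode shape."""
    base = (110.0, 260.0) if mode == 0 else (150.0, 340.0)
    return [f + random.gauss(0.0, 5.0) for f in base], mode

data = [sample(m) for m in (0, 1) * 200]   # stand-in for FE modal results

w, b = [0.0, 0.0], 0.0                     # weights of the single neuron

def predict(freqs):
    z = b + sum(wi * (f - 200.0) / 100.0 for wi, f in zip(w, freqs))
    return 1.0 / (1.0 + math.exp(-z))      # probability of mode class 1

for _ in range(200):                       # training epochs
    for freqs, label in data:
        err = predict(freqs) - label       # gradient of log-loss w.r.t. z
        for i, f in enumerate(freqs):
            w[i] -= 0.5 * err * (f - 200.0) / 100.0
        b -= 0.5 * err

accuracy = sum((predict(f) > 0.5) == bool(y) for f, y in data) / len(data)
```

A real system would use a multilayer network over many frequencies and more mode classes, but the training loop is the same in spirit.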
Procedia PDF Downloads 156
8247 Comparison Between Fuzzy and P&O Control for MPPT for Photovoltaic System Using Boost Converter
Authors: M. Doumi, A. Miloudi, A. G. Aissaoui, K. Tahir, C. Belfedal, S. Tahir
Abstract:
Studies of photovoltaic systems are increasing extensively because they are a large, secure, essentially inexhaustible, and broadly available future energy supply. However, the output power of the photovoltaic modules is influenced by the intensity of solar radiation, the temperature of the solar cells, and so on. Therefore, to maximize the efficiency of the photovoltaic system, it is necessary to track the maximum power point of the PV array, for which a Maximum Power Point Tracking (MPPT) technique is used. Among the available MPPT techniques are perturb and observe (P&O) and the fuzzy logic controller (FLC). The fuzzy control method has been compared with perturb and observe (P&O), one of the most widely used conventional methods in this area. Both techniques have been analyzed and simulated. MPPT using fuzzy logic shows superior performance and more reliable control than the P&O technique for this application.
Keywords: photovoltaic system, MPPT, perturb and observe, fuzzy logic
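The P&O baseline compared above is simple enough to sketch in full: perturb the operating voltage, observe the power, and keep going in the same direction while power rises. The quadratic P-V curve and step size below are toy stand-ins for a real PV array and boost-converter duty-cycle step:

```python
# Minimal perturb-and-observe (P&O) MPPT loop on a toy P-V curve whose
# maximum power point is at v = 17.0 V. Curve and step size are assumptions.

def pv_power(v):
    """Toy P-V characteristic: peak power 60 W at 17.0 V."""
    return max(0.0, 60.0 - (v - 17.0) ** 2)

def perturb_and_observe(v, p_prev, direction, step=0.2):
    """One P&O iteration: keep perturbing the same way if power rose,
    reverse direction if it fell."""
    p = pv_power(v)
    d = direction if p >= p_prev else -direction
    return v + d * step, p, d

v, p, d = 12.0, 0.0, 1.0          # start well below the MPP
for _ in range(100):
    v, p, d = perturb_and_observe(v, p, d)
```

After convergence the operating point oscillates in a small band around the MPP — the steady-state ripple that fuzzy logic control is reported above to reduce.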
Procedia PDF Downloads 604
8246 Performance of the New Laboratory-Based Algorithm for HIV Diagnosis in Southwestern China
Authors: Yanhua Zhao, Chenli Rao, Dongdong Li, Chuanmin Tao
Abstract:
The Chinese Centers for Disease Control and Prevention (CCDC) issued a new laboratory-based algorithm for HIV diagnosis in April 2016, which initially screens with a combination HIV-1/HIV-2 antigen/antibody fourth-generation immunoassay (IA) followed, when reactive, by an HIV-1/HIV-2 undifferentiated antibody IA in duplicate. Reactive specimens with concordant results undergo supplemental tests with western blots or HIV-1 nucleic acid tests (NATs), while specimens with non-reactive or discordant duplicate results receive HIV-1 NATs, p24 antigen tests, or follow-up tests after 2-4 weeks. However, little data evaluating the application of the new algorithm have been reported to date. This study evaluated the performance of the new laboratory-based HIV diagnostic algorithm in an inpatient population of Southwest China over its initial 6 months, compared with the old algorithm. Plasma specimens collected from inpatients from May 1, 2016, to October 31, 2016, were screened for HIV infection by both the new testing algorithm and the old version. The sensitivity and specificity of the algorithms and the differences in the categorized numbers of plasma specimens were calculated. Under the new algorithm, 170 of the total 52,749 plasma specimens were confirmed as HIV-positive (0.32%). The sensitivity and specificity of the new algorithm were 100% (170/170) and 100% (52,579/52,579), respectively, while the old algorithm identified 167 HIV-1 positive specimens, with sensitivity 98.24% (167/170) and specificity 100% (52,579/52,579). Three acute HIV-1 infections (AHIs) and two early HIV-1 infections (EHIs) were identified by the new algorithm; the former were missed by the old procedure. Compared with the old version, the new algorithm produced fewer WB-indeterminate results (2 vs. 16, p = 0.001), which led to fewer follow-up tests.
Therefore, the new HIV testing algorithm is more sensitive for detecting acute HIV-1 infections while maintaining the ability to verify established HIV-1 infections, and it dramatically decreases the number of WB-indeterminate specimens.
Keywords: algorithm, diagnosis, HIV, laboratory
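The screening flow described above can be written out as a single triage function. The result labels and the exact handling of discordant duplicates are schematic simplifications of the CCDC algorithm, not a clinical implementation:

```python
# Schematic triage of one specimen through the new laboratory algorithm.
# Labels and branch details are simplified assumptions for illustration.

def new_algorithm(ag_ab_reactive, ab_dup=None,
                  supplemental_positive=False, nat_positive=False):
    """Return a triage decision for one plasma specimen.

    ag_ab_reactive: 4th-generation Ag/Ab screening IA result.
    ab_dup: duplicate antibody IA results, e.g. ("reactive", "reactive").
    supplemental_positive: WB / NAT confirmation outcome, when run.
    nat_positive: HIV-1 NAT outcome for unresolved specimens, when run.
    """
    if not ag_ab_reactive:
        return "negative"                           # screening non-reactive
    if ab_dup == ("reactive", "reactive"):          # concordant reactive
        return ("HIV-1 infection" if supplemental_positive
                else "not confirmed")
    # discordant or concordant non-reactive duplicates: resolve with
    # NAT / p24 antigen / 2-4 week follow-up
    return "acute HIV-1 infection" if nat_positive else "follow-up"
```

The NAT branch is what lets the new algorithm catch the three acute infections that the antibody-only old procedure missed.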
Procedia PDF Downloads 401
8245 Synchronous Reference Frame and Instantaneous P-Q Theory Based Control of Unified Power Quality Conditioner for Power Quality Improvement of Distribution System
Authors: Ambachew Simreteab Gebremedhn
Abstract:
Context: The paper explores the use of synchronous reference frame theory (SRFT) and instantaneous reactive power theory (IRPT) based control of the Unified Power Quality Conditioner (UPQC) for improving power quality in distribution systems. Research Aim: To investigate the performance of different control configurations of the UPQC using SRFT and IRPT for mitigating power quality issues in distribution systems. Methodology: The study compares three control techniques (SRFT-IRPT, SRFT-SRFT, IRPT-IRPT) implemented in the series and shunt active filters of the UPQC. Findings: Results indicate the effectiveness of SRFT and IRPT based control techniques in addressing power quality problems such as voltage sags, swells, unbalance, voltage harmonics, and current harmonics in distribution systems. Theoretical Importance: The study provides insights into the application of SRFT and IRPT in improving power quality, specifically in mitigating unbalanced voltage sags, where conventional methods fall short. Data Collection: Data were collected under the various control algorithms, from simulation in MATLAB Simulink and from real-time operation executed on RT-LAB, with experimental results obtained accordingly. Analysis Procedures: The performance of the UPQC under the different control algorithms was analyzed to evaluate the effectiveness of SRFT and IRPT based control techniques in mitigating power quality issues. Questions Addressed: How do SRFT and IRPT based control techniques compare in improving power quality in distribution systems? What is the impact of different control configurations on the performance of the UPQC? Conclusion: The study demonstrates the efficacy of SRFT and IRPT based control of the UPQC in mitigating power quality issues in distribution systems, highlighting its potential for enhancing voltage and current quality.
Keywords: power quality, UPQC, shunt active filter, series active filter, non-linear load, RT-LAB, MATLAB
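At the core of SRF-based control is the abc-to-dq (Park) transformation, which maps three-phase quantities onto a rotating frame where the fundamental component appears as DC and can be filtered easily. A pure-Python sketch in the amplitude-invariant convention (sign conventions for the q-axis vary between texts, so treat this form as one common choice):

```python
# Park (abc -> dq) transform, amplitude-invariant form. A balanced,
# unit-amplitude three-phase set maps to d = 1, q = 0 at any angle.
import math

def abc_to_dq(a, b, c, theta):
    """Map instantaneous three-phase values onto the rotating dq frame."""
    k = 2.0 / 3.0
    d = k * (a * math.cos(theta)
             + b * math.cos(theta - 2 * math.pi / 3)
             + c * math.cos(theta + 2 * math.pi / 3))
    q = -k * (a * math.sin(theta)
              + b * math.sin(theta - 2 * math.pi / 3)
              + c * math.sin(theta + 2 * math.pi / 3))
    return d, q

# Balanced unit-amplitude phase set sampled at angle theta:
theta = 0.7
a = math.cos(theta)
b = math.cos(theta - 2 * math.pi / 3)
c = math.cos(theta + 2 * math.pi / 3)
d, q = abc_to_dq(a, b, c, theta)
```

In the UPQC controllers compared above, reference signals for the series and shunt filters are extracted from the dq (or p-q) components before being transformed back to the abc frame.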
Procedia PDF Downloads 10
8244 Dynamic Bandwidth Allocation in Fiber-Wireless (FiWi) Networks
Authors: Eman I. Raslan, Haitham S. Hamza, Reda A. El-Khoribi
Abstract:
Fiber-Wireless (FiWi) networks are a promising candidate for future broadband access networks. These networks combine an optical network as the back end, where different passive optical network (PON) technologies are realized, with a wireless network as the front end, where different wireless technologies are adopted, e.g., LTE, WiMAX, Wi-Fi, and Wireless Mesh Networks (WMNs). The convergence of optical and wireless technologies requires designing architectures with robust, efficient, and effective bandwidth allocation schemes. Different bandwidth allocation algorithms have been proposed for FiWi networks, aiming to enhance the different segments of these networks, including the wireless and optical subnetworks. In this survey, we focus on differentiating between bandwidth allocation algorithms according to the segment of the FiWi network they enhance, classifying these techniques into wireless, optical, and hybrid bandwidth allocation techniques.
Keywords: fiber-wireless (FiWi), dynamic bandwidth allocation (DBA), passive optical networks (PON), media access control (MAC)
Procedia PDF Downloads 531
8243 Annexing the Strength of Information and Communication Technology (ICT) for Real-time TB Reporting Using TB Situation Room (TSR) in Nigeria: Kano State Experience
Authors: Ibrahim Umar, Ashiru Rajab, Sumayya Chindo, Emmanuel Olashore
Abstract:
INTRODUCTION: Kano is the most populous state in Nigeria and one of the two states with the highest TB burden in the country. The state notifies an average of more than 8,000 TB cases quarterly and had the highest yearly notification of all the states in Nigeria from 2020 to 2022. The contribution of the state TB program to national TB notification varied between 9% and 10% quarterly from the first quarter of 2022 to the second quarter of 2023. The Kano State TB Situation Room is an innovative platform for timely data collection, collation, and analysis for informed decisions in the health system. During the second National TB Testing Week (NTBTW) of 2023, the Kano TB program aimed at early TB detection, prevention, and treatment, and the state TB Situation Room provided an avenue for coordination and surveillance through real-time data reporting, review, analysis, and use. OBJECTIVES: To assess the role of an innovative information and communication technology platform for real-time TB reporting during the second National TB Testing Week in Nigeria, 2023, and to showcase the NTBTW data cascade analysis using the TSR as an innovative ICT platform. METHODOLOGY: The state TB program deployed a real-time virtual dashboard for NTBTW reporting, analysis, and feedback. A data room team was set up, which received real-time data using a Google link. Data received were analyzed using the Power BI analytic tool with a statistical alpha level of significance of <0.05. RESULTS: At the end of the week-long activity, using the real-time dashboard with onsite mentorship of the field workers, the state TB program screened 52,054 of the 72,112 individuals eligible for screening (72% screening rate). A total of 9,910 presumptive TB clients were identified and evaluated for TB, leading to the diagnosis of 445 patients with TB (4.5% yield from presumptives) and the placement of 435 TB patients on treatment (98% enrolment).
CONCLUSION: The TB Situation Room (TBSR) has been a great asset to the Kano State TB Control Program in meeting the growing demand for timely data reporting in TB and other global health responses. The use of real-time surveillance data during the 2023 NTBTW has in no small measure improved the TB response and feedback in Kano State. Scaling up this intervention to other disease areas, states, and nations is a positive step towards global TB eradication.
Keywords: tuberculosis (TB), national TB testing week (NTBTW), TB situation room (TSR), information communication technology (ICT)
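The cascade percentages reported in the results can be recomputed directly from the raw counts — a useful sanity check for any situation-room dashboard (note that 445/9,910 is about 4.5%, which rounds loosely to the ~5% sometimes quoted):

```python
# Recompute the NTBTW data cascade from the raw counts reported above.
eligible, screened = 72_112, 52_054
presumptive, diagnosed, enrolled = 9_910, 445, 435

screening_rate = screened / eligible       # fraction of eligible screened
yield_rate = diagnosed / presumptive       # TB diagnoses per presumptive
enrolment_rate = enrolled / diagnosed      # diagnosed patients on treatment
```

A dashboard would surface these three ratios (72%, ~4.5%, 98%) as the headline cascade indicators.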
Procedia PDF Downloads 71
8242 Iris Cancer Detection System Using Image Processing and Neural Classifier
Authors: Abdulkader Helwan
Abstract:
Iris cancer, also called intraocular melanoma, is a cancer that starts in the iris, the colored part of the eye that surrounds the pupil. There is a need for an accurate and cost-effective iris cancer detection system, since the techniques currently available are still not efficient. The combination of image processing and artificial neural networks is highly effective for the diagnosis and detection of iris cancer. Image processing techniques improve diagnosis by enhancing the quality of the images so that physicians can diagnose properly, while neural networks can help in deciding whether the eye is cancerous or not. This paper aims to develop an intelligent system that simulates human visual detection of intraocular melanoma. The suggested system combines both image processing techniques and neural networks. The images are first converted to grayscale, filtered, and then segmented using the Prewitt edge detection algorithm to detect the iris and sclera circles and the cancer. Principal component analysis is used to reduce the image size and to extract features. Those features then serve as inputs to a neural network capable of deciding whether the eye is cancerous or not, through experience acquired over many training iterations on different normal and abnormal eye images during the training phase. Normal images were obtained from a public database available on the internet, "Mile Research", while the abnormal ones were obtained from another database, "eyecancer". The experimental results for the proposed system show a high accuracy of 100% for detecting cancer and making the right decision.
Keywords: iris cancer, intraocular melanoma, cancerous, Prewitt edge detection algorithm, sclera
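The Prewitt edge-detection stage of the pipeline can be sketched with a hand-rolled convolution on a toy grayscale image; a real system would use an image library and add the filtering, segmentation, and PCA stages. The "dark iris disc" test image is an assumption for illustration:

```python
# Prewitt gradient magnitude on a 2-D grayscale image (list of lists).
# Only the edge-detection step of the described pipeline is shown.

PREWITT_X = [[-1, 0, 1], [-1, 0, 1], [-1, 0, 1]]
PREWITT_Y = [[-1, -1, -1], [0, 0, 0], [1, 1, 1]]

def prewitt_magnitude(img):
    """Return the Prewitt gradient magnitude (borders left at 0)."""
    h, w = len(img), len(img[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            gx = gy = 0.0
            for dy in range(-1, 2):
                for dx in range(-1, 2):
                    px = img[y + dy][x + dx]
                    gx += PREWITT_X[dy + 1][dx + 1] * px
                    gy += PREWITT_Y[dy + 1][dx + 1] * px
            out[y][x] = (gx * gx + gy * gy) ** 0.5
    return out

# Toy image: dark "iris" disc (radius 2) on a bright background.
img = [[0 if (x - 4) ** 2 + (y - 4) ** 2 <= 4 else 255
        for x in range(9)] for y in range(9)]
edges = prewitt_magnitude(img)
```

The response is zero in flat regions (inside the disc and in the background) and large on the disc boundary, which is what lets the pipeline localize the iris and sclera circles.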
Procedia PDF Downloads 503
8241 Use of Multivariate Statistical Techniques for Water Quality Monitoring Network Assessment, Case of Study: Jequetepeque River Basin
Authors: Jose Flores, Nadia Gamboa
Abstract:
Proper water quality management requires the establishment of a monitoring network; therefore, evaluation of the efficiency of water quality monitoring networks is needed to ensure high-quality data collection of critical chemical quality parameters. Unfortunately, in some Latin American countries, water quality monitoring programs are not sustainable in terms of recording historical data or maintaining environmentally representative sites, wasting time, money, and valuable information. In this study, multivariate statistical techniques, such as principal component analysis (PCA) and hierarchical cluster analysis (HCA), are applied to identify the most significant monitoring sites as well as the critical water quality parameters in the monitoring network of the Jequetepeque River basin in northern Peru. The Jequetepeque River basin, like others in Peru, exhibits socio-environmental conflicts due to the economic activities developed in the area. Water pollution by trace elements in the upper part of the basin is mainly related to mining activity, while agricultural land loss due to salinization is caused by the extensive use of groundwater in the lower part of the basin. Since the 1980s, water quality in the basin has been assessed intermittently by public and private organizations, and recently the National Water Authority established permanent water quality networks in 45 basins in Peru. Although many countries use multivariate statistical techniques for assessing water quality monitoring networks, those instruments have never been applied for that purpose in Peru. For this reason, the main contribution of this study is to demonstrate that multivariate statistical techniques can serve as instruments for optimizing monitoring networks, using the smallest number of monitoring sites and the most significant water quality parameters, which would reduce cost concerns and improve water quality management in Peru.
The main socioeconomic activities and the principal stakeholders related to water management in the basin are also identified. Finally, water quality management programs are discussed in terms of their efficiency and sustainability.
Keywords: PCA, HCA, Jequetepeque, multivariate statistical
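The HCA half of the workflow above can be sketched as bare-bones agglomerative single-linkage clustering of monitoring sites. The 2-D "site vectors" stand in for standardized water-quality parameter vectors (in practice one would cluster on PCA scores), and the two-group layout is an assumption:

```python
# Agglomerative single-linkage clustering of monitoring sites, pure Python.
# Site vectors and cluster count are illustrative assumptions.

def single_linkage(points, n_clusters):
    """Merge the two nearest clusters until n_clusters remain."""
    clusters = [[i] for i in range(len(points))]

    def dist(a, b):
        # single linkage: minimum pairwise Euclidean distance
        return min(sum((points[i][k] - points[j][k]) ** 2
                       for k in range(len(points[0]))) ** 0.5
                   for i in a for j in b)

    while len(clusters) > n_clusters:
        a, b = min(((i, j) for i in range(len(clusters))
                    for j in range(i + 1, len(clusters))),
                   key=lambda ij: dist(clusters[ij[0]], clusters[ij[1]]))
        clusters[a] = clusters[a] + clusters[b]
        del clusters[b]
    return [sorted(c) for c in clusters]

# Two well-separated groups of hypothetical sites (e.g. upper vs lower basin).
sites = [(0.1, 0.2), (0.0, 0.1), (0.2, 0.0), (5.0, 5.1), (5.2, 4.9)]
groups = single_linkage(sites, 2)
```

Sites that cluster together are statistically redundant, which is exactly the information used to prune a monitoring network down to its most significant sites.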
Procedia PDF Downloads 355
8240 AI-Powered Models for Real-Time Fraud Detection in Financial Transactions to Improve Financial Security
Authors: Shanshan Zhu, Mohammad Nasim
Abstract:
Financial fraud continues to be a major threat to financial institutions across the world, causing colossal monetary losses and undermining public trust. Fraud prevention techniques based on hard rules have become ineffective against recently evolving fraud patterns. Against this background, the present study probes distinct methodologies that exploit emergent AI-driven techniques to further strengthen fraud detection. We compare the performance of generative adversarial networks and graph neural networks with other popular techniques, like gradient boosting, random forests, and neural networks, and recommend integrating these state-of-the-art models into one robust, flexible, and smart system for real-time anomaly and fraud detection. To this end, we designed synthetic data and then conducted pattern recognition along with unsupervised and supervised learning analyses on the transaction data to identify suspicious activities. Using actual financial statistics, we compare the performance of our model against conventional models in accuracy, speed, and adaptability. The results of this study illustrate a strong need to integrate state-of-the-art, AI-driven fraud detection solutions into frameworks that are highly relevant to the financial domain, and they alert banks and related financial institutions to the urgency of rapidly implementing these advanced technologies to maintain a high level of security.
Keywords: AI-driven fraud detection, financial security, machine learning, anomaly detection, real-time fraud detection
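None of the heavyweight models named above fits in a few lines, but the underlying anomaly-detection idea can be illustrated with a z-score flag on synthetic transaction amounts — the kind of statistical baseline any of the AI models would be benchmarked against. The data, threshold, and injected fraud amounts are all illustrative assumptions:

```python
# Baseline z-score anomaly flag on synthetic transaction amounts.
# Data and the 3-sigma threshold are illustrative assumptions only.
import random

random.seed(1)
normal = [random.gauss(50.0, 10.0) for _ in range(1000)]  # typical amounts
fraud = [500.0, 750.0, 900.0]                             # injected outliers
amounts = normal + fraud

mean = sum(amounts) / len(amounts)
sd = (sum((a - mean) ** 2 for a in amounts) / len(amounts)) ** 0.5

def is_anomalous(amount, threshold=3.0):
    """Flag a transaction whose amount lies > threshold std devs from mean."""
    return abs(amount - mean) / sd > threshold

flagged = [a for a in amounts if is_anomalous(a)]
```

A GAN- or GNN-based detector earns its complexity only where fraud is not separable on a single feature like this; the baseline defines the bar it has to beat on accuracy, speed, and adaptability.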
Procedia PDF Downloads 42
8239 Auteur 3D Filmmaking: From Hitchcock’s Protrusion Technique to Godard’s Immersion Aesthetic
Authors: Delia Enyedi
Abstract:
Throughout film history, the regular return of 3D cinema has been discussed in connection with crises caused by the advent of television or the competition of the Internet. In addition, the three waves of stereoscopic 3D (from 1952 up to 1983) and its current digital version have been blamed for adding a challenging technical distraction to the viewing experience. By discussing the films Dial M for Murder (1954) and Goodbye to Language (2014), the paper analyzes the response of recognized auteurs to the use of 3D techniques in filmmaking. For Alfred Hitchcock, the solution to attaining perceptual immersion paradoxically resided in restraining the signature effect of 3D, namely protrusion. In Jean-Luc Godard’s vision, 3D techniques allowed him to explore perceptual absorption by means of depth of field, which he had long advocated as central to cinema. Thus, both directors contribute to the foundation of an auteur aesthetic in 3D filmmaking.
Keywords: Alfred Hitchcock, authorship, 3D filmmaking, Jean-Luc Godard, perceptual absorption, perceptual immersion
Procedia PDF Downloads 290