Search results for: foreign real estate investment
1392 A Conceptual Framework for Vulnerability Assessment of Climate Change Impact on Oil and Gas Critical Infrastructures in the Niger Delta
Authors: Justin A. Udie, Subhes C. Bhattacharyya, Leticia Ozawa-Meida
Abstract:
The impact of climate change is severe in the Niger Delta, and critical oil and gas infrastructures are vulnerable. This is partly due to the lack of a specific impact assessment framework for assessing impact indices on both existing and new infrastructures. The purpose of this paper is to develop a framework for the assessment of climate change impact on critical oil and gas infrastructure in the region. Comparative and documentary methods, as well as analysis of existing frameworks, were used to develop a flexible, integrated and conceptual four-dimensional framework comprising: 1. Scoping – the theoretical identification of inherent climate burdens, review of exposure and adaptive capacities, and delineation of critical infrastructure; 2. Vulnerability assessment – a systematic procedure for the assessment of infrastructure vulnerability, providing real-time re-scoping and the practical need for data collection, analysis and review. Physical examination of systems is encouraged to complement the scoped data and ascertain the level of exposure to relevant climate risks in the area; 3. New infrastructure – considers infrastructures that are still at the developmental stage, suggesting the inclusion of flexible adaptive capacities in the original design of infrastructures in line with climate threats and projections; 4. Mainstreaming climate impact assessment into the government’s environmental decision-making approach. Though this framework is designed specifically for the estimation of exposure, adaptive capacities and criticality of vulnerable oil and gas infrastructures in the Niger Delta to climate burdens, it is recommended to researchers and experts as a first-hand, generic and practicable tool which can be used for the assessment of other infrastructures perceived as critical and vulnerable. The paper does not provide further tools that sync with the methodological approach but presents pointers upon which a pragmatic methodology can be developed.
Keywords: adaptation, assessment, conceptual, climate, change, framework, vulnerability
Procedia PDF Downloads 317
1391 Transit-Oriented Development as a Tool for Building Social Capital
Authors: Suneet Jagdev
Abstract:
Rapid urbanization has resulted in informal settlements on the periphery of nearly all big cities in the developing world due to the lack of affordable housing options in the city. Residents of these communities have to travel long distances to get to work or search for jobs in these cities, and women, children and elderly people are excluded from urban opportunities. Affordable and safe public transport facilities can help them expand their possibilities. The aim of this research is to identify social capital as another important element of livable cities that can be protected and nurtured through transit-oriented development, as a tool to provide real resources that can help these transit-oriented communities become self-sustainable. Social capital refers to the collective value of all social networks and the inclinations that arise from these networks to do things for each other. It is one of the key components responsible for building and maintaining democracy. Public spaces, pedestrian amenities and social equity are the other essential parts of transit-oriented development models that are analyzed in this research. The data has been collected through the analysis of several case studies, the urban design strategies implemented and their impact on the perception and on the community’s experience, and, finally, how these focused on social capital. Case studies have been evaluated on several metrics, namely ecological, financial, energy consumption, etc. A questionnaire and other tools were designed to collect data to analyze the research objective and reflect the dimension of social capital. The results of the questionnaire indicated that almost all the participants have a positive attitude towards this dimension of building social capital with the aid of transit-oriented development. Statistical data on the identified key motivators against demographic characteristics have been generated based on the case studies used for the paper. The findings suggested that there is a direct relation between urbanization, transit-oriented developments, and social capital.
Keywords: better opportunities, low-income settlements, social capital, social inclusion, transit oriented development
Procedia PDF Downloads 331
1390 Associations of Vitamin D Receptor Polymorphisms with Coronary Artery Diseases
Authors: Elham Sharif, Nasser Rizk, Sirin Abu Aqel, Ofelia Masoud
Abstract:
Background: Previous studies have investigated the association of the rs1544410, rs7975232 and rs731236 polymorphisms in the vitamin D receptor (VDR) gene and their impact on diseases such as cancer, diabetes and hypertension in different ethnic backgrounds. Aim: The aim of this study is to investigate the association between VDR polymorphisms, using three SNPs (rs1544410, rs7975232 and rs731236), and the severity of the significant lesion in coronary arteries among angiographically diagnosed CAD. Methods: A prospective-retrospective study was conducted on 192 CAD patients enrolled from the cardiology department, Heart Hospital HMC, grouped into 96 subjects with significant stenosis and 96 with non-significant stenosis, with ages between 30 and 75 years. Genotyping was performed for the SNPs rs1544410, rs7975232 and rs731236 using a TaqMan assay on the ABI 7500 real-time PCR system in the Health Sciences Labs at Qatar University Biomedical Research Center. Results: Both groups had matched age and gender distributions, but patients with significant stenosis had significantly higher BMI (p=0.047), smoking status (p=0.039), FBS (p=0.031), CK-MB (p=0.025) and troponin (p=0.002) than the patients with non-significant lesions. Among the traditional risk factors, smoking increased the odds of a severe stenotic lesion in CAD patients by 1.984, with a 95% CI between 1.024 and 7.063 (p=0.042). Hardy-Weinberg equilibrium testing showed deviations of rs1544410 and rs731236 among the study subjects. The most frequent rs7975232 genotype was AA among the significant stenosis patients, while the heterozygous AC genotype was the most frequent among the non-significant stenosis group. Carriers of the CC genotype of rs7975232 had an increased risk of a significant coronary artery stenotic lesion of 1.83 with 95% CI (1.020-3.280), p=0.043. No association was found between rs7975232 and vitamin D or VDBP. Conclusion: There is a significant association between rs7975232 and the severity of the CAD lesion. Carriers of the CC genotype of rs7975232 have an increased risk of significant coronary artery atherosclerotic lesions, especially patients with a smoking history, independent of vitamin D.
Keywords: vitamin D, vitamin D receptor, polymorphism, coronary heart disease
Procedia PDF Downloads 313
1389 Symbolic Status of Architectural Identity: Example of Famagusta Walled City
Authors: Rafooneh Mokhtarshahi Sani
Abstract:
This study explores how the residents of a conserved urban area have used goods and ideas as resources to maintain an enviable architectural identity. Whereas conserved urban quarters are seen as role models for maintaining architectural identity, the article describes how their residents try to give a contemporary modern image to their homes. It is argued that despite the efforts of authorities and decision makers to keep and preserve the traditional architectural identity in conserved urban areas, people have already moved on and have adjusted their homes to their preferred architectural taste. This conflict of interests has put the future of architectural identity in such places at risk. The thesis is that, on the one hand, a struggle over a desirable symbolic status in identity formation is taking place, and, on the other, it is continuously widening the gap between the real and ideal identity in the built environment. The study then analytically connects the concept of symbolic status to current identity debates. As empirical research, this study uses systematic social and physical observation methods to describe and categorize the characteristics of settlements in the Walled City of Famagusta which symbolically represent the modern house. The Walled City is a cultural heritage site, most of whose urban context has been conserved. Traditional houses in this area demonstrate the identity of North Cyprus architecture. The conserved residential buildings, however, have either been abandoned or undergone changes by their users to present the ideal image of contemporary life. In the concluding section, the article discusses the differences between the symbolic status of people and that of authorities in defining a culturally valuable contemporary home. It also raises the question of whether we can talk at all about architectural identity in terms of conserving the traditional style, and how we may do so given the dynamic nature of identity and the necessity of its acceptance by the users.
Keywords: symbolic status, architectural identity, conservation, facades, Famagusta walled city
Procedia PDF Downloads 356
1388 Reimagining the Management of Telco Supply Chain with Blockchain
Authors: Jeaha Yang, Ahmed Khan, Donna L. Rodela, Mohammed A. Qaudeer
Abstract:
Traditional supply chain silos still exist today due to the difficulty of establishing trust between various partners and technological barriers across industries. Companies lose opportunities and revenue and inadvertently make poor business decisions, resulting in further challenges. Blockchain technology can bring a new level of transparency by sharing information in a distributed ledger in a decentralized manner, creating a basis of trust for business. Blockchain is a loosely coupled, hub-style communication network in which trading partners can work indirectly with each other for simpler integration, yet work together through the orchestration of their supply chain operations under a coherent process that is developed jointly. A blockchain increases efficiencies, lowers costs, and improves interoperability to strengthen and automate the supply chain management process while all partners share the risk. The blockchain ledger is built to track the inventory lifecycle for supply chain transparency and keeps a journal of inventory movement for real-time reconciliation. State design patterns are used to capture the lifecycle (behavior) of inventory management as a state machine for a common, transparent and coherent process, which creates an opportunity for trading partners to become more responsive in terms of changes or improvements in process, to reconcile discrepancies, and to comply with internal governance and external regulations. It enables end-to-end, inter-company visibility at the unit level for more accurate demand planning, with better insight into order fulfillment and replenishment.
Keywords: supply chain management, inventory traceability, perpetual inventory system, inventory lifecycle, blockchain, inventory consignment, supply chain transparency, digital thread, demand planning, hyperledger fabric
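As an illustration of the state design pattern the abstract describes for the inventory lifecycle, the following minimal Python sketch models one inventory unit as a state machine. The states (Ordered, Shipped, Received, Reconciled) and events are hypothetical placeholders rather than the authors' actual lifecycle; in the proposed system, each transition would additionally be recorded on the shared blockchain ledger.

```python
from abc import ABC, abstractmethod

class InventoryState(ABC):
    """A state in the inventory lifecycle; transitions mirror ledger events."""
    @abstractmethod
    def next_state(self, event: str) -> "InventoryState": ...

class Ordered(InventoryState):
    def next_state(self, event):
        return Shipped() if event == "ship" else self

class Shipped(InventoryState):
    def next_state(self, event):
        return Received() if event == "receive" else self

class Received(InventoryState):
    def next_state(self, event):
        return Reconciled() if event == "reconcile" else self

class Reconciled(InventoryState):
    def next_state(self, event):
        return self  # terminal state

class InventoryItem:
    """Tracks one unit; each transition would be appended to the shared ledger."""
    def __init__(self, sku: str):
        self.sku = sku
        self.state: InventoryState = Ordered()
        self.history = [type(self.state).__name__]

    def apply(self, event: str):
        self.state = self.state.next_state(event)
        self.history.append(type(self.state).__name__)

item = InventoryItem("SKU-001")
for ev in ["ship", "receive", "reconcile"]:
    item.apply(ev)
print(item.history)  # ['Ordered', 'Shipped', 'Received', 'Reconciled']
```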
Procedia PDF Downloads 90
1387 A Review on Applications of Evolutionary Algorithms to Reservoir Operation for Hydropower Production
Authors: Nkechi Neboh, Josiah Adeyemo, Abimbola Enitan, Oludayo Olugbara
Abstract:
Evolutionary algorithms are techniques extensively used in the planning and management of water resources and systems. They are useful in finding optimal solutions to water resources problems, considering the complexities involved in the analysis. River basin management is an essential area that involves the management of upstream, river inflow and outflow, including downstream aspects of a reservoir. Water, as a scarce resource, is needed by humans and the environment for survival, and its management involves many complexities. Management of this scarce resource is necessary for proper distribution to competing users in a river basin, and this presents complex problems involving many constraints and conflicting objectives. Evolutionary algorithms are very useful in solving this kind of complex problem with ease: they are easy to use, fast and robust, with many other advantages. Many applications of evolutionary algorithms, which are population-based search algorithms, are discussed. Different methodologies involved in the modeling and simulation of water management problems in river basins are explained. It was found from this work that different evolutionary algorithms are suitable for different problems. Therefore, appropriate algorithms are suggested for different methodologies and applications based on the results of previous studies reviewed. It is concluded that evolutionary algorithms, with wide applications in water resources management, are viable and easy algorithms for most of the applications. The results suggest that evolutionary algorithms, applied in the right application areas, can provide superior solutions for river basin management, especially in reservoir operations, irrigation planning and management, streamflow forecasting and real-time applications. Future directions of this work are suggested. This study will assist decision makers and stakeholders in choosing the best evolutionary algorithm for varied optimization issues in water resources management.
Keywords: evolutionary algorithm, multi-objective, reservoir operation, river basin management
Procedia PDF Downloads 491
1386 Rare Differential Diagnostic Dilemma
Authors: Angelis P. Barlampas
Abstract:
Theoretical background: Disorders of fixation and rotation of the large intestine result in the existence of its parts in ectopic anatomical positions. In the presence of symptomatology, the clinical picture is complicated by the possible symptomatology of the neighboring anatomical structures, and a differential diagnostic problem arises. Aim: The purpose of this work is to demonstrate the difficulty of revealing the real cause of abdominal pain in cases of anatomical variants and the decisive contribution of imaging, especially computed tomography. Methods: A patient came to the emergency room because of acute pain in the right hypochondrium. Clinical examination revealed tenderness in the gallbladder area and a positive Murphy's sign. An ultrasound exam depicted a normal gallbladder, and the patient was referred for a CT scan. Results: A flexible, unfixed ascending colon and cecum, located in the anatomical region of the right mesentery. Opacities of the surrounding peritoneal fat and a small linear concentration of fluid can be seen. There was an appendix of normal anteroposterior diameter, with the presence of air in its lumen and without clear signs of inflammation. There was an impression of possible inflammatory swelling at the base of the appendix (differential diagnosis: partial volume phenomenon, etc.). Linear opacities of the peritoneal fat in the region of the second loop of the duodenum. Multiple diverticula throughout the colon. Differential diagnosis: Inflammation of the base of the appendix, diverticulitis of the cecum-ascending colon, a rare case of second duodenal loop ulcer, tuberculosis, terminal ileitis, pancreatitis, torsion of an unfixed cecum-ascending colon, embolism or thrombosis of a vascular intestinal branch. Final diagnosis: An unfixed cecum-ascending colon exhibiting diverticulitis.
Keywords: unfixed cecum-ascending colon, abdominal pain, malrotation, abdominal CT, congenital anomalies
Procedia PDF Downloads 57
1385 Investigation of Jupiter’s Galilean Moons
Authors: Revaz Chigladze
Abstract:
The purpose of the research is to investigate the surfaces of Jupiter's Galilean moons: which moon has the most uniform surface among them, what the difference is between the front (in the direction of motion) and back sides of each moon's surface, and what the temporal variations of the moons are. Since 1981, the E. Kharadze National Astrophysical Observatory of Georgia has been conducting polarimetric (P) and photometric (M) observations of Jupiter's Galilean moons with telescopes of different diameters (40 cm and 125 cm), combined with the polarimeter ASEP-78 and, more recently, a latest-generation photometer with a polarimeter and the modern light receiver SBIG. As analysis of the observed material shows, the parameters P and M depend on α, the phase angle of the moon (satellite); L, the orbital longitude of the moon (satellite); λ, the wavelength; and t, the period of observation, i.e., P = P(α, L, λ, t) and, similarly, M = M(α, L, λ, t). Based on the analysis of the observed material, the following were studied for Jupiter's Galilean moons: the dependence of the degree of linear polarization on the phase angle for different wavelengths; the dependence of the degree of polarization on the orbital longitude; the dependence between the degree of polarization and the wavelength; the time dependence of the degree of polarization; and the relationship between photometric and polarimetric characteristics (including establishing correlation). From the analysis of the obtained results: the degree of polarization of Jupiter's Galilean moons near opposition differs significantly from zero. Europa appears to have the most uniform surface, and Callisto the least uniform. Time variations are most characteristic of Io, which confirms the presence of volcanic activity on its surface. Based on the observed material, the intensity of light reflected from the front hemisphere of the first three moons (Io, Europa, and Ganymede) is less than the intensity of light reflected from the rear hemisphere, while in the case of Callisto it is the opposite. The paper provides a convincing (natural, real) explanation of this fact.
Keywords: Galilean moons, polarization, degree of polarization, photometry, front and rear hemispheres
Procedia PDF Downloads 101
1384 Lignin Pyrolysis to Value-Added Chemicals: A Mechanistic Approach
Authors: Binod Shrestha, Sandrine Hoppe, Thierry Ghislain, Phillipe Marchal, Nicolas Brosse, Anthony Dufour
Abstract:
The thermochemical conversion of lignin has received increasing interest in the frame of different biorefinery concepts for the production of chemicals or energy. A better understanding of the physical and chemical conversion of lignin is needed for feeder and reactor designs. In-situ rheology reveals the viscoelastic behaviour of lignin upon thermal conversion. The softening, re-solidification (char formation), swelling and shrinking behaviours are quantified during pyrolysis in real time [1]. The in-situ rheology of an alkali lignin (Protobind 1000) was conducted in a high-torque controlled-strain rheometer from 35°C to 400°C with a heating rate of 5°C min⁻¹. Swelling, through a glass phase transition overlapping with depolymerization, and solidification (crosslinking and “char” formation) are the two main phenomena observed during lignin pyrolysis. The onset temperatures of softening and solidification for this lignin were found to be 141°C and 248°C, respectively. An ex-situ characterization of lignin/char residues obtained at different temperatures after quenching in the rheometer gives a clear understanding of the pathway of lignin degradation. The lignin residues were sampled at the mid-point temperatures of the softening and solidification ranges to study the chemical transformations under way. Elemental analysis, FTIR and solid-state NMR were conducted after quenching the solid residues (lignin/char). The quenched solid was also extracted with a suitable solvent, followed by acetylation and GPC-UV analysis. The combination of 13C NMR and GPC-UV reveals the depolymerization followed by crosslinking of lignin/char. NMR and FTIR provide the evolution of functional moieties with temperature. The physical and chemical mechanisms occurring during lignin pyrolysis are thus accounted for in this study, thanks to these complementary methods.
Keywords: pyrolysis, bio-chemicals, valorization, mechanism, softening, solidification, crosslinking, rheology, spectroscopic methods
Procedia PDF Downloads 424
1383 Implementation of Fuzzy Version of Block Backward Differentiation Formulas for Solving Fuzzy Differential Equations
Authors: Z. B. Ibrahim, N. Ismail, K. I. Othman
Abstract:
Fuzzy differential equations (FDEs) play an important role in modelling many real-life phenomena. FDEs are used to model the behaviour of problems that are subject to uncertainty and the vague or imprecise information that constantly arises in mathematical models in various branches of science and engineering. These uncertainties have to be taken into account in order to obtain a more realistic model, and many of these models are difficult, and sometimes impossible, to solve analytically. Thus, many authors have attempted to extend or modify existing numerical methods developed for solving ordinary differential equations (ODEs) into fuzzy versions suited to solving FDEs. In this paper, we propose a fuzzy version of the three-point block method based on block backward differentiation formulas (FBBDF) for the numerical solution of first-order FDEs. The three-point block FBBDF method is implemented with a uniform step size and produces three new approximations simultaneously at each integration step using the same back values. A Newton iteration of the FBBDF is formulated, and the implementation is based on the predictor and corrector formulas in PECE mode. For greater efficiency of the block method, the coefficients of the FBBDF are stored at the start of the program. The proposed FBBDF is validated through numerical results on some standard problems found in the literature, and comparisons are made with existing fuzzy versions of the modified Simpson and Euler methods in terms of the accuracy of the approximated solutions. The numerical results show that the FBBDF method performs better in terms of accuracy than the Euler method when solving FDEs.
Keywords: block, backward differentiation formulas, first order, fuzzy differential equations
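The FBBDF coefficients themselves are not given in the abstract, but the underlying idea can be sketched: a first-order fuzzy initial value problem is solved by integrating the lower and upper bounds of an α-cut of the fuzzy initial condition. The sketch below uses SciPy's generic BDF integrator as a stand-in for the authors' three-point block formulas, and a right-hand side f(t, y) = y that is monotone increasing in y, so the two endpoint ODEs decouple; both choices are illustrative assumptions.

```python
import numpy as np
from scipy.integrate import solve_ivp

def solve_fuzzy_ivp(alpha, t_span, t_eval):
    """Integrate lower/upper alpha-cut bounds of y' = y, y(0) = (0.8, 1.0, 1.2)."""
    y0_lower = 0.8 + 0.2 * alpha   # left endpoint of the triangular number's cut
    y0_upper = 1.2 - 0.2 * alpha   # right endpoint
    f = lambda t, y: y             # monotone in y, so endpoint ODEs decouple
    lo = solve_ivp(f, t_span, [y0_lower], method="BDF", t_eval=t_eval)
    up = solve_ivp(f, t_span, [y0_upper], method="BDF", t_eval=t_eval)
    return lo.y[0], up.y[0]

t_eval = np.linspace(0.0, 1.0, 11)
lower, upper = solve_fuzzy_ivp(alpha=0.5, t_span=(0.0, 1.0), t_eval=t_eval)
print(f"y(1) bounds at alpha=0.5: [{lower[-1]:.4f}, {upper[-1]:.4f}]")
# exact bounds: [0.9*e, 1.1*e] ≈ [2.4465, 2.9901]
```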
Procedia PDF Downloads 319
1382 Deterministic and Stochastic Modeling of a Micro-Grid Management for Optimal Power Self-Consumption
Authors: D. Calogine, O. Chau, S. Dotti, O. Ramiarinjanahary, P. Rasoavonjy, F. Tovondahiniriko
Abstract:
Mafate is a natural cirque in the north-western part of Reunion Island, without an electrical grid or road network. A micro-grid concept is being experimented with in this area, composed of photovoltaic production combined with electrochemical batteries, in order to meet the local population's electricity demands through self-consumption. This work develops a discrete model as well as a stochastic model in order to reach an optimal equilibrium between production and consumption for a cluster of houses. The management of the energy leads to a large linear programming system, where the time interval of interest is 24 hours. The experimental data are the solar production, the stored energy, and the parameters of the different electrical devices and batteries. The unknown variables to evaluate are the consumption of the various electrical services, the energy drawn from and stored in the batteries, and the inhabitants’ planning wishes. The objective is to fit the solar production to the electrical consumption of the inhabitants, with an optimal use of the energy in the batteries, while satisfying as widely as possible the users' planning requirements. In the discrete model, the different parameters and solutions of the linear programming system are deterministic scalars, whereas in the stochastic approach, the data parameters and the linear programming solutions become random variables, the distributions of which can be imposed or established by estimation from samples of real observations or from samples of optimal discrete equilibrium solutions.
Keywords: photovoltaic production, power consumption, battery storage resources, random variables, stochastic modeling, estimation of probability distributions, mixed integer linear programming, smart micro-grid, self-consumption of electricity
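As a rough illustration of the kind of linear program described, the following sketch schedules a battery over a 24-hour horizon so that solar production covers a demand profile with minimal unserved load. The solar and load profiles, battery parameters, and the omission of the planning-preference variables are all simplifying assumptions; the authors' actual system is much larger.

```python
import numpy as np
from scipy.optimize import linprog

T = 24
rng = np.random.default_rng(0)
solar = np.clip(4.0 * np.sin(np.pi * (np.arange(T) - 6) / 12), 0, None)  # kW, assumed profile
load = 1.0 + 0.5 * rng.random(T)                                         # kW, assumed demand
E_max, P_max, eta, s_init = 8.0, 2.0, 0.9, 4.0   # battery size/power/efficiency (assumed)

# Variable layout: x = [charge(T), discharge(T), soc(T), shed(T), spill(T)]
n = 5 * T
idx = lambda block, t: block * T + t
A_eq, b_eq = np.zeros((2 * T, n)), np.zeros(2 * T)
for t in range(T):
    # Power balance: solar + discharge + shed = load + charge + spill
    A_eq[t, idx(1, t)] = 1.0; A_eq[t, idx(3, t)] = 1.0
    A_eq[t, idx(0, t)] = -1.0; A_eq[t, idx(4, t)] = -1.0
    b_eq[t] = load[t] - solar[t]
    # Storage dynamics: soc[t] - soc[t-1] - eta*charge[t] + discharge[t]/eta = 0
    r = T + t
    A_eq[r, idx(2, t)] = 1.0
    A_eq[r, idx(0, t)] = -eta
    A_eq[r, idx(1, t)] = 1.0 / eta
    if t > 0:
        A_eq[r, idx(2, t - 1)] = -1.0
    else:
        b_eq[r] = s_init

bounds = [(0, P_max)] * T + [(0, P_max)] * T + [(0, E_max)] * T + [(0, None)] * 2 * T
c = np.concatenate([np.zeros(3 * T), np.ones(T), 1e-3 * np.ones(T)])  # minimize shed (+tiny spill penalty)
res = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=bounds, method="highs")
print("unserved energy (kWh):", round(res.x[3 * T:4 * T].sum(), 3))
```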
Procedia PDF Downloads 110
1381 Large-Scale Simulations of Turbulence Using Discontinuous Spectral Element Method
Authors: A. Peyvan, D. Li, J. Komperda, F. Mashayek
Abstract:
Turbulence can be observed in a variety of fluid motions in nature and industrial applications. Recent investment in high-speed aircraft and propulsion systems has revitalized fundamental research on turbulent flows. In these systems, capturing chaotic fluid structures with different length and time scales is accomplished through the Direct Numerical Simulation (DNS) approach, since it accurately simulates flows down to the smallest dissipative scales, i.e., Kolmogorov’s scales. The discontinuous spectral element method (DSEM) is a high-order technique that uses spectral functions for approximating the solution. The DSEM code has been developed by our research group over the course of more than two decades. Recently, the code has been improved to run large cases on the order of billions of solution points. Running big simulations requires a considerable amount of RAM. Therefore, the DSEM code must be highly parallelized and able to start on multiple computational nodes of an HPC cluster with distributed memory. However, some pre-processing procedures, such as determining global element information, creating a global face list, and assigning global partitioning and element connection information of the domain for communication, must be done sequentially with a single processing core. A separate code has been written to perform the pre-processing procedures on a local machine. It stores the minimum amount of information required for the DSEM code to start in parallel, extracted from the mesh file, into text files (pre-files). It packs integer-type information in a stream binary format into pre-files that are portable between machines. The files are generated to ensure fast read performance on different file systems, such as Lustre and the General Parallel File System (GPFS). A new subroutine has been added to the DSEM code to read the startup files using parallel MPI I/O (for Lustre) in such a way that each MPI rank acquires its information from the file in parallel. In the case of GPFS, on each computational node a single MPI rank reads data from the file, which is specifically generated for that computational node, and sends it to the other ranks on the node using point-to-point non-blocking MPI communication. This way, communication takes place locally on each node, and signals do not cross the switches of the cluster. The read subroutine has been tested on Argonne National Laboratory’s Mira (GPFS), the National Center for Supercomputing Applications’ Blue Waters (Lustre), the San Diego Supercomputer Center’s Comet (Lustre), and UIC’s Extreme (Lustre). The tests showed that one file per node is suited to GPFS and that parallel MPI I/O is the best choice for the Lustre file system. The DSEM code relies on heavily optimized linear algebra operations, such as matrix-matrix and matrix-vector products, for the calculation of the solution at every time step. For this, the code can make use of its own matrix math library, BLAS, Intel MKL, or ATLAS. This fact, together with the discontinuous nature of the method, makes the DSEM code run efficiently in parallel. The results of weak scaling tests performed on Blue Waters showed a scalable and efficient performance of the code in parallel computing.
Keywords: computational fluid dynamics, direct numerical simulation, spectral element, turbulent flow
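A minimal mpi4py sketch of the Lustre-style startup read described above, in which every rank pulls only its own slice of a pre-file of integer records with a collective MPI I/O call, might look as follows (the file name, record type, and even-division assumption are hypothetical). For the GPFS pattern, one rank per node would instead read the node's file and broadcast it over a node-local communicator obtained with comm.Split_type(MPI.COMM_TYPE_SHARED).

```python
import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

fh = MPI.File.Open(comm, "startup.pre", MPI.MODE_RDONLY)
total_bytes = fh.Get_size()
n_items = total_bytes // 8                      # file holds int64 records (assumed)
per_rank = n_items // size                      # assume size divides n_items evenly
buf = np.empty(per_rank, dtype=np.int64)
offset = rank * per_rank * 8                    # byte offset of this rank's slice
fh.Read_at_all(offset, buf)                     # collective read, one slice per rank
fh.Close()

print(f"rank {rank}: read {buf.size} records starting at byte {offset}")
```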
Procedia PDF Downloads 133
1380 A Methodology for Seismic Performance Enhancement of RC Structures Equipped with Friction Energy Dissipation Devices
Authors: Neda Nabid
Abstract:
Friction-based supplemental devices have been extensively used for the seismic protection and strengthening of structures; however, the conventional use of these dampers may not necessarily lead to efficient structural performance. Conventionally designed friction dampers follow a uniform height-wise distribution pattern of slip load values for practical simplicity. This can localize structural damage in certain story levels, while the other stories accommodate a negligible amount of relative displacement demand. A practical performance-based optimization methodology is developed to tackle the structural damage localization of RC frame buildings with friction energy dissipation devices under severe earthquakes. The proposed methodology is based on the concept of uniform damage distribution theory. According to this theory, the slip load values of the friction dampers are redistributed and shifted from stories with lower relative displacement demand to stories with higher inter-story drifts, to narrow the discrepancy between the structural damage levels in different stories. In this study, the efficacy of the proposed design methodology is evaluated through the seismic performance of five different low- to high-rise RC frames equipped with friction wall dampers under six real spectrum-compatible design earthquakes. The results indicate that, compared to the conventional design, using the suggested methodology to design friction wall systems can lead, on average, to up to 40% reduction of the maximum inter-story drift and a considerably more uniform height-wise distribution of relative displacement demands under the design earthquakes.
Keywords: friction damper, nonlinear dynamic analysis, RC structures, seismic performance, structural damage
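A heavily simplified sketch of one redistribution step consistent with the uniform damage distribution concept is given below. The scaling rule, the relaxation exponent alpha, and the drift values are illustrative assumptions only; in the actual methodology, the drifts would come from repeated nonlinear dynamic analyses, and the step would be iterated until the drift profile flattens.

```python
import numpy as np

def redistribute_slip_loads(slip, drifts, alpha=0.5):
    """One redistribution step: shift slip load from low-drift to high-drift
    stories while keeping the total slip load of the frame constant."""
    scaled = slip * (drifts / drifts.mean()) ** alpha
    return scaled * slip.sum() / scaled.sum()

# Hypothetical 5-story frame; drifts come from a nonlinear analysis (not shown).
slip = np.full(5, 100.0)                           # uniform initial slip loads, kN
drifts = np.array([0.4, 0.7, 1.1, 1.4, 0.9])       # peak inter-story drifts, %
print(np.round(redistribute_slip_loads(slip, drifts), 1))
```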
Procedia PDF Downloads 226
1379 A Novel Method for Face Detection
Authors: H. Abas Nejad, A. R. Teymoori
Abstract:
Facial expression recognition is one of the open problems in computer vision. Robust neutral face recognition in real time is a major challenge for various supervised-learning-based facial expression recognition methods. This is due to the fact that supervised methods cannot accommodate all appearance variability across faces with respect to race, pose, lighting, facial biases, etc., in the limited amount of training data. Moreover, processing each and every frame to classify emotions is not required, as the user stays neutral for the majority of the time in usual applications like video chat or photo album/web browsing. Detecting the neutral state at an early stage, thereby bypassing those frames from emotion classification, would save computational power. In this work, we propose a lightweight neutral vs. emotion classification engine, which acts as a preprocessor to traditional supervised emotion classification approaches. It dynamically learns the neutral appearance at Key Emotion (KE) points using a textural statistical model constructed from a set of reference neutral frames for each user. The proposed method is made robust to various types of user head motion by accounting for affine distortions based on the textural statistical model. Robustness to dynamic shifts of KE points is achieved by evaluating similarities on a subset of neighborhood patches around each KE point, using prior information regarding the directionality of the specific facial action units acting on the respective KE point. The proposed method, as a result, improves emotion recognition (ER) accuracy and simultaneously reduces the computational complexity of the ER system, as validated on multiple databases.
Keywords: neutral vs. emotion classification, Constrained Local Model, procrustes analysis, Local Binary Pattern Histogram, statistical model
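As one concrete piece of such a pipeline, the Local Binary Pattern Histogram named in the keywords can be sketched as follows: a uniform-LBP histogram describes the texture of a patch around a KE point, and a histogram distance scores how far an incoming patch is from the learned neutral model. The patch data, histogram parameters, and chi-square distance are illustrative choices, not the paper's exact statistical model.

```python
import numpy as np
from skimage.feature import local_binary_pattern

def lbp_histogram(patch, P=8, R=1.0):
    """Uniform-LBP histogram of a grayscale patch around a key emotion point."""
    codes = local_binary_pattern(patch, P, R, method="uniform")
    n_bins = P + 2                      # uniform patterns + one 'non-uniform' bin
    hist, _ = np.histogram(codes, bins=n_bins, range=(0, n_bins), density=True)
    return hist

def chi_square(h1, h2, eps=1e-10):
    """Chi-square distance between two LBP histograms (smaller = more similar)."""
    return 0.5 * np.sum((h1 - h2) ** 2 / (h1 + h2 + eps))

rng = np.random.default_rng(1)
reference = rng.integers(0, 256, (32, 32)).astype(np.uint8)   # stand-in neutral patch
test = rng.integers(0, 256, (32, 32)).astype(np.uint8)        # stand-in incoming patch
dist = chi_square(lbp_histogram(reference), lbp_histogram(test))
print("distance to neutral model:", round(float(dist), 4))
```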
Procedia PDF Downloads 339
1378 Combination between Intrusion Systems and Honeypots
Authors: Majed Sanan, Mohammad Rammal, Wassim Rammal
Abstract:
Today, security is a major concern. Intrusion detection and prevention systems and honeypots can be used to mitigate attacks. Many researchers have proposed the use of various IDSs (Intrusion Detection Systems) from time to time. Some of these combine the features of two or more IDSs and are called hybrid intrusion detection systems. Most researchers combine the features of signature-based and anomaly-based detection methodologies. For a signature-based IDS, if an attacker attacks slowly and in an organized way, the attack may go undetected through the IDS, as signatures include factors based on the duration of the events, which the attacker's actions do not match. Sometimes, for an unknown attack, there is no updated signature, or an attacker attacks in the meantime, while the database is being updated. Thus, signature-based IDSs fail to detect unknown attacks. Anomaly-based IDSs suffer from many false-positive readings. So there is a need to hybridize those IDSs so that they can overcome each other's shortcomings. In this paper, we propose a new approach to the IDS which is more efficient than a traditional IDS. The IDS is based on honeypot technology and an anomaly-based detection methodology. We designed an architecture for the IDS in Packet Tracer and then implemented it in real time. We discuss the experimental results: both the honeypot and the anomaly-based IDS have some shortcomings, but if we hybridize these two technologies, the newly proposed hybrid intrusion detection system (HIDS) is capable of overcoming these shortcomings with much enhanced performance. In this paper, we present a modified hybrid intrusion detection system (HIDS) that combines the positive features of two different detection methodologies: the honeypot methodology and the anomaly-based intrusion detection methodology. In the experiment, we first ran both intrusion detection systems individually, then together, and recorded the data from time to time. From the data, we can conclude that the resulting IDS is much better at detecting intrusions than the existing IDSs.
Keywords: security, intrusion detection, intrusion prevention, honeypot, anomaly-based detection, signature-based detection, cloud computing, KFSensor
Procedia PDF Downloads 383
1377 Performance Evaluation of Using Genetic Programming Based Surrogate Models for Approximating Simulation Complex Geochemical Transport Processes
Authors: Hamed K. Esfahani, Bithin Datta
Abstract:
Transport of reactive chemical contaminant species in groundwater aquifers is a complex and highly non-linear physical and geochemical process, especially in real-life scenarios. Simulating this transport process involves solving complex nonlinear equations and generally requires huge computational time for a given aquifer study area. Development of optimal remediation strategies in aquifers may require repeated solution of such complex numerical simulation models. To overcome this computational limitation and improve the feasibility of a large number of repeated simulations, trained Genetic Programming (GP) based surrogate models are developed to approximately simulate such complex transport processes. The transport process of acid mine drainage, a hazardous pollutant, is first simulated using a numerical simulation model, HYDROGEOCHEM 5.0, for a contaminated aquifer at a historic mine site. The simulation model solution results for the illustrative contaminated aquifer site are then approximated by training and testing a GP-based surrogate model. Performance evaluation of the ensemble GP models as surrogates for the reactive species transport in groundwater demonstrates the feasibility of their use and the associated computational advantages. The results show the efficiency and feasibility of using ensemble GP surrogate models as approximate simulators of complex hydrogeologic and geochemical processes in a contaminated groundwater aquifer, incorporating uncertainties in the historic mine site.
Keywords: geochemical transport simulation, acid mine drainage, surrogate models, ensemble genetic programming, contaminated aquifers, mine sites
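A minimal sketch of the surrogate idea, using the open-source gplearn library as a stand-in for the authors' ensemble GP implementation: an expensive simulator is sampled to produce training pairs, a symbolic regressor is evolved on them, and the surrogate then predicts at new inputs at negligible cost. The toy response surface and all hyperparameters are assumptions; an ensemble would train several such regressors on resampled data.

```python
import numpy as np
from gplearn.genetic import SymbolicRegressor

# Stand-in for the expensive simulator: any smooth nonlinear response surface.
def expensive_simulator(X):
    return np.sin(X[:, 0]) + 0.5 * X[:, 1] ** 2

rng = np.random.default_rng(2)
X_train = rng.uniform(-2, 2, (200, 2))      # sampled simulator inputs
y_train = expensive_simulator(X_train)      # simulator outputs (training targets)

surrogate = SymbolicRegressor(population_size=500, generations=20,
                              function_set=("add", "sub", "mul", "sin"),
                              parsimony_coefficient=0.01, random_state=0)
surrogate.fit(X_train, y_train)

X_test = rng.uniform(-2, 2, (50, 2))
err = np.abs(surrogate.predict(X_test) - expensive_simulator(X_test)).mean()
print("mean absolute surrogate error:", round(float(err), 4))
print(surrogate._program)                   # the evolved symbolic expression
```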
Procedia PDF Downloads 277
1376 Towards a Robust Patch Based Multi-View Stereo Technique for Textureless and Occluded 3D Reconstruction
Authors: Ben Haines, Li Bai
Abstract:
Patch-based reconstruction methods have been, and still are, among the top-performing approaches to 3D reconstruction. Their local approach to refining the position and orientation of a patch, free of global minimisation and independent of surface smoothness, makes patch-based methods extremely powerful in recovering fine-grained detail of an object's surface. However, patch-based approaches still fail to faithfully reconstruct textureless or highly occluded surface regions; thus, though performing well under lab conditions, they deteriorate in industrial or real-world situations. They are also computationally expensive. Current patch-based methods generate point clouds with holes in textureless or occluded regions that require expensive energy-minimisation techniques to fill and interpolate into a high-fidelity reconstruction. Such shortcomings hinder the adaptation of these methods for industrial applications, where object surfaces are often highly textureless and the speed of reconstruction is an important factor. This paper presents ongoing work towards a multi-resolution approach to address these problems, utilising particle swarm optimisation to reconstruct high-fidelity geometry and increasing robustness to textureless features through an adapted approach to normalised cross-correlation. The work also aims to speed up the reconstruction using advances in GPU technologies and to remove the need for costly initialisation and expansion. Through the combination of these enhancements, it is the intention of this work to create denser patch clouds, even in textureless regions, within a reasonable time. Initial results show the potential of such an approach to construct denser point clouds with an accuracy comparable to that of the current top-performing algorithms.
Keywords: 3D reconstruction, multiview stereo, particle swarm optimisation, photo consistency
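The photo-consistency score at the heart of such methods can be sketched directly: the normalised cross-correlation (NCC) of corresponding patches across views, averaged over view pairs. The small epsilon in the normalisation is exactly where textureless (near-constant) patches break down, which is the failure mode the adapted NCC targets; the patch data below are synthetic stand-ins for projected patches.

```python
import numpy as np

def ncc(patch_a, patch_b, eps=1e-8):
    """Normalised cross-correlation of two image patches, in [-1, 1]."""
    a = patch_a.astype(np.float64).ravel()
    b = patch_b.astype(np.float64).ravel()
    a = (a - a.mean()) / (a.std() + eps)   # eps keeps flat (textureless) patches finite
    b = (b - b.mean()) / (b.std() + eps)
    return float(np.dot(a, b) / a.size)

def photo_consistency(patches):
    """Mean pairwise NCC of a patch's projections into several views."""
    scores = [ncc(patches[i], patches[j])
              for i in range(len(patches)) for j in range(i + 1, len(patches))]
    return float(np.mean(scores))

rng = np.random.default_rng(3)
base = rng.integers(0, 256, (11, 11)).astype(np.float64)
views = [base + rng.normal(0, 5, base.shape) for _ in range(3)]  # noisy reprojections
print("photo-consistency:", round(photo_consistency(views), 3))
```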
Procedia PDF Downloads 204
1375 Sensitivity Analysis of Prestressed Post-Tensioned I-Girder and Deck System
Authors: Tahsin A. H. Nishat, Raquib Ahsan
Abstract:
Sensitivity analysis of the design parameters of an optimization procedure can become a significant factor in designing any structural system. The objectives of the study are to analyze the sensitivity of the deck slab thickness parameter obtained from both the conventional and the optimum design methodologies of a pre-stressed post-tensioned I-girder and deck system, and to compare the relative significance of slab thickness. For the analysis of the conventional method, the values of 14 design parameters obtained by the conventional iterative design of a real-life I-girder bridge project have been considered. For the analysis of the optimization method, cost optimization of this system has been performed using the global optimization methodology 'Evolutionary Operation (EVOP)'. The problem, from which the optimum values of the 14 design parameters have been obtained, contains 14 explicit and 46 implicit constraints. For both types of design parameters, a sensitivity analysis has been conducted on the deck slab thickness parameter, which can become highly sensitive at the obtained optimum solution. Deviations of the slab thickness on both the upper and lower side of its optimum value have been considered, reflecting its realistic possible range of variation during construction. In this procedure, the remaining parameters have been kept unchanged. For small deviations from the optimum value, compliance with the explicit and implicit constraints has been examined, and variations in cost have also been estimated. It was found that, without violating any constraint, the deck slab thickness obtained by the conventional method can be increased by up to 25 mm, whereas the slab thickness obtained by cost optimization can be increased by only 0.3 mm. This result suggests that slab thickness is less sensitive in the conventional method of design. Therefore, for realistic design purposes, a sensitivity analysis should be conducted for either design procedure of the girder and deck system.
Keywords: sensitivity analysis, optimum design, evolutionary operations, PC I-girder, deck system
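The perturbation procedure described, varying one parameter around its optimum while holding the others fixed and checking cost and constraints, can be sketched as follows; the cost model, constraint set, and parameter values are hypothetical placeholders for the 14-parameter bridge design problem.

```python
import numpy as np

# Hypothetical stand-ins: a cost model and a feasibility check over the constraint set.
def cost(params):
    return 1000 + 2.5 * params["slab_thickness"] + 0.8 * sum(params.values())

def feasible(params):
    return 150 <= params["slab_thickness"] <= 250   # placeholder constraint set

optimum = {"slab_thickness": 200.0, "girder_depth": 1800.0}  # mm, illustrative values
for delta in np.arange(-30, 31, 5):                          # perturb one parameter at a time
    trial = dict(optimum, slab_thickness=optimum["slab_thickness"] + delta)
    status = "OK" if feasible(trial) else "violates constraints"
    print(f"delta={delta:+5.1f} mm  cost={cost(trial):8.1f}  {status}")
```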
Procedia PDF Downloads 137
1374 Rights, Differences and Inclusion: The Role of Transdisciplinary Approach in the Education for Diversity
Authors: Ana Campina, Maria Manuela Magalhaes, Eusebio André Machado, Cristina Costa-Lobo
Abstract:
The inclusive school advocates respect for differences, equal opportunities and quality education for all, including students with special educational needs. In the pursuit of educational equity, guaranteeing equality in access and results, it becomes the responsibility of the school to recognize students' needs, adapting to the various styles and rhythms of learning and ensuring the adequacy of curricula, strategies and resources, both material and human. This paper presents a set of theoretical reflections at the disciplinary interface between the legal and education sciences and school administration and management, with the aim of understanding the real characteristics of inclusion in balance with inclusion policies and the need for an education for human rights, especially for diversity. Considering today's social complexity alongside the important educational instruments and strategies, mostly embodied in policies, this paper aims to expose the existing contexts that run counter to the laws, policies and the needs of inclusive education. More than a single study, this research aims to develop a map of the reality and guidelines for implementing action. The results point to the usefulness and pertinence of a school in which educational managers, teachers, parents and students are involved in the creation, implementation and monitoring of flexible curricula adapted to the educational needs of students, promoting collaborative work among teachers. We are then faced with a scenario that points to the need to reflect on the legislation and curricular management of inclusive classes and to operationalize the processes of elaborating curricular adaptations and differentiation in the classroom. The transdisciplinary approach is well suited to pedagogical and social education, using the human rights pairing of teaching and learning, supported by inclusion laws, in line with the realistic needs of building an effective and successful society.
Keywords: rights, transdisciplinary, inclusion policies, education for diversity
Procedia PDF Downloads 389
1373 ESP: Peculiarities of Teaching Psychology in English to Russian Students
Authors: Ekaterina A. Redkina
Abstract:
The necessity and importance of teaching professionally oriented content in English need no proof nowadays. Consequently, the ability to share personal ESP teaching experience seems of great importance. This paper is based on 8 years of ESP and EFL teaching experience at Moscow State Linguistic University, Moscow, Russia, and presents a theoretical analysis of the specifics, possible problems, and perspectives of teaching psychology in English to Russian psychology students. The paper concerns different issues that are common to different ESP classrooms and familiar to different teachers. Among them are: designing an ESP curriculum (for psychologists in this case), finding the balance between content and language in the classroom, the main teaching principles (the 4 C's), and the choice of assessment techniques and teaching material. The main objective of teaching psychology in English to Russian psychology students is developing the knowledge and skills essential for professional psychologists. Belonging to the international professional community presupposes high-level content-specific knowledge and skills, a high level of linguistic skills and cross-cultural linguistic ability, and, finally, a high level of professional etiquette. Thus, teaching psychology in English pursues three main outcomes: content, language and professional skills. The paper provides an explanation of each of the outcomes, with examples. Particular attention is paid to lesson structure, its objectives and the difference between a typical EFL and an ESP lesson. An attempt is also made to find commonalities between teaching ESP and CLIL. One approach states that CLIL is more common in schools, while ESP is more common in higher education. The paper argues that CLIL methodology can be successfully used in ESP teaching and that many CLIL activities are also well adapted for professional purposes. The paper provides insights into the process of teaching psychologists in Russia, real teaching experience, and teaching techniques that have proved efficient over time.
Keywords: ESP, CLIL, content, language, psychology in English, Russian students
Procedia PDF Downloads 609
1372 Simultaneous Adsorption and Characterization of NOx and SOx Emissions from Power Generation Plant on Sliced Porous Activated Carbon Prepared by Physical Activation
Authors: Muhammad Shoaib, Hassan M. Al-Swaidan
Abstract:
Air pollution has been a major challenge for scientists today, due to the release of toxic emissions from various industries such as power plants, desalination plants, industrial processes and transportation vehicles. Harmful emissions into the air represent an environmental pressure that reflects negatively on human health and productivity, leading to a real loss in the national economy. A variety of air pollutants, in the form of carbon oxides, hydrocarbons, nitrogen oxides, sulfur oxides, suspended particulate material, etc., is present in the air due to the combustion of different types of fuels, like crude oil, diesel oil and natural gas. In the Kingdom of Saudi Arabia, electricity is generated by burning crude oil, diesel or natural gas in the turbines of electricity stations. Of these three, crude oil is used most extensively for electricity generation. Due to the burning of crude oil, heavy contents of gaseous pollutants, such as sulfur oxides (SOx) and nitrogen oxides (NOx), are ultimately discharged into the environment and pose a serious environmental threat. The breakthrough point in lab studies using 1 g of sliced activated carbon (SAC) adsorbent comes after 20 and 30 minutes for NOx and SOx, respectively, whereas in the PP8 plant the breakthrough point comes within seconds. The saturation point in lab studies comes after 100 and 120 minutes, while for the actual PP8 plant it comes after 60 and 90 minutes for NOx and SOx adsorption, respectively. Surface characterization of NOx and SOx adsorption on SAC confirms the presence of the corresponding peaks in the FT-IR spectrum. A CHNS study verifies that the SAC is suitable for NOx and SOx, along with some other C- and H-containing compounds, coming out of the stack emission stream from the turbines of a power plant.
Keywords: activated carbon, flue gases, NOx and SOx adsorption, physical activation, power plants
Procedia PDF Downloads 347
1371 Heritage, Cultural Events and Promises for Better Future: Media Strategies for Attracting Tourism during the Arab Spring Uprisings
Authors: Eli Avraham
Abstract:
The Arab Spring was widely covered in the global media, and the number of Western tourists traveling to the area began to fall. The goal of this study was to analyze which media strategies marketers in Middle Eastern countries chose to employ in their attempts to repair the negative image of the area in the wake of the Arab Spring. Several studies have been published concerning the image-restoration strategies of destinations during crises around the globe; however, these strategies were not part of an overarching theory, conceptual framework or model from the fields of crisis communication and image repair. The conceptual framework used in the current study was the ‘multi-step model for altering place image’, which offers three types of strategies: source, message and audience. Three research questions were used: 1. What public relations crisis techniques and advertising campaign components were used? 2. What media policies and relationships with the international media were adopted by Arab officials? 3. Which marketing initiatives (such as cultural and sports events) were promoted? This study is based on qualitative content analysis of four types of data: (1) advertising components (slogans, visuals and text); (2) press interviews with Middle Eastern officials and marketers; (3) official media policy adopted by government decision-makers (e.g., boycotting or arresting newspeople); and (4) marketing initiatives (e.g., organizing heritage festivals and cultural events). The data were located in three channels from December 2010, when the events started, to September 30, 2013: (1) Internet and video-sharing websites: YouTube and Middle Eastern countries' national tourism board websites; (2) news reports from two international media outlets, The New York Times and Ha’aretz, which are considered quality newspapers that focus on foreign news and tend to criticize institutions; (3) global tourism news websites: eTurboNews and ‘Cities and countries branding’. Using the ‘multi-step model for altering place image’, the analysis reveals that Middle Eastern marketers and officials used three kinds of strategies to repair their countries' negative image: 1. Source (cooperation and media relations; complying with, threatening and blocking the media; and finding alternatives to the traditional media); 2. Message (ignoring, limiting, narrowing or reducing the scale of the crisis; acknowledging the negative effect of an event’s coverage and assuring a better future; promotion of multiple facets, exhibitions and softening of the ‘hard’ image; hosting spotlight sporting and cultural events; spinning liabilities into assets; geographic dissociation from the Middle East region; ridiculing the existing stereotype); and 3. Audience (changing the target audience by addressing others; emphasizing similarities and relevance to specific target audiences). It appears that dealing with their image problems will continue to be a challenge for officials and marketers of Middle Eastern countries until the region stabilizes and its regional conflicts are resolved.
Keywords: Arab Spring, cultural events, image repair, Middle East, tourism marketing
Procedia PDF Downloads 285
1370 Economic Impacts of Sanctuary and Immigration and Customs Enforcement Policies: Inclusive and Exclusive Institutions
Authors: Alexander David Natanson
Abstract:
This paper focuses on the effect of sanctuary and Immigration and Customs Enforcement (ICE) policies on local economies. "Sanctuary cities" refers to municipal jurisdictions that limit their cooperation with the federal government's efforts to enforce immigration law. Using county-level data from the American Community Survey and ICE data on economic indicators from 2006 to 2018, this study isolates the effects of local immigration policies on U.S. counties. The investigation is accomplished by simultaneously studying the policies' effects in counties where immigrants' families are persecuted via collaboration with ICE, in contrast to counties that provide protections. The analysis includes a difference-in-differences and two-way fixed-effects model. Results are robust to nearest-neighbor matching, to the random assignment of treatment, to estimations using different cutoffs for immigration policies, and to a regression discontinuity model comparing bordering counties with opposite policies. Results are also robust after restricting the data to single-year policy adoption, using the Sun and Abraham estimator, and with event-study estimation to deal with the staggered-treatment issue. In addition, the study reverses the estimation to understand what drives the decision to choose policies, in order to detect the presence of reverse causality biases in the estimated policy impact on economic factors. The evidence demonstrates that providing protections to undocumented immigrants increases economic activity. The estimates show gains in per capita income ranging from 3.1 to 7.2 percent, in median wages from 1.7 to 2.6 percent, and in GDP from 2.4 to 4.1 percent. Regarding labor, sanctuary counties saw increases in total employment of 2.3 to 4 percent, and the unemployment rate declined by 12 to 17 percent. The data further show that ICE policies have no statistically significant effects on income, median wages, or GDP, but adverse effects on total employment, with declines of 1 to 2 percent, mostly in rural counties, and an increase in unemployment of around 7 percent in urban counties. In addition, results show a decline in the foreign-born population in ICE counties but no change in sanctuary counties. The study also finds similar results for sanctuary counties when separating the data by urban and rural counties, educational attainment, gender, ethnic group, economic quintile, and number of business establishments. The takeaway from this study is that institutional inclusion shapes the dynamic nature of an economy, as inclusion allows for economic expansion through the extension of fundamental freedoms to newcomers. Inclusive policies show positive effects on economic outcomes with no evident increase in population. To make sense of these results, the hypothesis and theoretical model propose that inclusive immigration policies play an essential role in conditioning the effect of immigration by decreasing uncertainties and constraints on immigrants' interaction in their communities, decreasing the costs arising from fear of deportation or the constant fear of criminalization, and allowing immigrants to optimize their human capital.
Keywords: inclusive and exclusive institutions, post-matching, fixed effects, time trend, regression discontinuity, difference-in-differences, randomization inference, Sun and Abraham estimator
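A minimal sketch of the difference-in-differences / two-way fixed-effects estimation on a simulated county-year panel, using statsmodels (an assumption; the paper does not name its software). County and year dummies absorb level differences and common shocks, and standard errors are clustered by county.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Simulated county-year panel: 'sanctuary' counties receive treatment from 2012 on.
rng = np.random.default_rng(4)
rows = []
for c in range(100):
    sanctuary = c < 50
    for y in range(2006, 2019):
        treated = int(sanctuary and y >= 2012)
        log_income = 10 + 0.01 * (y - 2006) + 0.03 * treated + 0.2 * rng.normal()
        rows.append({"county": c, "year": y, "treated": treated, "log_income": log_income})
panel = pd.DataFrame(rows)

# Two-way fixed effects: county and year dummies absorb levels and common shocks.
twfe = smf.ols("log_income ~ treated + C(county) + C(year)", data=panel).fit(
    cov_type="cluster", cov_kwds={"groups": panel["county"]})
print(f"DiD estimate: {twfe.params['treated']:.4f} (true effect 0.03), "
      f"clustered SE: {twfe.bse['treated']:.4f}")
```

With staggered adoption dates, this naive TWFE coefficient can be biased, which is why the abstract also reports the Sun and Abraham estimator and event-study checks.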
Procedia PDF Downloads 88
1369 Heliport Remote Safeguard System Based on Real-Time Stereovision 3D Reconstruction Algorithm
Authors: Ł. Morawiński, C. Jasiński, M. Jurkiewicz, S. Bou Habib, M. Bondyra
Abstract:
With the development of optics, electronics, and computers, vision systems are increasingly used in various areas of life, science, and industry. Vision systems have a huge number of applications: they can be used in quality control, object detection, data reading (e.g., QR codes), etc. A large part of them is used for measurement purposes, and some make it possible to obtain a 3D reconstruction of the tested objects or measurement areas. 3D reconstruction algorithms are mostly based on creating depth maps from data that can be acquired by active or passive methods. Due to the specific application in airfield technology, only passive methods are applicable, because other systems already working on site can be blinded on most spectral levels. Furthermore, the reconstruction is required to work over long distances, ranging from hundreds of meters to tens of kilometers, with low loss of accuracy, even in harsh conditions such as fog, rain, or snow. In response to those requirements, HRESS (Heliport REmote Safeguard System) was developed, whose main part is a rotational head with a two-camera stereovision rig gathering images around the head in 360 degrees, along with stereovision 3D reconstruction and point cloud combination. The sub-pixel analysis introduced in the HRESS system makes it possible to obtain increased distance measurement resolution and an accuracy of about 3% for distances over one kilometer. Ultimately, this leads to more accurate and reliable measurement data in the form of a point cloud. Moreover, the program algorithm introduces operations enabling the filtering of erroneously collected data in the point cloud. All activities, from the programming to the mechanical and optical side, are aimed at obtaining the most accurate 3D reconstruction of the environment in the measurement area.
Keywords: airfield monitoring, artificial intelligence, stereovision, 3D reconstruction
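The core depth-from-disparity step of such a stereovision pipeline can be sketched with OpenCV's semi-global matcher on a synthetic rectified pair. The focal length and baseline are assumed calibration values, not HRESS parameters, and the sub-pixel disparity (SGBM's fixed-point output divided by 16) is what enables the kind of long-range resolution the abstract refers to.

```python
import numpy as np
import cv2

# Synthetic rectified pair: the right view is the left view shifted by 16 px,
# i.e. a fronto-parallel scene with a constant true disparity of 16.
rng = np.random.default_rng(5)
left = rng.integers(0, 256, (240, 320)).astype(np.uint8)
right = np.roll(left, -16, axis=1)

matcher = cv2.StereoSGBM_create(minDisparity=0, numDisparities=64, blockSize=7)
disparity = matcher.compute(left, right).astype(np.float32) / 16.0  # SGBM output is fixed-point *16

focal_px, baseline_m = 2400.0, 1.2        # assumed calibration: focal length (px), baseline (m)
valid = disparity > 0
depth = focal_px * baseline_m / disparity[valid]     # Z = f * B / d
print("median disparity:", float(np.median(disparity[valid])))   # ≈ 16
print("median depth (m):", float(np.median(depth)))              # ≈ 2400*1.2/16 = 180
```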
Procedia PDF Downloads 125
1368 Polymorphisms of the UM Genotype of CYP2C19*17 in Thais Taking Medical Cannabis
Authors: Athicha Cherdpunt, Patompong Satapornpong
Abstract:
Medical cannabis is made up of components known as cannabinoids, the two main ingredients of which are Δ9-tetrahydrocannabinol (THC) and cannabidiol (CBD). Interestingly, cannabinoids can be used in many treatments, such as for chemotherapy-induced nausea and vomiting, cachexia, anorexia nervosa, spinal cord injury and disease, epilepsy, pain, and many others. However, the adverse drug reactions (ADRs) of THC include sedation, anxiety, dizziness, appetite stimulation and impairments in driving and cognitive function. Furthermore, genetic polymorphisms of CYP2C9, CYP2C19 and CYP3A4 influence THC metabolism and might be a cause of ADRs. In particular, the CYP2C19*17 allele increases gene transcription and therefore results in an ultra-rapid metabolizer (UM) phenotype. The aim of this study is to investigate the frequency of CYP2C19*17 alleles in Thai patients who have been treated with medical cannabis. We prospectively enrolled 60 Thai patients who were treated with medical cannabis, together with their clinical data, from the College of Pharmacy, Rangsit University. DNA of each patient was isolated from EDTA blood using the Genomic DNA Mini Kit. CYP2C19*17 genotyping was conducted using the ViiA 7 real-time PCR system (ABI, Foster City, CA, USA). Of the 30 patients in the medical cannabis-induced ADRs group, 20 (67%) were female and 10 (33%) were male, with an age range of 30-69 years. The 30 patients without medical cannabis-induced ADRs (control group) consisted of 17 (57%) females and 13 (43%) males. The most common ADRs of medical cannabis treatment in the case group were dry mouth and dry throat (77%), tachycardia (70%), nausea (30%) and arrhythmia (10%). In the case group, approximately 93% of patients carried CYP2C19*1/*1 (normal metabolizers), while 7% carried CYP2C19*1/*17 (ultra-rapid metabolizers). Meanwhile, we found 90% CYP2C19*1/*1 and 10% CYP2C19*1/*17 in the control group. In this study, we identified the frequency of the CYP2C19*17 allele in a Thai population, which will support pharmacogenetic biomarkers for screening to avoid ADRs of medical cannabis treatment.
Keywords: CYP2C19, allele frequency, ultra rapid metabolizer, medical cannabis
Procedia PDF Downloads 1091367 Implications of Optimisation Algorithm on the Forecast Performance of Artificial Neural Network for Streamflow Modelling
Authors: Martins Y. Otache, John J. Musa, Abayomi I. Kuti, Mustapha Mohammed
Abstract:
The performance of an artificial neural network (ANN) is contingent on a host of factors, for instance, the network optimisation scheme. In view of this, the study examined the general implications of the ANN training optimisation algorithm for its forecast performance. To this end, the Bayesian regularisation (Br), Levenberg-Marquardt (LM), and adaptive-learning gradient descent with momentum (GDM) algorithms were employed under two ANN structural configurations: (1) a single-hidden-layer and (2) a double-hidden-layer feedforward backpropagation network. The results revealed that GDM, with its adaptive learning capability, generally used a relatively shorter time in both training and validation phases than the LM and Br algorithms, though learning may not be consummated; this held in all instances, including the prediction of extreme flow conditions 1 day and 5 days ahead. In specific statistical terms, average model performance efficiency measured by the coefficient of efficiency (CE) statistic was Br: 98%, 94%; LM: 98%, 95%; and GDM: 96%, 96%, for the training and validation phases, respectively. However, on the basis of relative error distribution statistics (MAE, MAPE, and MSRE), GDM performed better than the others overall. Based on the findings, the adoption of ANNs for real-time forecasting should employ training algorithms that avoid computational overhead such as that of LM, which requires computation of the Hessian matrix, takes protracted time, and is sensitive to initial conditions; to this end, Br and other forms of gradient descent with momentum should be adopted, considering overall time expenditure and forecast quality as well as mitigation of network overfitting. On the whole, it is recommended that evaluation also consider the implications of (i) data quality and quantity and (ii) transfer functions for overall network forecast performance. Keywords: streamflow, neural network, optimisation, algorithm
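For illustration, the sketch below implements the gradient-descent-with-momentum update on a single-hidden-layer feedforward network, together with the CE statistic, here assumed to be the Nash-Sutcliffe coefficient of efficiency as is conventional in streamflow modelling. It is a generic sketch of the GDM scheme, not the authors' exact configuration.

```python
import numpy as np

def coefficient_of_efficiency(obs, sim):
    """Nash-Sutcliffe CE (assumed form): 1 - SSE / variance of obs about mean."""
    obs, sim = np.asarray(obs), np.asarray(sim)
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

def train_gdm(X, y, n_hidden=10, lr=0.05, mu=0.9, epochs=2000, seed=0):
    """Single-hidden-layer feedforward net trained by gradient descent with
    momentum: v <- mu*v - lr*grad; w <- w + v. X is (N, d); y is (N, 1)."""
    rng = np.random.default_rng(seed)
    W1 = rng.normal(0.0, 0.5, (X.shape[1], n_hidden))
    W2 = rng.normal(0.0, 0.5, (n_hidden, 1))
    v1, v2 = np.zeros_like(W1), np.zeros_like(W2)
    for _ in range(epochs):
        H = np.tanh(X @ W1)                                # hidden activations
        err = H @ W2 - y                                   # output error (linear output)
        g2 = H.T @ err / len(y)                            # dLoss/dW2
        g1 = X.T @ ((err @ W2.T) * (1 - H**2)) / len(y)    # dLoss/dW1
        v2 = mu * v2 - lr * g2                             # momentum updates
        v1 = mu * v1 - lr * g1
        W2, W1 = W2 + v2, W1 + v1
    return W1, W2

# Toy usage on a noisy synthetic "flow" relation
rng = np.random.default_rng(1)
X = rng.random((200, 3))
y = X.sum(axis=1, keepdims=True) + 0.05 * rng.standard_normal((200, 1))
W1, W2 = train_gdm(X, y)
print(coefficient_of_efficiency(y.ravel(), (np.tanh(X @ W1) @ W2).ravel()))
```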
Procedia PDF Downloads 1521366 Trends in the Incidence of Bloodstream Infections in Patients with Hematological Malignancies in the Period 1991–2012
Authors: V. N. Chebotkevich, E. E. Schetinkina, V. V. Burylev, E. I. Kaytandzhan, N. P. Stizhak
Abstract:
Objective: Bloodstream infections (BSI) are severe, life-threatening illnesses for immunocompromised patients with hematological malignancies. We report the trend in bloodstream infections in this group of patients over the period 1991-2012. Methods: A total of 4742 blood samples were investigated. All blood cultures were incubated in a continuous monitoring system for 7 days before negative cultures were discarded. When a culture signaled positive, the organism was identified by conventional methods. Real-time polymerase chain reaction (PCR) was used for the detection of human herpesvirus 6 (HHV-6), Cytomegalovirus (CMV), and Epstein-Barr virus (EBV). Results: Between 1991 and 2001, Gram-positive bacteria (Staphylococcus epidermidis, Staphylococcus aureus) were the most common organisms isolated (70.9%), while Gram-negative rods (Escherichia coli, Klebsiella spp., Pseudomonas spp.) accounted for 29.1%. In the next decade, 2002-2012, the share of Gram-negative bacteria increased to 40.2%. Bacteremia was shown to be significantly more frequent against a background of detectable Cytomegalovirus- and Epstein-Barr-virus-specific DNA in blood. In recent years, an increased frequency of micromycetes has been registered in the blood of patients with hematological malignancies (Candida spp. was predominant). Conclusion: Accurate and timely detection of BSI is important in determining the appropriate treatment of infectious complications in patients with hematological malignancies. The isolation of Staphylococcus epidermidis from blood cultures remains a clinical dilemma for physicians and microbiologists, but in many cases this agent is of clinical significance in immunocompromised patients with hematological malignancies. The role of CMV and EBV in the development of bacteremia was demonstrated. Keywords: infectious complications, blood stream infections, bacteremia, hemoblastosis
Procedia PDF Downloads 3521365 A Novel Heuristic for Analysis of Large Datasets by Selecting Wrapper-Based Features
Authors: Bushra Zafar, Usman Qamar
Abstract:
Large sample sizes and high dimensionality undermine the effectiveness of conventional data mining methodologies. Data mining techniques are important tools for extracting knowledge from a variety of databases; they provide supervised learning in the form of classification, designing models that describe vital data classes, where the structure of the classifier is based on the class attribute. Classification efficiency and accuracy are often influenced to a great extent by noisy and undesirable features in real application data sets. The inherent nature of a data set often masks its quality and leaves few practical approaches for analysis. To our knowledge, we present for the first time a new approach for investigating the structure and quality of datasets through a targeted analysis localizing their noisy and irrelevant features. Feature selection is a key pre-processing step in machine learning: it selects a subset of features, reducing the feature space according to a given evaluation criterion. The primary objective of this study is to trim down the scope of the given data sample by searching for a small set of important features that may yield good classification performance. For this purpose, a heuristic for wrapper-based feature selection using a genetic algorithm is employed, with an external classifier used for discriminative feature selection. Features are selected based on their number of occurrences in the chosen chromosomes. Sample datasets have been used to demonstrate the proposed idea. The proposed method achieves an average accuracy of about 95% across different datasets. Experimental results illustrate that the proposed algorithm increases prediction accuracy for different diseases. Keywords: data mining, genetic algorithm, KNN algorithms, wrapper-based feature selection
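A minimal sketch of the wrapper idea follows: a genetic algorithm evolves binary feature masks, a KNN classifier's cross-validated accuracy serves as the wrapper fitness, and features are finally chosen by their occurrence count across the best chromosomes, mirroring the occurrence-based rule described above. The dataset, population size, and majority-vote threshold are illustrative assumptions, not the paper's settings.

```python
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(42)
X, y = load_breast_cancer(return_X_y=True)    # stand-in medical dataset
n_feat = X.shape[1]

def fitness(mask):
    """Wrapper fitness: cross-validated KNN accuracy on the masked features."""
    if not mask.any():
        return 0.0
    return cross_val_score(KNeighborsClassifier(5), X[:, mask], y, cv=3).mean()

pop = rng.random((20, n_feat)) < 0.5           # random binary chromosomes
for _ in range(15):                            # generations (illustrative)
    scores = np.array([fitness(c) for c in pop])
    parents = pop[np.argsort(scores)[::-1][:10]]   # keep the fittest half
    children = []
    for _ in range(10):
        a, b = parents[rng.integers(10, size=2)]
        cut = rng.integers(1, n_feat)              # one-point crossover
        child = np.concatenate([a[:cut], b[cut:]])
        flip = rng.random(n_feat) < 0.02           # bit-flip mutation
        children.append(np.where(flip, ~child, child))
    pop = np.vstack([parents] + children)

# Final selection by occurrence count in the fittest chromosomes;
# the majority-vote threshold (6 of 10) is an assumption.
top = pop[np.argsort([fitness(c) for c in pop])[::-1][:10]]
print("selected features:", np.flatnonzero(top.sum(axis=0) >= 6))
```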
Procedia PDF Downloads 3161364 Open Reading Frame Marker-Based Capacitive DNA Sensor for Ultrasensitive Detection of Escherichia coli O157:H7 in Potable Water
Authors: Rehan Deshmukh, Sunil Bhand, Utpal Roy
Abstract:
We report the label-free electrochemical detection of Escherichia coli O157:H7 (ATCC 43895) in potable water using a DNA probe as the sensing molecule, targeting an open reading frame marker. The indium tin oxide (ITO) surface was modified with an organosilane, and glutaraldehyde was applied as a linker to fabricate the DNA sensor chip. Non-Faradaic electrochemical impedance spectroscopy (EIS) behavior was investigated at each step of sensor fabrication using cyclic voltammetry, impedance, phase, relative permittivity, capacitance, and admittance. Atomic force microscopy (AFM) and scanning electron microscopy (SEM) revealed significant changes in surface topography during DNA sensor chip fabrication. The decrease in the percentage of pinholes from 2.05 (bare ITO) to 1.46 (after DNA hybridization) suggested capacitive behavior of the DNA sensor chip. The non-Faradaic EIS studies of the DNA sensor chip showed a systematic declining trend in capacitance as well as relative permittivity upon DNA hybridization. The DNA sensor chip exhibited linearity over 0.5 to 25 pg/10 mL for E. coli O157:H7 (ATCC 43895). The limit of detection (LOD) at 95% confidence, estimated by logistic regression, was 0.1 pg DNA/10 mL of E. coli O157:H7 (equivalent to 13.67 CFU/10 mL), with a p-value of 0.0237. Moreover, the fabricated DNA sensor chip showed no significant cross-reactivity with closely and distantly related bacteria such as Escherichia coli MTCC 3221, Escherichia coli O78:H11 MTCC 723, and Bacillus subtilis MTCC 736. Consequently, the results obtained in our study demonstrate the possible application of the developed DNA sensor chip for detecting E. coli O157:H7 ATCC 43895 in real water samples as well. Keywords: capacitance, DNA sensor, Escherichia coli O157:H7, open reading frame marker
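The abstract reports an LOD estimated by logistic regression at 95% confidence; the sketch below illustrates one common way such an estimate can be made, by fitting detection probability against log concentration and solving for the concentration at 95% detection probability. The replicate detect/no-detect calls are invented for illustration; the study's actual calibration data and regression details are not given.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical detect/no-detect replicates at spiked levels (pg DNA/10 mL)
conc = np.repeat([0.01, 0.05, 0.1, 0.5, 1.0, 5.0], 5)
hits = np.array([0, 0, 0, 0, 0,   # 0.01: never detected
                 0, 1, 0, 0, 1,   # 0.05: occasionally detected
                 1, 1, 0, 1, 1,   # 0.1: mostly detected
                 1, 1, 1, 1, 1,   # 0.5 and above: always detected
                 1, 1, 1, 1, 1,
                 1, 1, 1, 1, 1])

# Fit detection probability against log10 concentration
model = LogisticRegression().fit(np.log10(conc).reshape(-1, 1), hits)
b0, b1 = model.intercept_[0], model.coef_[0, 0]

# LOD = concentration where P(detect) = 0.95: logit(0.95) = b0 + b1*log10(c)
log10_lod = (np.log(0.95 / 0.05) - b0) / b1
print(f"estimated LOD: {10 ** log10_lod:.2g} pg DNA/10 mL")
```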
Procedia PDF Downloads 1441363 Visualization Tool for EEG Signal Segmentation
Authors: Sweeti, Anoop Kant Godiyal, Neha Singh, Sneh Anand, B. K. Panigrahi, Jayasree Santhosh
Abstract:
This work is about developing a tool for visualization and segmentation of electroencephalograph (EEG) signals based on frequency domain features. Changes in frequency domain characteristics are correlated with changes in the mental state of the subject under study. The proposed algorithm represents changes in mental state through the powers of different frequency bands, presented as a segmented EEG signal. Many segmentation algorithms for data classification have been suggested in the literature, with applications in brain-computer interfaces, epilepsy, and cognition studies; the proposed method, however, focuses mainly on better presentation of the signal, making it a useful tool for clinicians. The algorithm performs basic filtering using band-pass and notch filters in the range 0.1-45 Hz. Advanced filtering is then performed using principal component analysis and a wavelet-transform-based de-noising method. Frequency domain features are used for segmentation, since the spectral power of different frequency bands describes the subject's mental state. Two sliding windows are further used for segmentation: one provides the time scale and the other assigns the segmentation rule. The segmented data is displayed second by second, successively, with different color codes, and the segment length can be selected as the objective requires. The proposed algorithm has been tested on an EEG data set obtained from the University of California, San Diego's online data repository. The proposed tool gives better visualization of the signal as segmented epochs of the desired length, representing power spectrum variation in the data. The algorithm takes data points relative to the sampling frequency for each time frame, so it can be extended for real-time visualization with a desired epoch length. Keywords: de-noising, multi-channel data, PCA, power spectra, segmentation
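A minimal sketch of the band-power segmentation idea follows: filter the signal, compute relative band powers with Welch's method over successive windows, and label each window by its dominant band so that runs of equal labels form the color-coded segments. The sampling rate, band edges, and window length are assumptions; the tool's actual pipeline also includes a notch filter, PCA, and wavelet de-noising.

```python
import numpy as np
from scipy.signal import butter, filtfilt, welch

FS = 256  # sampling rate in Hz (assumed; depends on the EEG recording)
BANDS = {"delta": (0.5, 4), "theta": (4, 8), "alpha": (8, 13),
         "beta": (13, 30), "gamma": (30, 45)}

def preprocess(x, fs=FS):
    """Basic band-pass filtering (0.5 Hz low cut used here for numerical
    stability; the tool's own pipeline spans 0.1-45 Hz plus a notch filter)."""
    b, a = butter(4, [0.5, 45.0], btype="band", fs=fs)
    return filtfilt(b, a, x)

def band_powers(segment, fs=FS):
    """Relative spectral power of one window in each EEG band."""
    f, pxx = welch(segment, fs=fs, nperseg=min(len(segment), fs))
    powers = {}
    for name, (lo, hi) in BANDS.items():
        m = (f >= lo) & (f < hi)
        powers[name] = pxx[m].sum() / pxx.sum()  # uniform f-grid: sums ~ integrals
    return powers

def segment_labels(x, fs=FS, win_s=1.0):
    """Label each successive window by its dominant band; runs of equal
    labels form the color-coded segments displayed second by second."""
    n = int(win_s * fs)
    labels = []
    for start in range(0, len(x) - n + 1, n):
        bp = band_powers(x[start:start + n], fs)
        labels.append(max(bp, key=bp.get))  # dominant band for this window
    return labels
```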
Procedia PDF Downloads 397