Search results for: dynamic systems
8577 Intrusion Detection in SCADA Systems
Authors: Leandros A. Maglaras, Jianmin Jiang
Abstract:
The protection of national infrastructures from cyberattacks is one of the main issues for national and international security. The funded European Framework-7 (FP7) research project CockpitCI introduces intelligent intrusion detection, analysis and protection techniques for Critical Infrastructures (CI). The paradox is that CIs massively rely on the newest interconnected and vulnerable Information and Communication Technology (ICT), whilst the control equipment, legacy software/hardware, is typically old. Such a combination of factors may lead to very dangerous situations, exposing systems to a wide variety of attacks. To overcome such threats, the CockpitCI project combines machine learning techniques with ICT technologies to produce advanced intrusion detection, analysis and reaction tools that provide intelligence to field equipment. This will allow the field equipment to make local decisions in order to self-identify and self-react to abnormal situations introduced by cyberattacks. In this paper, an intrusion detection module capable of detecting malicious network traffic in a Supervisory Control and Data Acquisition (SCADA) system is presented. Malicious data in a SCADA system disrupt its correct functioning and tamper with its normal operation. The One-Class Support Vector Machine (OCSVM) is an intrusion detection mechanism that needs neither labeled data for training nor any information about the kind of anomaly to expect during detection. This feature makes it ideal for processing SCADA environment data and for automating SCADA performance monitoring. The OCSVM module developed is trained offline on network traces and detects anomalies in the system in real time.
The module is part of an intrusion detection system (IDS) developed under the CockpitCI project and communicates with the other parts of the system by exchanging IDMEF messages that carry information about the source of the incident, the time, and a classification of the alarm.
Keywords: cyber-security, SCADA systems, OCSVM, intrusion detection
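The one-class SVM approach described above can be sketched in a few lines. The snippet below is an illustrative reconstruction, not the CockpitCI module itself: the feature vectors (packet rate, payload size) and the `nu` setting are assumptions for demonstration only.

```python
import numpy as np
from sklearn.svm import OneClassSVM

rng = np.random.default_rng(0)

# Offline training: feature vectors extracted from benign SCADA network
# traces (e.g. packet rate, payload size) -- synthetic stand-ins here.
normal_traffic = rng.normal(loc=[100.0, 50.0], scale=[5.0, 3.0], size=(500, 2))

# nu bounds the fraction of training points treated as outliers;
# no labels and no attack examples are needed.
detector = OneClassSVM(kernel="rbf", gamma="scale", nu=0.05)
detector.fit(normal_traffic)

# Online detection: +1 = normal, -1 = anomalous.
unseen_normal = np.array([[101.0, 49.0]])
flooding_attack = np.array([[900.0, 400.0]])  # e.g. a traffic burst
print(detector.predict(unseen_normal))    # expected: [1]
print(detector.predict(flooding_attack))  # expected: [-1]
```

Because the model only ever sees normal traffic, anything far from the learned boundary is flagged, which matches the paper's motivation for choosing OCSVM in a SCADA setting where labeled attack data are scarce.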
Procedia PDF Downloads 559
8576 The Effect of Ethylene Propylene Diene Monomer on the Rheological Properties of Bitumen
Authors: Emre Eren, Burak Yigit Katanalp, Murat Tastan, Perviz Ahmedzade, Çigdem Canbay Turkyilmaz, Emrah Turkyilmaz
Abstract:
This study aimed to investigate the mechanical and high-temperature rheological properties of Ethylene Propylene Diene Monomer (EPDM) modified bitumen. To achieve this, the neat binder was modified with the EPDM additive in different percentages, from 2% to 5%. The neat and modified binders were subjected to conventional and rheological tests, including penetration and softening point tests, as well as evaluations of their rutting performance and high-temperature viscosity characteristics. Additionally, the mixing and compaction temperatures for hot mix asphalt production were identified using a rotational viscometer. The findings indicated that EPDM is a highly effective bitumen modifier, with the high-temperature performance grade of the neat binder improving by three grades according to the Superpave asphalt grading system.
Keywords: polymer, bitumen, rheology, EPDM, dynamic mechanical analysis
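The mixing and compaction temperatures mentioned above are conventionally read off an equiviscous chart: viscosity is roughly log-linear in temperature, so a fit to the viscometer readings is inverted at target viscosities. The sketch below uses illustrative readings (not the paper's data) and the commonly used Asphalt Institute targets of about 0.17 Pa·s for mixing and 0.28 Pa·s for compaction.

```python
import numpy as np

# Rotational viscometer readings for a binder: temperature (degC) vs
# viscosity (Pa.s). The values below are illustrative, not the paper's data.
temps = np.array([135.0, 150.0, 165.0])
visc = np.array([0.55, 0.28, 0.15])

# Bitumen viscosity is roughly log-linear in temperature over this
# range, so fit log10(viscosity) = a*T + b.
a, b = np.polyfit(temps, np.log10(visc), 1)

def temp_at_viscosity(target_pa_s):
    """Temperature at which the fitted viscosity equals the target."""
    return (np.log10(target_pa_s) - b) / a

# Commonly used equiviscous targets: ~0.17 Pa.s for mixing and
# ~0.28 Pa.s for compaction (assumed here, per Asphalt Institute practice).
t_mix = temp_at_viscosity(0.17)
t_compact = temp_at_viscosity(0.28)
print(f"mixing ~{t_mix:.0f} degC, compaction ~{t_compact:.0f} degC")
```

Since mixing requires a lower viscosity than compaction, the fitted mixing temperature always comes out higher than the compaction temperature.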
Procedia PDF Downloads 130
8575 Effect of Surfactant Level of Microemulsions and Nanoemulsions on Cell Viability
Authors: Sonal Gupta, Rakhi Bansal, Javed Ali, Reema Gabrani, Shweta Dang
Abstract:
Nanoemulsions (NEs) and microemulsions (MEs) have been an attractive tool for the encapsulation of both hydrophilic and lipophilic actives. Both these systems are composed of an oil phase, surfactant, co-surfactant and aqueous phase. Depending upon the application and intended use, both oil-in-water and water-in-oil emulsions can be designed. NEs are fabricated using high-energy methods employing a lower percentage of surfactant as compared to MEs, which are self-assembled drug delivery systems. Owing to the nanometric size of the droplets, these systems have been widely used to enhance the solubility and bioavailability of natural as well as synthetic molecules. The aim of the present study is to assess the effect of the percentage of surfactants on the cell viability of Vero cells (African Green Monkey kidney epithelial cells) via the MTT assay. Green tea catechin (Polyphenon 60) loaded ME employing low-energy vortexing and NE employing high-energy ultrasonication were prepared using the same excipients (labrasol as oil, cremophor EL as surfactant and glycerol as co-surfactant); however, the percentage of oil and surfactant needed to prepare the ME was higher as compared to the NE. These formulations along with their excipients (oilME = 13.3%, SmixME = 26.67%; oilNE = 10%, SmixNE = 13.52%) were added to Vero cells for 24 hrs. The tetrazolium dye, 3-(4,5-dimethylthiazol-2-yl)-2,5-diphenyltetrazolium bromide (MTT), is reduced by live cells, and this reaction is used as the end point to evaluate the cytotoxicity level of a test formulation. Results of the MTT assay indicated that oil at different percentages exhibited almost equal cell viability (oilME ≅ oilNE), while the surfactant mixture showed a significant difference in cell viability values (SmixME < SmixNE). Polyphenon 60 loaded ME and its placebo ME showed higher toxicity as compared to Polyphenon 60 loaded NE and its placebo NE, which can be attributed to the higher concentration of surfactants present in MEs.
Another probable reason for the high percentage cell viability of Polyphenon 60 loaded NE might be the effective release of Polyphenon 60 from the NE formulation, which helps sustain the Vero cells.
Keywords: cell viability, microemulsion, MTT, nanoemulsion, surfactants, ultrasonication
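The MTT readout described above reduces to a simple blank-corrected ratio against the untreated control. The numbers below are illustrative triplicates, not the study's measurements; the comparison merely mirrors the reported ME < NE viability trend.

```python
import numpy as np

def percent_viability(sample_abs, control_abs, blank_abs):
    """Cell viability from MTT absorbance readings (e.g. at 570 nm):
    blank-corrected sample signal relative to untreated control cells."""
    return 100.0 * (np.mean(sample_abs) - blank_abs) / (np.mean(control_abs) - blank_abs)

# Illustrative triplicate absorbance readings (not the study's data):
control = [0.82, 0.85, 0.80]     # untreated Vero cells
me_treated = [0.35, 0.33, 0.37]  # microemulsion-treated wells
ne_treated = [0.62, 0.60, 0.64]  # nanoemulsion-treated wells
blank = 0.05                     # medium + MTT, no cells

v_me = percent_viability(me_treated, control, blank)
v_ne = percent_viability(ne_treated, control, blank)
print(f"ME: {v_me:.1f}%  NE: {v_ne:.1f}%")
```

With these assumed readings the NE wells score markedly higher viability than the ME wells, consistent with the higher surfactant load of the ME formulation being more cytotoxic.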
Procedia PDF Downloads 440
8574 Study on Errors in Estimating the 3D Gaze Point for Different Pupil Sizes Using Eye Vergences
Authors: M. Pomianek, M. Piszczek, M. Maciejewski
Abstract:
Binocular eye tracking technology is increasingly being used in industry, entertainment and marketing analysis. In the case of virtual reality, eye tracking systems are already the basis for user interaction with the environment. In such systems, high accuracy in determining the user's eye fixation point is very important due to the specificity of the virtual reality head-mounted display (HMD). Often, however, there are unknown errors in the eye tracking technology used, as well as errors resulting from the positioning of the devices in relation to the user's eyes. However, can the virtual environment itself influence estimation errors? The paper presents mathematical analyses and empirical studies of the determination of the fixation point and of the errors resulting from the change in pupil size in response to the intensity of the displayed scene. The article covers both static laboratory tests and tests on a real user. Based on the research results, optimization solutions were proposed to reduce gaze estimation errors. The studies show that errors in estimating the fixation point can be minimized both by improving the pupil positioning algorithm in the video image and by using more precise methods to calibrate the eye tracking system in three-dimensional space.
Keywords: eye tracking, fixation point, pupil size, virtual reality
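Estimating a 3D gaze point from eye vergence amounts to intersecting the two gaze rays, which in practice (with noise) means finding the least-squares closest point to both lines. The geometry below (64 mm interpupillary distance, a target half a metre ahead) is a hypothetical setup, not the paper's apparatus.

```python
import numpy as np

def gaze_point_3d(p_left, d_left, p_right, d_right):
    """Least-squares 3D fixation point of two gaze rays p + t*d:
    minimizes the summed squared distance to both lines."""
    A = np.zeros((3, 3))
    b = np.zeros(3)
    for p, d in ((p_left, d_left), (p_right, d_right)):
        d = d / np.linalg.norm(d)
        proj = np.eye(3) - np.outer(d, d)  # projector orthogonal to the ray
        A += proj
        b += proj @ p
    return np.linalg.solve(A, b)

# Hypothetical geometry: eyes 64 mm apart, fixating a point 0.5 m ahead.
target = np.array([0.01, 0.0, 0.5])
eye_l = np.array([-0.032, 0.0, 0.0])
eye_r = np.array([0.032, 0.0, 0.0])
est = gaze_point_3d(eye_l, target - eye_l, eye_r, target - eye_r)
print(est)  # recovers the target exactly when the rays are noise-free
```

Perturbing the ray directions here (e.g. to mimic pupil-center drift with changing pupil size) is a quick way to see how small angular errors blow up with fixation distance, which is the error mechanism the study investigates.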
Procedia PDF Downloads 136
8573 Antecedents of Knowledge Sharing: Investigating the Influence of Knowledge Sharing Factors towards Postgraduate Research Supervision
Authors: Arash Khosravi, Mohamad Nazir Ahmad
Abstract:
Today’s economy is a knowledge-based economy in which knowledge is a crucial facilitator for individuals, as well as an instigator of success. Due to the impact of globalization, universities face new challenges and opportunities. Accordingly, they ought to be more innovative and develop their own competitive advantages. One of the most important goals of universities is the promotion of students as professional knowledge workers. Therefore, knowledge sharing and transfer at the tertiary level between students and supervisors is vital in universities, as it decreases the budget and provides an affordable way of doing research. Knowledge-sharing impact factors can be categorized into three groups, namely organizational, individual and technical factors. There are some individual barriers to knowledge sharing, namely lack of time and trust, lack of communication skills, and lack of social networks. IT systems such as e-learning, blogs and portals can increase knowledge sharing capability. However, it must be stated that IT systems are only tools and not solutions; individuals are still responsible for sharing information and knowledge. This paper proposes a new research model to examine the effect of individual factors and organizational factors, namely learning strategy, trust culture and supervisory support, as well as a technological factor, on knowledge sharing in the research supervision process at the University of Technology Malaysia.
Keywords: knowledge management, knowledge sharing, research supervision, knowledge transfer
Procedia PDF Downloads 453
8572 Edmonton Urban Growth Model as a Support Tool for the City Plan Growth Scenarios Development
Authors: Sinisa J. Vukicevic
Abstract:
Edmonton is currently one of the youngest North American cities and has achieved significant growth over the past 40 years. This strong urban shift requires a new approach to how the city is envisioned, planned, and built. This approach is evidence-based scenario development, and an urban growth model was a key support tool in framing Edmonton's development strategies, developing urban policies, and assessing policy implications. The urban growth model was developed using the Metronamica software platform. The Metronamica land use model evaluated the dynamics of land use change under the influence of key development drivers (population and employment), zoning, land suitability, and land and activity accessibility. The model was designed following the Big City Moves ideas: become greener as we grow, develop a rebuildable city, ignite a community of communities, foster a healing city, and create a city of convergence. The Big City Moves were converted into three development scenarios: ‘Strong Central City’, ‘Node City’, and ‘Corridor City’. Each scenario has a narrative story that expresses the scenario’s high-level goal, its approach to residential and commercial activities, its transportation vision, and its employment and environmental principles. Land use demand was calculated for each scenario according to specific density targets. Spatial policies were analyzed according to their level of importance within the policy set definition for the specific scenario, but also through the policy measures. The model was calibrated to reproduce the known historical land use pattern. For the calibration, we used 2006 and 2011 land use data. The validation was done independently, meaning we used data that were not used for the calibration: the model was validated with 2016 data.
In general, the modeling process contains three main phases: ‘from qualitative storyline to quantitative modelling’, ‘model development and model run’, and ‘from quantitative modelling to qualitative storyline’. The model also incorporates five spatial indicators: distance from residential to work, distance from residential to recreation, distance to the river valley, urban expansion, and habitat fragmentation. The major findings of this research can be viewed from two perspectives: the planning perspective and the technology perspective. The planning perspective evaluates the model as a tool for scenario development. Using the model, we explored the land use dynamics influenced by different sets of policies. The model enables a direct comparison between the three scenarios. We explored the similarities and differences of the scenarios and their quantitative indicators: land use change, population change (and spatial allocation), job allocation, density (population, employment, and dwelling units), habitat connectivity, proximity to objects of interest, etc. From the technology perspective, the model showed one very important characteristic: flexibility. The direction for policy testing changed many times during the consultation process, and the model's flexibility in accommodating all these changes was highly appreciated. The model satisfied our needs as a scenario development and evaluation tool, but also as a communication tool during the consultation process.
Keywords: urban growth model, scenario development, spatial indicators, Metronamica
Procedia PDF Downloads 97
8571 Self in Networks: Public Sphere in the Era of Globalisation
Authors: Sanghamitra Sadhu
Abstract:
A paradigm shift from capitalism to information technology is discerned in the era of globalisation. The idea of the public sphere, which was theorized in terms of its decline in the wake of the rise of commercial mass media, has now emerged as a transnational or global sphere whose discourse is dominated by the ‘network society’. In other words, the dynamic of globalisation has brought about ‘a spatial turn’ in the social and political sciences, which is also manifested in the public sphere, especially the global public sphere. The paper revisits the Habermasian concept of the public sphere and focuses on the various social networking sites and their plausibility for creating a virtual global public sphere. Situating Habermas’s notion of the bourgeois public sphere in the present context of the global public sphere, it considers the changing dimensions of the public sphere across time and examines the concept of the ‘public’ in its shifting transformation from a concrete collective to a fluid ‘imagined’ category. The paper addresses the problematic of multimodal self-portraiture on social networking sites as well as in various online diaries/journals, with an attempt to explore the nuances of the networked self.
Keywords: globalisation, network society, public sphere, self-fashioning, identity, autonomy
Procedia PDF Downloads 422
8570 Techno Economic Analysis of CAES Systems Integrated into Gas-Steam Combined Plants
Authors: Coriolano Salvini
Abstract:
The increasing utilization of renewable energy sources for electric power production calls for the introduction of energy storage systems to match the electric demand over time. Although many countries are pursuing a “decarbonized” electrical system as a final goal, in the coming decades traditional fossil-fuel-fed power plants will still play a relevant role in fulfilling the electric demand. Presently, such plants provide grid ancillary services (frequency control, grid balance, reserve, etc.) by adapting the output power to the grid requirements. An interesting option is the possibility of using traditional plants to improve the grid's storage capabilities. The present paper addresses small-to-medium size systems suited for distributed energy storage. The proposed Energy Storage System (ESS) is based on a Compressed Air Energy Storage (CAES) system integrated into a Gas-Steam Combined Cycle (GSCC) or a Gas Turbine based CHP plant. The system can be incorporated in a newly built plant or added to an already existing one. To avoid any geological restriction related to the availability of natural compressed air reservoirs, artificial storage is addressed. During the charging phase, electric power is absorbed from the grid by an electrically driven intercooled/aftercooled compressor. During the discharge phase, the stored compressed air is sent to a heat transfer device fed by hot gas taken upstream of the Heat Recovery Steam Generator (HRSG) and subsequently expanded for power production. To maximize the output power, a staged reheated expansion process is adopted. The specific power production per kilogram per second of exhaust gas used to heat the stored air is two to three times larger than that achieved if the gas were used to produce steam in the HRSG. As a result, a relevant power augmentation is attained with respect to normal GSCC plant operation without additional use of fuel.
Therefore, the excess output power can be considered “fuel free”, and the storage system can be compared to “pure” ESSs such as electrochemical, pumped hydro or adiabatic CAES systems. Representative cases featuring different power absorption, production capability, and storage capacity have been taken into consideration. For each case, a technical optimization aimed at maximizing the storage efficiency has been carried out. On the basis of the resulting storage pressure and volume, number of compression and expansion stages, air heater arrangement and process quantities found for each case, a cost estimation of the storage systems has been performed. Storage efficiencies from 0.6 to 0.7 have been assessed. Capital costs in the range of 400-800 €/kW and 500-1000 €/kWh have been estimated. Such figures are similar to or lower than those of alternative storage technologies.
Keywords: artificial air storage reservoir, compressed air energy storage (CAES), gas steam combined cycle (GSCC), techno-economic analysis
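The headline figures above (storage efficiency, €/kW, €/kWh) are simple ratios of per-cycle energies and capital cost. The sketch below shows the arithmetic on a hypothetical case chosen to land inside the paper's reported ranges; the input numbers are assumptions, not one of the paper's cases.

```python
def storage_metrics(e_charge_mwh, e_discharge_mwh, capex_eur, rated_power_mw):
    """Round-trip storage efficiency and specific capital costs of an
    energy storage system (illustrative figures, not the paper's cases)."""
    efficiency = e_discharge_mwh / e_charge_mwh          # energy out / energy in
    eur_per_kw = capex_eur / (rated_power_mw * 1000.0)   # cost per unit power
    eur_per_kwh = capex_eur / (e_discharge_mwh * 1000.0) # cost per unit energy
    return efficiency, eur_per_kw, eur_per_kwh

# A hypothetical case: 10 MWh absorbed per cycle, 6.5 MWh recovered,
# 5 MW rating, 3 MEUR capital cost.
eta, cpkw, cpkwh = storage_metrics(10.0, 6.5, 3.0e6, 5.0)
print(eta, cpkw, cpkwh)  # 0.65, 600.0 EUR/kW, ~462 EUR/kWh
```

These assumed inputs reproduce a point inside the reported bands (efficiency 0.6-0.7, 400-800 €/kW, 500-1000 €/kWh), which is useful as a sanity check when comparing the CAES scheme against other storage technologies.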
Procedia PDF Downloads 216
8569 Business Skills Laboratory in Action: Combining a Practice Enterprise Model and an ERP-Simulation to a Comprehensive Business Learning Environment
Authors: Karoliina Nisula, Samuli Pekkola
Abstract:
Business education has been criticized for being too theoretical and distant from business life. Different types of experiential learning environments, ranging from manual role-play to computer simulations and enterprise resource planning (ERP) systems, have been used to introduce realistic and practical experience into business learning. Each of these learning environments approaches business learning from a different perspective. The implementations tend to be individual exercises supplementing the traditional courses. We suggest combining them into a business skills laboratory resembling an actual workplace. In this paper, we present a concrete implementation of an ERP-supported business learning environment that is used throughout the first-year undergraduate business curriculum. We validate the implementation by evaluating the learning outcomes across the different domains of Bloom’s taxonomy, using the role-play oriented practice enterprise model as a comparison group. Our findings indicate that using the ERP simulation improves poor and average students’ lower-level cognitive learning. On the affective domain, the ERP simulation appears to enhance motivation to learn as well as the perceived acquisition of practical hands-on skills.
Keywords: business simulations, experiential learning, ERP systems, learning environments
Procedia PDF Downloads 264
8568 Validation and Fit of a Biomechanical Bipedal Walking Model for Simulation of Loads Induced by Pedestrians on Footbridges
Authors: Dianelys Vega, Carlos Magluta, Ney Roitman
Abstract:
The simulation of loads induced by walking people on civil engineering structures is still challenging. It has been the focus of considerable research worldwide in recent decades due to the increasing number of reported vibration problems in pedestrian structures. One of the most important issues in the design of slender structures is Human-Structure Interaction (HSI): how moving people interact with structures, and the effect this has on their dynamic responses, is still not well understood. Relying on calibrated pedestrian models that accurately estimate the structural response therefore becomes extremely important. However, because of the complexity of the pedestrian mechanisms, there are still some gaps in knowledge, and more reliable models need to be investigated. On this topic, several authors have proposed biodynamic models to represent the pedestrian; whether these models provide a consistent approximation to physical reality still needs to be studied. Therefore, this work contributes to a better understanding of this phenomenon by bringing an experimental validation of a pedestrian walking model and a Human-Structure Interaction model. In this study, a bi-dimensional bipedal walking model was used to represent the pedestrians, along with an interaction model, which was applied to a prototype footbridge. The numerical models were implemented in MATLAB. In parallel, experimental tests were conducted in the Structures Laboratory of COPPE (LabEst), at the Federal University of Rio de Janeiro. Different test subjects were asked to walk at different walking speeds over instrumented force platforms to measure the walking force, and an accelerometer was placed at the waist of each subject to measure the acceleration of the center of mass at the same time. By fitting the step force and the center of mass acceleration through successive numerical simulations, the model parameters were estimated.
In addition, experimental data of a pedestrian walking on a flexible structure were used to validate the interaction model presented, through comparison of the measured and simulated structural response at mid-span. It was found that the pedestrian model was able to adequately reproduce the ground reaction force and the center of mass acceleration for normal and slow walking speeds, being less efficient for faster speeds. Numerical simulations showed that biomechanical parameters such as leg stiffness and damping affect the ground reaction force, and that the higher the walking speed, the greater the leg length of the model. Besides, the interaction model was also capable of estimating, with good approximation, the structural response, which remained in the same order of magnitude as the measured response. Some differences in the frequency spectra were observed, which are presumed to be due to the perfectly periodic loading representation, which neglects intra-subject variability. In conclusion, this work showed that the bipedal walking model can be used to represent walking pedestrians, since it efficiently reproduces the center of mass movement and the ground reaction forces produced by humans. Furthermore, although more experimental validation is required, the interaction model also seems to be a useful framework to estimate the dynamic response of structures under loads induced by walking pedestrians.
Keywords: biodynamic models, bipedal walking models, human induced loads, human structure interaction
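The role of leg stiffness and damping in shaping the ground reaction force can be illustrated with a much cruder model than the paper's bipedal one: a single vertical spring-damper leg under a lumped body mass during stance. All parameter values below are assumptions for demonstration, and the model is a toy stand-in, not the authors' MATLAB implementation.

```python
# Minimal vertical spring-damper leg model of a stance phase: a toy
# stand-in for the paper's bipedal model, with assumed parameters.
m, g = 75.0, 9.81        # pedestrian mass (kg), gravity (m/s^2)
k, c = 20000.0, 300.0    # leg stiffness (N/m), leg damping (N.s/m)
L0 = 1.0                 # uncompressed leg length (m)

dt, t_end = 1e-4, 2.0
y, v = L0, -0.3          # touchdown: CoM at leg length, moving downward
grf = []
for _ in range(int(t_end / dt)):
    compression = max(L0 - y, 0.0)
    f_leg = k * compression - c * v if compression > 0.0 else 0.0
    f_leg = max(f_leg, 0.0)       # the ground can only push, not pull
    grf.append(f_leg)
    a = f_leg / m - g             # vertical CoM acceleration
    v += a * dt                   # semi-implicit Euler step
    y += v * dt

peak_grf = max(grf)
print(f"peak ground reaction force ~ {peak_grf:.0f} N (body weight {m * g:.0f} N)")
```

Even in this reduced setting, increasing `k` sharpens and raises the force peak while increasing `c` damps the oscillation toward static body weight, which mirrors the sensitivity to leg stiffness and damping reported above.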
Procedia PDF Downloads 135
8567 A Geometrical Method for the Smoluchowski Equation on the Sphere
Authors: Adriano Valdes-Gomez, Francisco Javier Sevilla
Abstract:
We devise a numerical algorithm to simulate the diffusion of a Brownian particle restricted to the surface of a three-dimensional sphere when the particle is under the effects of an external potential that is coupled linearly. The algorithm is obtained using elementary geometry, yet it converges, in the weak sense, to the solutions of the Smoluchowski equation. Rotations on the sphere, which are the analogs of linear displacements in Euclidean spaces, are calculated using algebraic operations followed by a proper scaling, which makes the algorithm efficient and quite simple, especially compared to the short-time propagator approach. Our findings prove that the global effects of curvature are taken into account in both dynamic and stationary processes; the method is not restricted to configuration space, nor to the overdamped limit. We have generalized it successfully to simulate the Kramers or Ornstein-Uhlenbeck process, where it is necessary to work directly in phase space, and it may be adapted to other two-dimensional surfaces with non-constant curvature.
Keywords: diffusion on the sphere, Fokker-Planck equation on the sphere, non-equilibrium processes on the sphere, numerical methods for diffusion on the sphere
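The geometric idea of realizing Brownian displacements as rotations can be sketched as follows: draw a Gaussian displacement in the tangent plane at the current point and apply it as a Rodrigues rotation, so the particle stays exactly on the sphere. This is only a free-diffusion illustration of the rotation step; the paper's algorithm additionally handles the linearly coupled external potential and its convergence scaling.

```python
import numpy as np

rng = np.random.default_rng(1)

def rotate(r, axis, angle):
    """Rodrigues rotation of vector r about a unit axis."""
    return (r * np.cos(angle)
            + np.cross(axis, r) * np.sin(angle)
            + axis * np.dot(axis, r) * (1.0 - np.cos(angle)))

def brownian_step(r, dt, diffusion=1.0):
    """One free-diffusion step on the unit sphere: draw a Gaussian
    displacement in the tangent plane at r and realize it as a rotation."""
    xi = rng.normal(size=3)
    t = xi - np.dot(xi, r) * r          # project noise onto the tangent plane
    t /= np.linalg.norm(t)
    angle = np.sqrt(2.0 * diffusion * dt) * abs(rng.normal())
    axis = np.cross(r, t)               # unit axis, since r and t are orthonormal
    return rotate(r, axis, angle)

r = np.array([0.0, 0.0, 1.0])
for _ in range(10000):
    r = brownian_step(r, dt=1e-4)
print(np.linalg.norm(r))  # stays on the sphere by construction
```

Unlike a naive "displace then renormalize" scheme, the rotation step never leaves the manifold, which is the property that makes the curvature effects come out right in both dynamic and stationary statistics.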
Procedia PDF Downloads 185
8566 Simple Rheological Method to Estimate the Branch Structures of Polyethylene under Reactive Modification
Authors: Mahdi Golriz
Abstract:
The aim of this work is to show that the change in the molecular structure of linear low-density polyethylene (LLDPE) during peroxide modification can be detected by a simple rheological method. For this purpose, a commercial grade LLDPE (ExxonMobil™ LL4004EL) was reacted with different doses of dicumyl peroxide (DCP). The samples were analyzed by size-exclusion chromatography coupled with a light scattering detector. The dynamic shear oscillatory measurements showed a deviation of the δ-|G*| curve from that of the linear LLDPE, which can be attributed to the presence of long-chain branching (LCB). By the use of a simple rheological method based on melt rheology, the transformations in molecular architecture induced in an originally linear low-density polyethylene during the early stages of reactive modification were indicated. Reasonable and consistent estimates were obtained concerning the degree of LCB and the volume fractions of the various molecular species produced in the peroxide modification of LLDPE.
Keywords: linear low-density polyethylene, peroxide modification, long-chain branching, rheological method
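The δ-|G*| (van Gurp-Palmen) coordinates used above come directly from the storage and loss moduli of a frequency sweep. The sweep data below are illustrative numbers, not the paper's measurements; the point is only the coordinate transformation.

```python
import numpy as np

def van_gurp_palmen(g_storage, g_loss):
    """Phase angle delta (deg) and complex modulus magnitude |G*| (Pa)
    from dynamic shear data -- the coordinates of the delta-|G*| plot
    used to flag long-chain branching."""
    g_star = np.hypot(g_storage, g_loss)              # |G*| = sqrt(G'^2 + G''^2)
    delta = np.degrees(np.arctan2(g_loss, g_storage)) # tan(delta) = G''/G'
    return delta, g_star

# Illustrative frequency sweep: G' and G'' in Pa (low to high frequency).
g_p = np.array([1.0e2, 1.0e3, 1.0e4])
g_pp = np.array([1.0e3, 5.0e3, 2.0e4])
delta, g_star = van_gurp_palmen(g_p, g_pp)
print(delta)  # a linear melt approaches 90 deg at low |G*|;
              # a systematic drop in delta signals long-chain branching
```

Plotting δ against |G*| removes the explicit frequency axis, which is why the deviation of the modified samples' curves from the linear LLDPE reference is such a sensitive LCB indicator.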
Procedia PDF Downloads 160
8565 Seismic Response of Reinforced Concrete Buildings: Field Challenges and Simplified Code Formulas
Authors: Michel Soto Chalhoub
Abstract:
Building code-related literature provides recommendations on normalizing approaches to the calculation of the dynamic properties of structures. Most building codes make a distinction among types of structural systems, construction materials, and configurations through a numerical coefficient in the expression for the fundamental period. The period is then used in normalized response spectra to compute the base shear. The typical parameter used in simplified code formulas for the fundamental period is the overall building height raised to a power determined from analytical and experimental results. However, reinforced concrete buildings, which constitute the majority of built space in less developed countries, pose additional challenges compared with buildings made of a homogeneous material such as steel, or of concrete produced under stricter quality control. In the present paper, the particularities of reinforced concrete buildings are explored and related to current methods of equivalent static analysis. A comparative study is presented between the Uniform Building Code, commonly used for buildings within and outside the USA, and data from the Middle East used to model 151 reinforced concrete buildings of varying number of bays, number of floors, overall building height, and individual story height. The fundamental period was calculated using eigenvalue matrix computation. The results were also used in a separate regression analysis where the computed period serves as the dependent variable, while five building properties serve as independent variables. The statistical analysis shed light on important parameters that simplified code formulas need to account for, including individual story height, overall building height, floor plan, number of bays, and concrete properties. Such inclusions are important for reinforced concrete buildings in special conditions due to the level of concrete damage, aging, or materials quality control during construction.
Overall, the results of the present analysis show that simplified code formulas for the fundamental period and base shear may be applied, but they require revisions to account for multiple parameters. This conclusion is confirmed by the analytical model, where fundamental periods were computed using numerical techniques and eigenvalue solutions. This recommendation is particularly relevant to code upgrades in less developed countries, where it is customary to adopt, and mildly adapt, international codes. We also note the necessity of further research using empirical data from buildings in Lebanon that were subjected to severe damage due to impulse loading or accelerated aging. However, we excluded this study from the present paper and left it for future research, as it has its own peculiarities and requires a different type of analysis.
Keywords: seismic behaviour, reinforced concrete, simplified code formulas, equivalent static analysis, base shear, response spectra
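The eigenvalue computation of the fundamental period referred to above can be sketched for the common shear-building idealization: lumped floor masses and inter-story stiffnesses assembled into mass and stiffness matrices, with the period taken from the smallest eigenvalue. The 5-story masses and stiffnesses below are assumed illustrative values, not one of the 151 modeled buildings.

```python
import numpy as np

def fundamental_period(masses, stiffnesses):
    """Fundamental period of a shear-building idealization: lumped floor
    masses (kg) and inter-story stiffnesses (N/m), solved as the
    eigenvalue problem K phi = omega^2 M phi."""
    n = len(masses)
    K = np.zeros((n, n))
    for i, k in enumerate(stiffnesses):
        # story spring i connects floor i to the floor below (or the ground)
        K[i, i] += k
        if i > 0:
            K[i - 1, i - 1] += k
            K[i, i - 1] -= k
            K[i - 1, i] -= k
    # symmetric eigenproblem via M^{-1/2} K M^{-1/2}
    m_inv_sqrt = np.diag(1.0 / np.sqrt(masses))
    omega_sq = np.linalg.eigvalsh(m_inv_sqrt @ K @ m_inv_sqrt)
    return 2.0 * np.pi / np.sqrt(omega_sq[0])  # T1 = 2*pi / omega_1

# Illustrative 5-story RC frame: 300 t per floor, 2.5e8 N/m per story.
T1 = fundamental_period([3.0e5] * 5, [2.5e8] * 5)
print(f"T1 ~ {T1:.2f} s")
```

Comparing such eigenvalue periods against the code's height-power formula, story by story and bay by bay, is exactly the kind of exercise that exposes the missing parameters (story height, floor plan, concrete condition) the regression analysis identifies.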
Procedia PDF Downloads 233
8564 Simulation of Stretching and Fragmenting DNA by Microfluidic for Optimizing Microfluidic Devices
Authors: Shuyi Wu, Chuang Li, Quanshui Zheng, Luping Xu
Abstract:
Stretching and snipping DNA molecules by microfluidics has important application value in gene analysis by lab-on-a-chip. Movement, deformation and fragmenting of DNA in microfluidics are typical fluid-solid coupling problems. An efficient and common simulation system for researching the movement, deformation and fragmenting of DNA by microfluidics has not been well developed. In our study, the Brownian dynamics-finite element method (BD-FEM) is used to simulate the dynamic process of stretching and fragmenting DNA by contraction flow. The shape and parameters of the micro-channels are varied to optimize the stretching and fragmenting properties of DNA. Our results indicate that the strain rate, resulting from the contraction microchannel, is the main control parameter for stretching and fragmenting DNA. There is good consistency between the simulation data and previous experimental results on single DNA molecule behavior and averaged fragmenting properties. BD-FEM is an efficient calculation tool for researching the stretching and fragmenting behavior of single DNA molecules and for optimizing microfluidic devices for manipulating, stretching and fragmenting DNA.
Keywords: fragmenting, DNA, microfluidic, optimization
Procedia PDF Downloads 332
8563 Drinking Water Quality Assessment Using Fuzzy Inference System Method: A Case Study of Rome, Italy
Authors: Yas Barzegar, Atrin Barzegar
Abstract:
Drinking water quality assessment is a major issue today; technology and practices are continuously improving, and Artificial Intelligence (AI) methods are proving their efficiency in this domain. The current research develops a hierarchical fuzzy model for predicting drinking water quality in Rome (Italy). The Mamdani fuzzy inference system (FIS) is applied with different defuzzification methods. The proposed model includes three intermediate fuzzy models and one final fuzzy model. Each fuzzy model consists of three input parameters and 27 fuzzy rules. The model is developed for water quality assessment with a dataset considering nine parameters (alkalinity, hardness, pH, Ca, Mg, fluoride, sulphate, nitrates, and iron). Fuzzy-logic-based methods have been demonstrated to be appropriate for addressing uncertainty and subjectivity in drinking water quality assessment, and they are effective for managing complicated, uncertain water systems and predicting drinking water quality. The FIS method can provide an effective solution for complex systems, and the model can be modified easily to improve performance.
Keywords: water quality, fuzzy logic, smart cities, water attribute, fuzzy inference system, membership function
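The Mamdani pipeline described above (fuzzification, min/max inference, aggregation, defuzzification) can be sketched with two of the nine parameters. The membership ranges and the two rules below are illustrative assumptions, not the paper's 27-rule base, and the quality score is a made-up 0-100 universe.

```python
import numpy as np

def tri(x, a, b, c):
    """Triangular membership function with feet a, c and peak b."""
    return np.maximum(np.minimum((x - a) / (b - a), (c - x) / (c - b)), 0.0)

def mamdani_quality(hardness, nitrates):
    """Two-input, single-output Mamdani FIS sketch: min inference,
    max aggregation, centroid defuzzification."""
    # fuzzify the inputs (illustrative ranges, mg/L)
    hard_low = tri(hardness, -1, 0, 150)
    hard_high = tri(hardness, 100, 300, 500)
    nit_low = tri(nitrates, -1, 0, 25)
    nit_high = tri(nitrates, 20, 50, 80)

    # output universe: quality score 0..100
    q = np.linspace(0.0, 100.0, 1001)
    good = tri(q, 50, 100, 150)
    poor = tri(q, -50, 0, 50)

    # rules: IF hardness low AND nitrates low THEN good;
    #        IF hardness high OR nitrates high THEN poor.
    w_good = min(hard_low, nit_low)
    w_poor = max(hard_high, nit_high)

    # clip each consequent, aggregate with max, defuzzify by centroid
    agg = np.maximum(np.minimum(good, w_good), np.minimum(poor, w_poor))
    return np.sum(q * agg) / np.sum(agg)

q_good = mamdani_quality(hardness=80.0, nitrates=5.0)
q_bad = mamdani_quality(hardness=350.0, nitrates=60.0)
print(q_good, q_bad)
```

The full hierarchical model chains three such intermediate FIS blocks into a final one, so each block stays small (three inputs, 27 rules) while the whole system covers all nine parameters.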
Procedia PDF Downloads 81
8562 Dynamic Model of Heterogeneous Markets with Imperfect Information for the Optimization of Company's Long-Time Strategy
Authors: Oleg Oborin
Abstract:
This paper is dedicated to the development of a model which can be used to evaluate the effectiveness of long-term corporate strategies and identify the best strategies. A theoretical model of a relatively homogeneous product market (such as the iron and steel industry, mobile services or road transport) has been developed. In the model, the market consists of a large number of companies with different internal characteristics and objectives. The companies can perform mergers and acquisitions in order to increase their market share. The model allows the simulation of the long-term dynamics of the market (for a period longer than 20 years). A large number of simulations on random input data was therefore conducted within the framework of the model. The results of the model were then compared with the dynamics of real markets, such as the US steel industry from the beginning of the XX century to the present day, and the market of mobile services in Germany for the period between 1990 and 2015.
Keywords: economic modelling, long-time strategy, mergers and acquisitions, simulation
Procedia PDF Downloads 371
8561 Proteomic Analysis of Excretory Secretory Antigen (ESA) from Entamoeba histolytica HM1: IMSS
Authors: N. Othman, J. Ujang, M. N. Ismail, R. Noordin, B. H. Lim
Abstract:
Amoebiasis is caused by Entamoeba histolytica and is still endemic in many parts of the tropical regions worldwide. Currently, there is no vaccine available against amoebiasis; hence, there is an urgent need to develop one. The excretory secretory antigen (ESA) of E. histolytica is a suitable vaccine candidate since it can modulate the host immune response. The objective of this study was therefore to identify the proteome of the ESA in the search for suitable vaccine candidates. Non-gel-based and gel-based proteomics analyses were performed to identify proteins, using two kinds of mass spectrometry with different ionization systems, i.e., LC-MS/MS (ESI) and MALDI-TOF/TOF. Functional protein classification analysis was then performed using PANTHER software. The combination of LC-MS/MS for the non-gel-based approach and MALDI-TOF/TOF for the gel-based approach identified a total of 273 proteins from the ESA. The two systems identified 29 proteins in common, while 239 and 5 further proteins were identified only by LC-MS/MS and MALDI-TOF/TOF, respectively. Functional classification analysis showed that the majority of proteins are involved in the metabolic process (24%), the primary metabolic process (19%), and the protein metabolic process (10%). Thus, this study has revealed the proteome of the E. histolytica ESA, and the identified proteins merit further investigation as vaccine candidates.
Keywords: E. histolytica, ESA, proteomics, biomarker
Procedia PDF Downloads 346
8560 Development of Tools for Multi Vehicles Simulation with Robot Operating System and ArduPilot
Authors: Pierre Kancir, Jean-Philippe Diguet, Marc Sevaux
Abstract:
One of the main difficulties in developing multi-robot systems (MRS) lies in the simulation and testing tools available. Indeed, if the differences between simulations and real robots are too significant, the transition from simulation to robot will not be possible without another long development phase and will not allow the simulation to be validated. Moreover, testing different algorithmic solutions or modifications of robots requires strong knowledge of the current tools and significant development time. Therefore, the availability of tools for MRS, mainly with flying drones, is crucial to enable the industrial emergence of these systems. This research presents the most commonly used tools for MRS simulation and their main shortcomings, and proposes complementary tools to improve the productivity of designers in the development of multi-vehicle solutions, focused on a fast learning curve and a rapid transition from simulation to real usage. The proposed contributions are based on existing open-source tools such as the Gazebo simulator combined with ROS (Robot Operating System) and the open-source multi-platform autopilot ArduPilot, in order to bring them to a broad audience.
Keywords: ROS, ArduPilot, MRS, simulation, drones, Gazebo
Procedia PDF Downloads 215
8559 A Novel Way to Create Qudit Quantum Error Correction Codes
Authors: Arun Moorthy
Abstract:
Quantum computing promises to provide algorithmic speedups for a number of tasks; however, as in classical computing, effective error-correcting codes are needed. Current quantum computers require costly equipment to control each particle, so having fewer particles to control is ideal. Although traditional quantum computers are built using qubits (2-level systems), qudits (systems with more than 2 levels) are appealing since they can offer an equivalent computational space using fewer particles, meaning fewer particles need to be controlled. Qudit quantum error-correction codes are currently available for qudit systems of different levels; however, these codes sometimes have overly specific constraints. When building a qudit system, it is important for researchers to have access to many codes that satisfy their requirements. This project addresses two methods of increasing the number of quantum error-correction codes available to researchers. The first method is generating new codes for a given set of parameters. The second is generating new error-correction codes by using existing codes as a starting point to generate codes for another level (e.g., a 5-level system code based on a 2-level system code). To this end, the project builds a website that researchers can use to generate new error-correction codes or codes based on existing ones.
Keywords: qudit, error correction, quantum, qubit
Procedia PDF Downloads 165
8558 Effective Planning of Public Transportation Systems: A Decision Support Application
Authors: Ferdi Sönmez, Nihal Yorulmaz
Abstract:
Deciding on the proper planning of public transportation systems to serve potential users is a must for metropolitan areas. To attract travelers to the projected modes of transport, adequately competitive overall travel times should be provided. In this fashion, other benefits may be gained, such as lower traffic congestion, improved road safety, and reduced noise and atmospheric pollution. The congestion that comes with the increasing demand for public transportation is becoming a part of our lives and making residents' lives difficult; hence, regulations should be introduced to reduce it. To provide a constructive and balanced regulation of public transportation systems, the right stations should be located in the right places. This study aims to design and implement a Decision Support System (DSS) application to determine the optimal bus stop locations for public transport in Istanbul, one of the biggest and oldest cities in the world. The required information was gathered from IETT (Istanbul Electricity, Tram and Tunnel) Enterprises, which manages all public transportation services in the Istanbul Metropolitan Area. Cost assignments are made using the most realistic values available; the cost is calculated with the help of equations produced by a bi-level optimization model. For this study, 300 buses, 300 drivers, 10 lines, and 110 stops are used. The user cost of each station and the operator cost incurred on each line are calculated. Components such as cost, security, and noise pollution are considered significant factors affecting the solution of the set covering problem, which is formulated to identify and locate the minimum number of possible bus stops. Preliminary research and model development for this study refer to a previously published article by the corresponding author.
Model results are presented with the intent of providing decision support to specialists on locating stops effectively.
Keywords: operator cost, bi-level optimization model, user cost, urban transportation
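The set covering problem mentioned above admits a classic greedy approximation, sketched here. The stop identifiers and coverage sets are invented for illustration; the paper's model additionally weighs user and operator costs through the bi-level formulation.

```python
def greedy_set_cover(demand_points, candidate_stops):
    """Greedy approximation to set covering: repeatedly choose the stop
    that covers the most still-uncovered demand points, until every
    point is covered. candidate_stops maps stop id -> set of points."""
    uncovered = set(demand_points)
    chosen = []
    while uncovered:
        best = max(candidate_stops,
                   key=lambda s: len(candidate_stops[s] & uncovered))
        if not candidate_stops[best] & uncovered:
            break  # remaining points cannot be covered by any stop
        chosen.append(best)
        uncovered -= candidate_stops[best]
    return chosen

# Hypothetical instance: 5 demand points, 4 candidate stops.
stops = greedy_set_cover(
    demand_points=[1, 2, 3, 4, 5],
    candidate_stops={"A": {1, 2, 3}, "B": {3, 4}, "C": {4, 5}, "D": {1, 5}},
)
```

On this instance the greedy rule selects stops "A" then "C", covering all five points with two stops.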
Procedia PDF Downloads 250
8557 Dynamical Models for Environmental Effect Depuration for Structural Health Monitoring of Bridges
Authors: Francesco Morgan Bono, Simone Cinquemani
Abstract:
This research aims to enhance bridge monitoring by employing innovative techniques that incorporate exogenous factors into the modeling of sensor signals, thereby improving long-term predictability beyond traditional static methods. Using real datasets from two different bridges equipped with Linear Variable Displacement Transducer (LVDT) sensors, the study investigates the fundamental principles governing sensor behavior to obtain more precise long-term forecasts. Additionally, the research evaluates performance on noisy and synthetically damaged data, proposing a residual-based alarm system to detect anomalies in the bridge. In summary, this novel approach combines advanced modeling, exogenous factors, and anomaly detection to extend prediction horizons and improve preemptive damage recognition, significantly advancing structural health monitoring practice.
Keywords: structural health monitoring, dynamic models, SINDy, railway bridges
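A minimal version of the residual-based alarm idea can be sketched with a static exogenous-input model: fit the healthy sensor response to temperature, then flag measurements whose residual leaves the healthy band. The linear model, coefficients, and synthetic data below are illustrative stand-ins for the paper's dynamic (SINDy-style) models and real LVDT records.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic healthy data: LVDT displacement driven by an exogenous
# temperature signal plus measurement noise (invented coefficients).
temp_train = rng.uniform(-5, 30, 500)
disp_train = 0.08 * temp_train + 1.2 + rng.normal(0, 0.02, 500)

# Fit a simple exogenous-input model on the healthy training data.
slope, intercept = np.polyfit(temp_train, disp_train, 1)
resid = disp_train - (slope * temp_train + intercept)
threshold = 3.0 * resid.std()  # residual-based alarm band (3-sigma)

def alarm(temp, disp):
    """Flag a measurement whose residual exceeds the healthy band."""
    return abs(disp - (slope * temp + intercept)) > threshold

healthy = alarm(20.0, 0.08 * 20.0 + 1.2)         # nominal reading
damaged = alarm(20.0, 0.08 * 20.0 + 1.2 + 0.5)   # synthetic offset fault
```

Replacing the linear fit with a dynamic model of the sensor signal extends the same residual test to the longer prediction horizons the abstract targets.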
Procedia PDF Downloads 49
8556 Adsorption Kinetics and Equilibria at an Air-Liquid Interface of Biosurfactant and Synthetic Surfactant
Authors: Sagheer A. Onaizi
Abstract:
The adsorption of an anionic biosurfactant (surfactin) and an anionic synthetic surfactant (sodium dodecylbenzenesulphonate, abbreviated as SDOBS) from a phosphate buffer containing high concentrations of co- and counter-ions to the air-buffer interface has been investigated. The self-assembly of the two surfactants at the interface has been monitored through dynamic surface tension measurements. The equilibrium surface pressure-surfactant concentration data in the premicellar region were regressed using the Gibbs adsorption equation. The predicted surface saturations for SDOBS and surfactin are and, respectively. The occupied area per an SDOBS molecule at the interface saturation condition is while that occupied by a surfactin molecule is. The surface saturations reported in this work for both surfactants are in very good agreement with those obtained using expensive techniques such as neutron reflectometry, suggesting that surface tension measurements coupled with appropriate theoretical analysis can provide useful information comparable to that obtained with highly sophisticated techniques.
Keywords: adsorption, air-liquid interface, biosurfactant, surface tension
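For reference, the premicellar regression referred to above is the standard Gibbs adsorption analysis; under the swamping-electrolyte conditions of the phosphate buffer (high co- and counter-ion concentration), the prefactor n is commonly taken as 1 for an ionic surfactant.

```latex
% Gibbs adsorption isotherm applied to the premicellar surface
% tension data (\gamma versus surfactant concentration C):
\Gamma_{\max} = -\frac{1}{nRT}
    \left(\frac{\partial \gamma}{\partial \ln C}\right)_{T}
% Area occupied per adsorbed molecule at surface saturation:
A_{\min} = \frac{1}{N_{A}\,\Gamma_{\max}}
```

Here Γmax is the surface saturation obtained from the limiting slope, R the gas constant, T the absolute temperature, and N_A Avogadro's number.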
Procedia PDF Downloads 716
8555 Management and Conservation of Crop Biodiversity in Karnali Mountains of Nepal
Authors: Chhabi Paudel
Abstract:
The food and nutrition security of the people of the mountains of Karnali province, Nepal, depends on traditional crop biodiversity. The altitude of the study area ranges from 1800 to 2700 meters above sea level, and the climate is temperate to alpine. Farmers have adopted subsistence-oriented diversified farming systems and have selected crop species, cultivars, and local production systems through their own long adaptation mechanisms. The major crop species are finger millet, proso millet, foxtail millet, potato, barley, wheat, mountain rice, buckwheat, amaranths, medicinal plants, and many vegetable species. The genetic and varietal diversity of these underutilized indigenous crops is also very high, which has sustained farming even through adverse climatic events. Biodiversity provides production synergy, inputs, and other agro-ecological services for self-sustainability. However, population growth and urban accessibility are seen as threats to biodiversity conservation, so integrated conservation measures are suggested, including agro-tourism and other monetary benefits for the farmers who conserve the local biodiversity.
Keywords: crop biodiversity, climate change, in-situ conservation, resilience, sustainability, agrotourism
Procedia PDF Downloads 105
8554 The Influence of the State on the Internal Governance of Universities: A Comparative Study of Quebec (Canada) and Western Systems
Authors: Alexandre Beaupré-Lavallée, Pier-André Bouchard St-Amant, Nathalie Beaulac
Abstract:
The question of the internal governance of universities is a political and scientific debate in the province of Quebec (Canada). Governments have called or set up inquiries on the subject on three separate occasions since the complete overhaul of the educational system in the 1960s: the Parent Commission (1967), the Angers Commission (1979), and the Summit on Higher Education (2013). All three produced reports that highlight the constant tug-of-war for authority and legitimacy within universities. Past and current research covering Quebec universities has studied several aspects of internal governance: the structure as a whole or only some of its parts, the importance of key aspects such as collegiality or strategic planning, or of stakeholders such as students or administrators. External governance has also been studied, though, as with internal governance, research so far has only covered well-delineated topics such as financing policies or the overall impact of wider societal changes such as New Public Management. The latter, NPM, is often brought up as a factor that has influenced overall State policies like "steering-at-a-distance" or internal shifts towards "managerialism". Yet, to the authors' knowledge, no study specifically maps how the Quebec State formally influences internal governance. In addition, most studies of the Quebec university system are not comparative in nature. This paper presents a portion of the results produced by a 2022-2023 study that aims at filling these last two gaps in knowledge. Building on existing governmental, institutional, and scientific papers, we documented the legal and regulatory framework of the Quebec university system and of twenty-one other university systems in North America and Europe (2 in Canada, 2 in the USA, 16 in Europe, with the addition of the European Union as a distinct case).
This allowed us to map the presence (or absence) of mandatory structures of governance enforced by States, as well as their composition. Then, using Clark's "triangle of coordination", we analyzed each system to assess the relative influence of the market, the State, and the collegium upon the governance model in place. Finally, we compared all 21 non-Quebec systems to characterize the province's policies from an internal perspective. Preliminary findings are twofold. First, when all systems are placed on a continuum ranging from "no State interference in internal governance" to "State-run universities", Quebec comes in the middle of the pack, albeit with a slight lean towards institutional freedom. When it comes to overall governance (such as boards and senates), the dual nature of the Quebec system, with its public university and its co-opted yet historically private (or ecclesiastic) institutions, in fact mimics the duality of all university systems. Second, however, is the sheer abundance of legal and regulatory mandates from the State that, while not expressly addressing internal governance, seem to require de facto modification of internal governance structures and dynamics to ensure institutional conformity with said mandates. This study is only a fraction of the research needed to better understand State-university interactions regarding governance; we hope it will set the stage for future studies.
Keywords: internal governance, legislation, Quebec, universities
Procedia PDF Downloads 88
8553 Portable System for the Acquisition and Processing of Electrocardiographic Signals to Obtain Different Metrics of Heart Rate Variability
Authors: Daniel F. Bohorquez, Luis M. Agudelo, Henry H. León
Abstract:
Heart rate variability (HRV) is defined as the temporal variation between heartbeats, or RR intervals (the distance between R waves in an electrocardiographic signal). This distance is currently a recognized biomarker; its analysis makes it possible to assess the sympathetic and parasympathetic nervous systems, which are responsible for the regulation of the cardiac muscle, and allows health specialists and researchers to diagnose various pathologies. For the acquisition and analysis of HRV taken from a cardiac electrical signal, electronic equipment and analysis software that work independently of each other are currently used. This complicates and delays the process of interpretation and diagnosis; with this delay, the health condition of patients can be put at greater risk, which can lead to untimely treatment. This document presents a single portable device capable of acquiring electrocardiographic signals and calculating a total of 19 HRV metrics, reducing the time required and resulting in timelier intervention. The device has an electrocardiographic signal acquisition card attached to a microcontroller capable of transmitting the cardiac signal wirelessly to a mobile device. In addition, a mobile application was designed to analyze the cardiac waveform; the device calculates the RR intervals and the different metrics, and the application allows a user to visualize the cardiac signal and the 19 metrics in real time. The information is exported to a cloud database for remote analysis. The study was performed under controlled conditions in the simulated hospital of the Universidad de la Sabana, Colombia. A total of 60 signals were acquired and analyzed, and the device was compared against two reference systems. The results show a strong level of correlation (r > 0.95, p < 0.05) between the 19 metrics compared.
Therefore, the use of the portable system, evaluated in clinical scenarios controlled by medical specialists and researchers, is recommended for the evaluation of the condition of the cardiac system.
Keywords: biological signal analysis, heart rate variability (HRV), HRV metrics, mobile app, portable device
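Three of the classic time-domain metrics a device like this reports (SDNN, RMSSD, pNN50) can be computed directly from a list of RR intervals. The RR values below are invented for illustration, and the sketch covers only a small subset of the 19 metrics mentioned above.

```python
import math

def hrv_metrics(rr_ms):
    """Time-domain HRV metrics from RR intervals in milliseconds."""
    n = len(rr_ms)
    mean_rr = sum(rr_ms) / n
    # SDNN: sample standard deviation of all RR intervals.
    sdnn = math.sqrt(sum((x - mean_rr) ** 2 for x in rr_ms) / (n - 1))
    diffs = [rr_ms[i + 1] - rr_ms[i] for i in range(n - 1)]
    # RMSSD: root mean square of successive differences.
    rmssd = math.sqrt(sum(d * d for d in diffs) / len(diffs))
    # pNN50: percentage of successive differences larger than 50 ms.
    pnn50 = 100.0 * sum(1 for d in diffs if abs(d) > 50) / len(diffs)
    return {"mean_rr": mean_rr, "sdnn": sdnn, "rmssd": rmssd, "pnn50": pnn50}

m = hrv_metrics([812, 790, 855, 801, 870, 795, 842])
```

In practice such metrics are computed over much longer recordings (minutes to 24 hours) than this seven-beat example.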
Procedia PDF Downloads 187
8552 A Contrastive Analysis on Hausa and Yoruba Adjectival Phrases
Authors: Abubakar Maikudi
Abstract:
Contrastive analysis is the method of analyzing the structure of any two languages with a view to determining the possible differential aspects of their systems, irrespective of their genetic affinity or level of development. Contrastive analysis of two languages becomes useful when it adequately describes the sound structure and grammatical structure of both, with comparative statements emphasizing the compatible items in the two systems. This research work uses comparative analysis theory to analyze adjectives and adjectival phrases in the Hausa and Yorùbá languages. Hausa belongs to the Chadic family of the Afro-Asiatic phylum, while Yorùbá belongs to the Benue-Congo family of the Niger-Congo phylum. The findings clearly demonstrate that there are significant similarities in the adjectival phrase constructions of the two languages, i.e., nominal (Head) and post-nominal (Post-Head) use of the adjective, the predicative function of the adjective, use of the reduplicative adjective, use of the comparative and superlative adjective, etc. However, there are dissimilarities in the adjectival phrases of the two languages in gender/number agreement and the pre-nominal (Pre-Head) use of adjectives.
Keywords: genetic affinity, contrastive analysis, phylum, pre-head, post-head
Procedia PDF Downloads 235
8551 Effects of Artificial Intelligence and Machine Learning on Social Media for Health Organizations
Authors: Ricky Leung
Abstract:
Artificial intelligence (AI) and machine learning (ML) have revolutionized the way health organizations approach social media. The sheer volume of data generated through social media can be overwhelming, but AI and ML can help organizations manage this information effectively to improve the health and well-being of individuals and communities. One way AI can be used to enhance social media in health organizations is through sentiment analysis: analyzing the emotions expressed in social media posts to better understand public opinion and respond accordingly. This can help organizations gauge the impact of their campaigns, track the spread of misinformation, and improve communication with the public. While social media is a useful tool, researchers and practitioners fear that it will be used to spread misinformation, which can have serious consequences for public health. Health organizations must work to ensure that AI systems are transparent, trustworthy, and unbiased so that they can help minimize the spread of misinformation. In conclusion, AI and ML have the potential to greatly enhance the use of social media in health organizations. These technologies can help organizations effectively manage large amounts of data and understand stakeholders' sentiments. However, it is important to consider the potential consequences carefully and ensure that these systems are designed to minimize the spread of misinformation.
Keywords: AI, ML, social media, health organizations
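The sentiment-analysis step described above can be illustrated with a minimal lexicon-based scorer. The word lists and example posts are invented, and a production system would use a trained model rather than hand-picked lexicons.

```python
# Toy sentiment lexicons (invented for illustration).
POSITIVE = {"effective", "safe", "helpful", "recovered", "protected"}
NEGATIVE = {"dangerous", "hoax", "harmful", "scared", "misinformation"}

def sentiment(post):
    """Label a post by counting positive vs. negative lexicon hits."""
    words = [w.strip(".,!?").lower() for w in post.split()]
    score = sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)
    return "positive" if score > 0 else "negative" if score < 0 else "neutral"

labels = [sentiment(p) for p in [
    "The vaccine rollout was effective and safe.",
    "This is a dangerous hoax, I'm scared.",
    "Clinic hours are listed on the website.",
]]
```

Aggregating such labels over time gives the campaign-impact and misinformation-tracking signals the abstract describes.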
Procedia PDF Downloads 94
8550 Knowledge Management as Tool for Environmental Management System Implementation in Higher Education Institutions
Authors: Natalia Marulanda Grisales
Abstract:
The most significant changes in the characteristics of consumers have contributed to the development and adoption of methodologies and tools that enable organizations to be more competitive in the marketplace. One of these methodologies is the integration of Knowledge Management (KM) phases and Environmental Management Systems (EMS). This integration allows companies to manage and share the knowledge required for EMS adoption, from the place where it is generated to the place where it is going to be exploited. The aim of this paper is to identify the relationship between KM phases as a tool for the adoption of EMS in Higher Education Institutions (HEI). The methodology has a descriptive scope and a qualitative approach; it is based on a case study and a review of the literature on KM and EMS. We surveyed 266 students, professors, and staff at Minuto de Dios University (Colombia). Data derived from the study indicate that if an HEI wants to achieve adequate knowledge acquisition and knowledge transfer, it must have clear goals for implementing an EMS. HEI should also create empowerment and training spaces for students, professors, and staff; in the case study, the HEI must generate alternatives that enhance spaces of knowledge appropriation. It was found that 85% of respondents had not received any training from the HEI about EMS, and 88% of respondents believe that the actions taken by the university are not sufficient for knowledge transfer in order to develop an EMS.
Keywords: environmental management systems, higher education institutions, knowledge management, training
Procedia PDF Downloads 375
8549 Numerical and Experimental Study of Heat Transfer Enhancement with Metal Foams and Ultrasounds
Authors: L. Slimani, A. Bousri, A. Hamadouche, H. Ben Hamed
Abstract:
The aim of this experimental and numerical study is to analyze the effects of the acoustic streaming generated by 40 kHz ultrasonic waves on heat transfer in forced convection, with and without a 40 PPI aluminum metal foam. Preliminary dynamic and thermal studies were done with COMSOL Multiphysics to assess the degree of heat transfer enhancement obtained by inserting a 40 PPI metal foam (10 × 2 × 3 cm) on a heat sink, after having determined its permeability and Forchheimer coefficient experimentally. The results obtained numerically are in accordance with those obtained experimentally, with an enhancement factor of 205% for a velocity of 0.4 m/s compared to an empty channel. The influence of 40 kHz ultrasound on heat transfer was also tested with and without the metal foam. Results show a remarkable increase in the Nusselt number in an empty channel, with an enhancement factor of 37.5%, while no influence of ultrasound on heat transfer was observed in the presence of the metal foam.
Keywords: acoustic streaming, enhancing heat transfer, laminar flow, metal foam, ultrasound
Procedia PDF Downloads 142
8548 External Validation of Established Pre-Operative Scoring Systems in Predicting Response to Microvascular Decompression for Trigeminal Neuralgia
Authors: Kantha Siddhanth Gujjari, Shaani Singhal, Robert Andrew Danks, Adrian Praeger
Abstract:
Background: Trigeminal neuralgia (TN) is a heterogeneous pain syndrome characterised by short paroxysms of lancinating facial pain in the distribution of the trigeminal nerve, often triggered by usually innocuous stimuli. TN has a low prevalence of less than 0.1%, of which 80% to 90% is caused by compression of the trigeminal nerve by an adjacent artery or vein. The root entry zone of the trigeminal nerve is most sensitive to neurovascular conflict (NVC), which causes dysmyelination. Whilst microvascular decompression (MVD) is an effective treatment for TN with NVC, not all patients achieve long-term pain relief. Pre-operative scoring systems by Panczykowski and Hardaway have been proposed but have not been externally validated. These pre-operative scoring systems are composite scores calculated according to the subtype of TN, the presence and degree of neurovascular conflict, and the response to medical treatments. There is discordance between neurosurgeons and radiologists in the assessment of NVC identified on pre-operative magnetic resonance imaging (MRI). To the best of our knowledge, the prognostic impact for MVD of this difference of interpretation has not previously been investigated in the form of a composite scoring system such as those suggested by Panczykowski and Hardaway. Aims: This study aims to identify prognostic factors and externally validate the proposed scoring systems by Panczykowski and Hardaway for TN. A secondary aim is to investigate the prognostic difference between a neurosurgeon's interpretation of NVC on MRI and a radiologist's. Methods: This retrospective cohort study included 95 patients who underwent de novo MVD in a single neurosurgical unit in Melbourne. Data were recorded from patients' hospital records and the neurosurgeon's correspondence from perioperative clinic reviews.
Patient demographics, type of TN, distribution of TN, response to carbamazepine, and the neurosurgeon's and radiologist's interpretations of NVC on MRI were clearly described prospectively and preoperatively in the correspondence. The scoring systems published by Panczykowski et al. and Hardaway et al. were used to determine composite scores, which were compared with the recurrence of TN recorded during follow-up over 1 year. Categorical data were analysed using Pearson chi-square testing; independent numerical and nominal data were analysed with logistic regression. Results: Logistic regression showed that a Panczykowski composite score of greater than 3 points was associated with a higher likelihood of a pain-free outcome 1 year post-MVD, with an OR of 1.81 (95% CI 1.41-2.61, p=0.032); the composite score using the neurosurgeon's impression of NVC had an OR of 2.96 (95% CI 2.28-3.31, p=0.048). A Hardaway composite score of greater than 2 points was associated with a higher likelihood of a pain-free outcome 1 year post-MVD, with an OR of 3.41 (95% CI 2.58-4.37, p=0.028); the composite score using the neurosurgeon's impression of NVC had an OR of 3.96 (95% CI 3.01-4.65, p=0.042). Conclusion: The composite scores developed by Panczykowski and Hardaway were validated for the prediction of response to MVD in TN. A composite score based on the neurosurgeon's interpretation of NVC on MRI had a greater correlation with pain-free outcomes 1 year post-MVD than one based on the radiologist's.
Keywords: de novo microvascular decompression, neurovascular conflict, prognosis, trigeminal neuralgia
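Odds ratios with 95% confidence intervals of the kind reported above can be computed mechanically from a 2x2 outcome table with the standard Woolf (log-OR) method. The cell counts below are invented for illustration and do not correspond to the study's actual data.

```python
import math

def odds_ratio(a, b, c, d):
    """Odds ratio and Woolf 95% CI from a 2x2 table:
                    pain-free   recurrence
    score > cut         a           b
    score <= cut        c           d
    """
    or_ = (a * d) / (b * c)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)  # SE of log(OR)
    lo = math.exp(math.log(or_) - 1.96 * se)
    hi = math.exp(math.log(or_) + 1.96 * se)
    return or_, (lo, hi)

# Hypothetical counts for a cohort dichotomized at a composite-score cutoff.
or_, ci = odds_ratio(40, 10, 25, 20)
```

An OR above 1 with a CI excluding 1 indicates the high-score group is more likely to be pain-free, matching how the composite-score thresholds are interpreted in the abstract.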
Procedia PDF Downloads 78