Search results for: network driver
928 Immune Complex Components Act as Agents in Relapsing Fever Borrelia Mediated Rosette Formation
Authors: Mukunda Upreti, Jill Storry, Rafael Björk, Emilie Louvet, Johan Normark, Sven Bergström
Abstract:
Borrelia duttonii and most other relapsing fever species are Gram-negative bacteria which cause a blood-borne infection characterized by the binding of the bacteria to erythrocytes. The bacteria associate with two or more erythrocytes, forming clusters of cells called rosettes. Rosetting is a major virulence factor, and the mechanism is believed to facilitate persistence of bacteria in the circulatory system and the avoidance of host immune cells through masking or steric hindrance effects. However, the molecular mechanisms of rosette formation are still poorly understood. This study aims at determining the molecules involved in the rosette formation phenomenon. Serum fractionated by different affinity purification methods was investigated as a rosetting agent, and IgG plus at least one other serum component were needed for rosettes to form. An IgG titration curve demonstrated that IgG alone is not enough to restore rosette formation to the level obtained with whole serum. IgG hydrolysis by IdeS (Immunoglobulin G-degrading enzyme of Streptococcus pyogenes) and deglycosylation using N-Glycanase proved that the whole IgG molecule, regardless of saccharide moieties, is critical for Borrelia-induced rosetting. Complement components C3 and C4 were also important serum molecules necessary to maintain optimum rosetting rates. Deactivation of the complement network and depletion of C3 and C4 from serum significantly reduced the rosette formation rate. The dependency on IgG and complement components also implied involvement of complement receptor 1 (CR1). Rosette formation tests with Knops null RBCs and sCR1 confirmed that CR1 is also part of Borrelia-induced rosette formation.
Keywords: complement components C3 and C4, complement receptor 1, Immunoglobulin G, Knops null, rosetting
Procedia PDF Downloads 324
927 Aggregation of Electric Vehicles for Emergency Frequency Regulation of Two-Area Interconnected Grid
Authors: S. Agheb, G. Ledwich, G.Walker, Z.Tong
Abstract:
Frequency control has become more of a concern for the reliable operation of interconnected power systems due to the integration of low-inertia, volatile renewable energy sources into the grid. Also, in case of a sudden fault, the system has less time to recover before widespread blackouts. Electric vehicles (EVs) have the potential to cooperate in Emergency Frequency Regulation (EFR) through nonlinear control of the power system in case of large disturbances. There is not enough time to communicate with each individual EV in emergency cases; thus, an aggregate model is necessary for a quick response that prevents excessive frequency deviation and blackouts. In this work, an aggregate of EVs is modelled as a large virtual battery in each area, treating sources of uncertainty such as the number of connected EVs and their initial State of Charge (SOC) as stochastic variables. A control law was proposed and applied to the aggregate model using a Lyapunov energy function to maximize the rate of reduction of total kinetic energy in a two-area network after the occurrence of a fault. The control methods are primarily based on the charging/discharging control of available EVs as shunt capacity in the distribution system. Three different cases were studied considering the locational aspect of the model, with the virtual EV either in the center of the two areas or in the corners. The simulation results showed that EVs could help the generator lose its kinetic energy in a short time after a contingency. Earlier estimation of possible contributions of EVs can help the supervisory control level to transmit a prompt control signal to the subsystems such as the aggregator agents and the grid. Thus, characterizing the percentage of EV contribution to EFR is the future goal of this study.
Keywords: emergency frequency regulation, electric vehicle, EV, aggregation, Lyapunov energy function
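The sketch below is a minimal toy illustration of the idea described above: an aggregated EV fleet injecting or absorbing power against the frequency deviation in a single-area swing-equation model. It is not the authors' controller; all parameter values, the one-area simplification, and the proportional control law are assumptions.

```python
import numpy as np

# Toy single-area swing-equation model with an aggregated-EV controller.
# All numbers below are illustrative assumptions, not values from the paper.
H, D, f0 = 5.0, 1.0, 50.0      # inertia constant (s), damping (pu), nominal frequency (Hz)
dP_loss = -0.1                 # sudden generation loss (pu) at t = 0
ev_capacity = 0.08             # aggregate EV power limit (pu)
k_ev = 20.0                    # controller gain (pushes against the frequency deviation)

dt, T = 0.01, 20.0
dw = 0.0                       # per-unit frequency deviation
trace = []
for t in np.arange(0.0, T, dt):
    p_ev = np.clip(-k_ev * dw, -ev_capacity, ev_capacity)   # discharge when frequency drops
    # swing equation: 2H * d(dw)/dt = (generation - load imbalance) + p_ev - D*dw
    dw += dt * (dP_loss + p_ev - D * dw) / (2.0 * H)
    trace.append((t, f0 * (1.0 + dw)))

print("frequency nadir: %.3f Hz" % min(f for _, f in trace))
```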
Procedia PDF Downloads 100
926 Nonlinear Estimation Model for Rail Track Deterioration
Authors: M. Karimpour, L. Hitihamillage, N. Elkhoury, S. Moridpour, R. Hesami
Abstract:
Rail transport authorities around the world have been facing a significant challenge when predicting rail infrastructure maintenance work for a long period of time. Generally, maintenance monitoring and prediction are conducted manually. Under economic restrictions, rail transport authorities are in pursuit of improved modern methods that can provide precise prediction of rail maintenance time and location. The expectation from such a method is to develop models that minimize the human error strongly related to manual prediction. Such models will help them understand how track degradation occurs over time under changing conditions (e.g., rail load, rail type, rail profile). They need a well-structured technique to identify the precise time that rail tracks fail in order to minimize the maintenance cost/time and keep vehicles safe. The rail track characteristics that have been collected over the years will be used in developing rail track degradation prediction models. Since these data have been collected in large volumes, both electronically and manually, they may contain errors, and sometimes these errors make it impossible to use them in prediction model development. This is one of the major drawbacks in rail track degradation prediction. An accurate model can play a key role in the estimation of the long-term behavior of rail tracks. Accurate models increase track safety and decrease the cost of maintenance in the long term. In this research, a short review of rail track degradation prediction models is presented before estimating rail track degradation for the curve sections of the Melbourne tram track system using an Adaptive Network-based Fuzzy Inference System (ANFIS) model.
Keywords: ANFIS, MGT, prediction modeling, rail track degradation
Procedia PDF Downloads 337
925 A Challenge to Acquire Serious Victims’ Locations during Acute Period of Giant Disasters
Authors: Keiko Shimazu, Yasuhiro Maida, Tetsuya Sugata, Daisuke Tamakoshi, Kenji Makabe, Haruki Suzuki
Abstract:
In this paper, we report how to acquire serious victims' locations in the acute stage of large-scale disasters, using an Emergency Information Network System we designed. The background of our concept is based on the Great East Japan Earthquake that occurred on March 11, 2011. Through many experiences of national crises caused by earthquakes and tsunamis, we have established advanced communication systems and advanced disaster medical response systems. However, Japan was devastated by huge tsunamis that swept a vast area of Tohoku, causing a complete breakdown of all infrastructure, including telecommunications. Therefore, we noticed that we need interdisciplinary collaboration among experts in disaster medicine, regional administrative sociology, satellite communication technology and systems engineering. Communication of emergency information was limited, causing a serious delay in the initial rescue and medical operations. For the emergency rescue and medical operations, the most important thing is to identify the number of casualties, their locations and status, and to dispatch doctors and rescue workers from multiple organizations. In the case of the Tohoku earthquake, no dispatching mechanism and/or decision support system existed to allocate the appropriate number of doctors and locate disaster victims. Even though the doctors and rescue workers from multiple government organizations have their own dedicated communication systems, the systems are not interoperable.
Keywords: crisis management, disaster mitigation, messing, MGRS, military grid reference system, satellite communication system
Procedia PDF Downloads 236
924 Implementation of an Image Processing System Using Artificial Intelligence for the Diagnosis of Malaria Disease
Authors: Mohammed Bnebaghdad, Feriel Betouche, Malika Semmani
Abstract:
Image processing has become more sophisticated over time due to technological advances, especially artificial intelligence (AI) technology. Currently, AI image processing is used in many areas, including surveillance, industry, science, and medicine. AI in medical image processing can help doctors diagnose diseases faster, with minimal mistakes, and with less effort. Among these diseases is malaria, which remains a major public health challenge in many parts of the world. It affects millions of people every year, particularly in tropical and subtropical regions. Early detection of malaria is essential to prevent serious complications and reduce the burden of the disease. In this paper, we propose and implement a scheme based on AI image processing to enhance malaria disease diagnosis through automated analysis of blood smear images. The scheme is based on the convolutional neural network (CNN) method. We have developed a model that classifies infected and uninfected single red cells using images available on Kaggle, as well as real blood smear images obtained from the Central Laboratory of Medical Biology EHS Laadi Flici (formerly El Kettar) in Algeria. The real images were segmented into individual cells using the watershed algorithm in order to match the images from the Kaggle dataset. The model was trained and tested, achieving an accuracy of 99% on the Kaggle data and 97% on new real images. This validates that the model performs well with new real images, although with slightly lower accuracy. Additionally, the model has been embedded in a Raspberry Pi 4, and a graphical user interface (GUI) was developed to visualize the malaria diagnostic results and facilitate user interaction.
Keywords: medical image processing, malaria parasite, classification, CNN, artificial intelligence
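A minimal sketch of the kind of CNN binary classifier the abstract describes (infected vs. uninfected single red cells); the architecture, image size, directory paths and training settings below are assumptions, not the authors' configuration.

```python
import tensorflow as tf
from tensorflow.keras import layers, models

# Illustrative CNN for infected / uninfected red-cell patches (e.g. 64x64 RGB).
# Layer sizes and hyperparameters are assumptions, not the model from the paper.
def build_model(input_shape=(64, 64, 3)):
    model = models.Sequential([
        layers.Conv2D(16, 3, activation="relu", input_shape=input_shape),
        layers.MaxPooling2D(),
        layers.Conv2D(32, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Conv2D(64, 3, activation="relu"),
        layers.Flatten(),
        layers.Dense(64, activation="relu"),
        layers.Dense(1, activation="sigmoid"),   # infected vs. uninfected
    ])
    model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
    return model

# Typical usage with a directory of labelled cell images (paths are hypothetical):
# train_ds = tf.keras.utils.image_dataset_from_directory(
#     "cells/train", image_size=(64, 64), label_mode="binary")
# build_model().fit(train_ds, epochs=10)
```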
Procedia PDF Downloads 23
923 Future Sustainable Mobility for Colorado
Authors: Paolo Grazioli
Abstract:
In this paper, we present the main results achieved during an eight-week international design project on Colorado Future Sustainable Mobility carried out at Metropolitan State University of Denver. The project was born with the intention to seize the opportunity created by the Colorado government's plan to promote e-bike mobility by creating a large network of dedicated tracks. The project was supported by local entrepreneurs who offered financial and professional support. The main goal of the project was to engage design students with the skills to design a user-centered, original vehicle that would satisfy the unarticulated practical and emotional needs of "Gen Z" users by creating a fun, useful, and reliable life companion that would help users carry out their everyday tasks in a practical and enjoyable way. The project was carried out with the intention of proving the importance of combining creative methods with practical design methodologies towards the creation of an innovative yet immediately manufacturable product for a more sustainable future. The final results demonstrate the students' capability to create innovative and yet manufacturable products and, especially, their ability to create a new design paradigm for future sustainable mobility products. The design solutions explored in the project include collaborative learning and human-interaction design for future mobility. The findings of the research led students to the fabrication of two working prototypes that will be tested in Colorado and developed for manufacturing in the year 2024. The project showed that collaborative design and project-based teaching improve the quality of the outcome and can lead to the creation of real-life, innovative products directly from the classroom to the market.
Keywords: sustainable transportation design, interface design, collaborative design, user-centered design research, design prototyping
Procedia PDF Downloads 98
922 Prediction of Remaining Life of Industrial Cutting Tools with Deep Learning-Assisted Image Processing Techniques
Authors: Gizem Eser Erdek
Abstract:
This study investigates predicting the remaining life of industrial cutting tools used in the production process with deep learning methods. When the life of cutting tools decreases, they damage the raw material they are processing. This study aims to predict the remaining life of the cutting tool based on the damage it causes to the raw material. For this, hole photos were collected from the hole-drilling machine for 8 months. Photos were labeled in 5 classes according to hole quality. In this way, the problem was transformed into a classification problem. Using the prepared data set, a model was created with convolutional neural networks, which is a deep learning method. In addition, VGGNet and ResNet architectures, which have been successful in the literature, have been tested on the data set. A hybrid model using convolutional neural networks and support vector machines is also used for comparison. When all models are compared, it has been determined that the model in which convolutional neural networks are used gives successful results with a 74% accuracy rate. In the preliminary studies, the data set was arranged to include only the best and worst classes, and the study gave ~93% accuracy when the binary classification model was applied. The results of this study showed that the remaining life of the cutting tools could be predicted by deep learning methods based on the damage to the raw material. Experiments have proven that deep learning methods can be used as an alternative for cutting tool life estimation.
Keywords: classification, convolutional neural network, deep learning, remaining life of industrial cutting tools, ResNet, support vector machine, VGGNet
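The hybrid CNN + SVM comparison mentioned in the abstract can be sketched as a pretrained CNN used as a feature extractor feeding an SVM over the five hole-quality classes. The backbone, input size, SVM settings and variable names below are assumptions, not the study's exact pipeline.

```python
import tensorflow as tf
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Illustrative hybrid: a pretrained CNN extracts image features, an SVM classifies
# the 5 hole-quality classes. All settings here are assumptions.
extractor = tf.keras.applications.ResNet50(include_top=False, weights="imagenet",
                                            input_shape=(224, 224, 3), pooling="avg")

def features(images):
    """images: float array of shape (n, 224, 224, 3)."""
    x = tf.keras.applications.resnet50.preprocess_input(images)
    return extractor.predict(x, verbose=0)

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=10.0))

# Hypothetical usage with labelled hole photos (X_img) and quality labels 0..4 (y):
# clf.fit(features(X_img), y)
# print(clf.score(features(X_val), y_val))
```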
Procedia PDF Downloads 79
921 A Review on the Hydrologic and Hydraulic Performances in Low Impact Development-Best Management Practices Treatment Train
Authors: Fatin Khalida Abdul Khadir, Husna Takaijudin
Abstract:
The bioretention system is one of the alternatives to conventional stormwater management, a low impact development (LID) strategy for best management practices (BMPs). Incorporating both filtration and infiltration, initial research on bioretention systems has shown that this practice extensively decreases runoff volumes and peak flows. The LID-BMP treatment train is one of the latest LID-BMPs for stormwater treatment in urbanized watersheds. The treatment train is developed to overcome the drawbacks that arise from conventional LID-BMPs and aims to enhance the performance of the existing practices. In addition, it is also used to improve both water quality and water quantity controls, as well as to maintain the natural hydrology of an area despite ongoing massive development. The objective of this paper is to review the effectiveness of conventional LID-BMPs in terms of hydrologic and hydraulic performance through column studies in different configurations. The previous studies on applications of the LID-BMP treatment train that were developed to overcome the drawbacks of conventional LID-BMPs are reviewed and used as guidelines for implementing this system in Universiti Teknologi Petronas (UTP) and elsewhere. Analyses of hydrologic and hydraulic performance using the artificial neural network (ANN) model are also reviewed for use in this study. In this study, the role of the LID-BMP treatment train is tested by arranging bioretention cells in series in order to control floods occurring now and in the future, when the construction of the new buildings in UTP is completed. A summary of the research findings on the performance of the system is provided, including the proposed design modifications.
Keywords: bioretention system, LID-BMP treatment train, hydrological and hydraulic performance, ANN analysis
Procedia PDF Downloads 119
920 Crossing Multi-Source Climate Data to Estimate the Effects of Climate Change on Evapotranspiration Data: Application to the French Central Region
Authors: Bensaid A., Mostephaoui T., Nedjai R.
Abstract:
Climatic factors are the subject of considerable research, both methodologically and instrumentally. Under the effect of climate change, estimating climate parameters with precision remains one of the main objectives of the scientific community, with a view to assessing climate change and its repercussions on humans and the environment. However, many regions of the world suffer from a severe lack of reliable instruments that can make up for this deficit. Alternatively, the use of empirical methods becomes the only way to assess certain parameters that can act as climate indicators. Several scientific methods are used for the evaluation of evapotranspiration, either directly at climatic stations or by empirical methods. All these methods provide point estimates and in no case capture the spatial variation of this parameter. We therefore propose in this paper the use of three sources of information (the Meteo France weather station network, world databases, and MODIS satellite images) to evaluate spatial evapotranspiration (ETP) using the Turc method. This first step will reflect the degree of relevance of the indirect (satellite) methods and their generalization to sites without stations. Representing the spatial variation of this parameter using a geographical information system (GIS) accounts for the heterogeneity of its behaviour. This heterogeneity is due to the influence of site morphological factors and makes it possible to appreciate the role of certain topographic and hydrological parameters. A phase of predicting the medium- and long-term evolution of evapotranspiration under climate change, by applying the Intergovernmental Panel on Climate Change (IPCC) scenarios, gives a realistic overview of the contribution of aquatic systems at the scale of the region.
Keywords: climate change, ETP, MODIS, IPCC scenarios
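The Turc method referenced above reduces to a simple empirical formula. The sketch below gives one widely cited monthly form; the units, coefficients and low-humidity correction follow the form common in French-language hydrology texts and should be treated as assumptions, since conventions vary across sources.

```python
def turc_etp_monthly(t_mean_c, ig_cal_cm2_day, rh_percent=60.0, february=False):
    """Monthly potential evapotranspiration (mm/month) by one common form of the
    Turc formula. Coefficients and units are assumptions taken from widely used
    hydrology references, not from the paper itself.

    t_mean_c        -- mean monthly air temperature (deg C)
    ig_cal_cm2_day  -- mean global solar radiation (cal/cm^2/day)
    rh_percent      -- mean relative humidity (%); correction applied below 50 %
    """
    k = 0.37 if february else 0.40
    etp = k * (t_mean_c / (t_mean_c + 15.0)) * (ig_cal_cm2_day + 50.0)
    if rh_percent < 50.0:
        etp *= 1.0 + (50.0 - rh_percent) / 70.0
    return etp

# Example for a warm, sunny month:
print(round(turc_etp_monthly(25.0, 500.0), 1), "mm/month")
```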
Procedia PDF Downloads 102
919 Analyzing Environmental Emotive Triggers in Terrorist Propaganda
Authors: Travis Morris
Abstract:
The purpose of this study is to measure the intersection of environmental security entities in terrorist propaganda. To the best of the author's knowledge, this is the first study of its kind to examine this intersection within terrorist propaganda. Rosoka natural language processing software and frame analysis are used to advance our understanding of how environmental frames function as emotive triggers. Violent jihadi demagogues use frames to suggest violent and non-violent solutions to their grievances. Emotive triggers are framed in a way to leverage individual and collective attitudes in psychological warfare. A comparative research design is used because of the differences and similarities that exist between two variants of violent jihadi propaganda that target western audiences. Analysis is based on salience and network text analysis, which generates violent jihadi semantic networks. Findings indicate that environmental frames are used as emotive triggers across both data sets, but also as tactical and information data points. A significant finding is that certain core environmental emotive triggers like "water," "soil," and "trees" are significantly salient at the aggregate level across both data sets. All environmental entities can be classified into two categories, symbolic and literal. Importantly, this research illustrates how demagogues use environmental emotive triggers in cyberspace from a subcultural perspective to mobilize target audiences to their ideology and praxis. Understanding the anatomy of propaganda construction is necessary in order to generate effective counter narratives in information operations. This research advances an additional method to inform practitioners and policy makers of how environmental security and propaganda intersect.
Keywords: propaganda analysis, emotive triggers, environmental security, frames
Procedia PDF Downloads 140
918 Internal Evaluation of Architecture University Department in Architecture Engineering Bachelor's Level: A Case from Iran
Authors: Faranak Omidian
Abstract:
This study has been carried out to examine the status of the architecture department at the bachelor's level of architecture engineering in the Islamic Azad University of Dezful in the 2012-13 academic year. The present research is a descriptive cross-sectional study, descriptive and analytical in terms of measurement, carried out in 7 steps across 7 areas with 32 criteria and 169 indicators. The sample includes 201 students, 14 faculty members, 72 graduates and 39 employers. Simple random sampling, complete enumeration, and network (snowball) sampling were used for students, faculty members and graduates, respectively. The whole sample responded to the questions. After data collection, the findings were ranked on a Likert scale from desirable to undesirable, with scores ranging from 1 to 3. The results showed that the department, with a score of 1.88 in regard to objectives, organizational status, management and organization, a score of 2 in relation to students, and a score of 1.8 in regard to faculty members, was in a relatively desirable status. Regarding training courses and curriculum, it gained a score of 2.33, which indicates a desirable status in this regard. It gained scores of 1.75, 2, and 1.8 with respect to educational and research facilities and equipment, teaching and learning strategies, and graduates respectively, all of which show the relatively desirable status of the department. The results showed that the department of architecture, with an average score of 2.14 in all evaluated areas, was in a desirable situation. Therefore, although the department generally has a desirable status, it needs to put in more effort to tackle its weaknesses and shortages and correct its defects in order to promote educational quality and raise it to the desirable level.
Keywords: internal evaluation, architecture department, Islamic Azad University, Dezful
Procedia PDF Downloads 444
917 Impacts of Hydrologic and Topographic Changes on Water Regime Evolution of Poyang Lake, China
Authors: Feng Huang, Carlos G. Ochoa, Haitao Zhao
Abstract:
Poyang Lake, the largest freshwater lake in China, is located at the middle-lower reaches of the Yangtze River basin. It has great value in socioeconomic development and is internationally recognized as an important lacustrine and wetland ecosystem with abundant biodiversity. Impacted by ongoing climate change and anthropogenic activities, especially the regulation of the Three Gorges Reservoir since 2003, Poyang Lake has experienced significant water regime evolution, resulting in challenges for the management of water resources and the environment. Quantifying the contribution of hydrologic and topographic changes to water regime alteration is necessary for policymakers to design effective adaptation strategies. Long-term hydrologic data were collected, and back-propagation neural networks were constructed to simulate the lake water level. The impacts of hydrologic and topographic changes were differentiated through scenario analysis that considered pre-impact and post-impact hydrologic and topographic scenarios. The lake water regime was characterized by hydrologic indicators that describe monthly water level fluctuations, hydrologic features during flood and drought seasons, and the frequency and rate of hydrologic variations. The results revealed different contributions of hydrologic and topographic changes to different features of the lake water regime. Noticeable changes were that the water level declined dramatically during the period of reservoir impoundment and that drought was enhanced during the dry season. The hydrologic and topographic changes exerted synergistic or antagonistic effects on different lake water regime features. The findings provide a scientific reference for lacustrine and wetland ecological protection associated with water regime alterations.
Keywords: back-propagation neural network, scenario analysis, water regime, Poyang Lake
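A minimal sketch of the two building blocks mentioned above: a back-propagation network trained to map hydrologic inputs to lake water level, then reused for scenario analysis by swapping the input series. The feature set, network size and the synthetic training data are assumptions, not the study's configuration.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

# Illustrative back-propagation network for daily lake water level.
# Input features (catchment inflow, Yangtze discharge, month) and all values are assumptions.
rng = np.random.default_rng(0)
n = 2000
X = np.column_stack([
    rng.gamma(2.0, 500.0, n),        # catchment inflow  (m3/s)
    rng.gamma(2.0, 8000.0, n),       # Yangtze discharge (m3/s)
    rng.integers(1, 13, n),          # month
])
y = 8.0 + 2e-4 * X[:, 0] + 5e-5 * X[:, 1] + rng.normal(0, 0.2, n)   # synthetic level (m)

model = make_pipeline(StandardScaler(),
                      MLPRegressor(hidden_layer_sizes=(16, 8), max_iter=2000,
                                   random_state=0))
model.fit(X, y)

# Scenario analysis: re-run the trained model with pre- vs. post-impact inputs
# (here, a hypothetical 10 % reduction in Yangtze discharge after regulation).
X_post = X.copy()
X_post[:, 1] *= 0.9
print("mean level change: %.2f m" % (model.predict(X_post) - model.predict(X)).mean())
```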
Procedia PDF Downloads 141
916 Seismological Studies in Some Areas in Egypt
Authors: Gamal Seliem, Hassan Seliem
Abstract:
The Aswan area is one of the important areas in Egypt, and because it encompasses the vital engineering structure of the High Dam, it has been selected for the present study. The study of the crustal deformation and gravity associated with earthquake activity in the High Dam area is of great importance for the safety of the High Dam and its economic resources. This paper deals with using micro-gravity, precise leveling and GPS data for geophysical and geodetic studies. For the detailed gravity survey of the area, gravity stations were established to study the subsurface structures. To study the recent vertical movements, a 10 km long profile joining the High Dam and the old Aswan Dam was established along the road connecting the two dams. This profile consists of 35 GPS/leveling stations extending along the two sides of the road and on the High Dam body. Precise leveling was carried out together with GPS and repeated micro-gravity surveys at the same time. A GPS network consisting of nine stations was established to study recent crustal movements. Many campaigns from December 2001 to December 2014 were performed to collect the gravity, leveling and GPS data. The main aim of this work is to study the structural features and the behavior of the area, as depicted by repeated micro-gravity, precise leveling and GPS measurements. The present work focuses on the analysis of the gravity, leveling and GPS data. The gravity results of the present study reveal minor subsurface geologic structures, with features and anomalies trending W-E and N-S. The geodetic results indicated low rates of vertical and horizontal displacement and low strain values, which may be related to the stability of the area.
Keywords: repeated micro-gravity changes, precise leveling, GPS data, Aswan High Dam
Procedia PDF Downloads 449
915 Ultra-Reliable Low Latency V2X Communication for Express Way Using Multiuser Scheduling Algorithm
Authors: Vaishali D. Khairnar
Abstract:
The main aim is to provide low-latency and highly reliable communication facilities for vehicles in the automobile industry; vehicle-to-everything (V2X) communication basically intends to increase expressway road safety and effectiveness. The Ultra-Reliable Low-Latency Communications (URLLC) algorithm and cellular networks are applied in combination with Mobile Broadband (MBB). This is particularly used in expressway safety-based driving applications. Expressway vehicle drivers (humans) will communicate in V2X systems using sixth-generation (6G) communication systems, which support very high-speed mobility. As a result, we need to determine how to ensure reliable and consistent wireless communication links and improve their quality to increase channel gain, which is becoming a challenge that needs to be addressed. To overcome this challenge, we proposed a unique multi-user scheduling algorithm for ultra-massive multiple-input multiple-output (MIMO) systems using 6G. In wideband wireless network access under high- and medium-traffic conditions, offering quality-of-service (QoS) to distinct service groups with synchronized contemporaneous traffic on a highway like the Mumbai-Pune expressway becomes a critical problem. Opportunistic MAC (OMAC) is an approach to communicating across a wireless link that varies in space and time, and it might overcome the above-mentioned challenge. Therefore, a multi-user scheduling algorithm is proposed for MIMO systems using a cross-layered MAC protocol to achieve URLLC and high reliability in V2X communication.
Keywords: ultra-reliable low latency communications, vehicle-to-everything communication, multiple-input multiple-output systems, multi-user scheduling algorithm
Procedia PDF Downloads 90
914 Use of Geosynthetics as Reinforcement Elements in Unpaved Tertiary Roads
Authors: Vivian A. Galindo, Maria C. Galvis, Jaime R. Obando, Alvaro Guarin
Abstract:
In Colombia, most of the roads of the national tertiary road network are unpaved roads with a granular rolling surface. These are very important ways of guaranteeing the mobility of people, products, and inputs from the agricultural sector from the most remote areas to urban centers; however, little attention has been paid to the search for alternatives to avoid the deterioration that occurs shortly after commissioning. In recent years, geosynthetics have been used satisfactorily to reinforce unpaved roads on soft soils, with geotextiles and geogrids being the most widely used. The interaction of the geogrid and the aggregate minimizes the lateral movement of the aggregate particles and increases the load capacity of the material, which leads to a better distribution of the vertical stresses, consequently reducing the vertical deformations in the subgrade. Taking the above into account, this research studied the mechanical behavior of the granular material used in unpaved roads, with and without geogrids, through laboratory tests with the loaded wheel tester (LWT). For comparison purposes, the reinforcement and traffic conditions to which this type of material is subjected in practice were simulated. In total, four types of geogrids were tested with the granular material; this means that five test sets were evaluated: the four reinforced materials and the non-reinforced control sample. The results for the number of load cycles and rutting depth supported by each test body showed the influence of the reinforcement properties on the mechanical behavior of the assembly and the significant increase in the number of load cycles of the reinforced specimens relative to those without reinforcement.
Keywords: geosynthetics, load wheel tester LWT, tertiary roads, unpaved road, vertical deformation
Procedia PDF Downloads 250
913 Efficiency of a Molecularly Imprinted Polymer for Selective Removal of Chlorpyrifos from Water Samples
Authors: Oya A. Urucu, Aslı B. Çiğil, Hatice Birtane, Ece K. Yetimoğlu, Memet Vezir Kahraman
Abstract:
Chlorpyrifos is an organophosphorus pesticide which can be found in environmental water samples. The efficiency and reuse of a molecularly imprinted polymer (chlorpyrifos-MIP) were investigated for the selective removal of chlorpyrifos residues. The MIP was prepared with UV-curing thiol-ene polymerization technology using multifunctional thiol and ene monomers. The thiol-ene curing reaction is a radical-induced process; however, unlike other photoinitiated polymerizations, it is a free-radical reaction that proceeds by a step-growth mechanism involving two main steps: a free-radical addition followed by a chain transfer reaction. It assures very rapid formation of a uniform crosslinked network with low shrinkage, reduced oxygen inhibition during curing, and excellent adhesion. In this study, thiol-ene based UV-curable polymeric materials were prepared by mixing pentaerythritol tetrakis(3-mercaptopropionate), glyoxal bis diallyl acetal, polyethylene glycol diacrylate (PEGDA) and a photoinitiator. Chlorpyrifos was added at a definite ratio to the prepared formulation. Chemical structure and thermal properties were characterized by FTIR and thermogravimetric analysis (TGA), respectively. The pesticide analysis was performed by gas chromatography-mass spectrometry (GC-MS). The influences of analytical parameters such as pH, sample volume, and analyte concentration were studied for the quantitative recovery of the analyte. The proposed MIP method was applied to the determination of chlorpyrifos in river and tap water samples. The use of the MIP provided a selective and easy solution for removing chlorpyrifos from the water.
Keywords: molecularly imprinted polymers, selective removal, thiol-ene, UV-curable polymer
Procedia PDF Downloads 302
912 The Event of Extreme Precipitation Occurred in the Metropolitan Mesoregion of the Capital of Para
Authors: Natasha Correa Vitória Bandeira, Lais Cordeiro Soares, Claudineia Brazil, Luciane Teresa Salvi
Abstract:
The intense rain event that occurred between February 16 and 18, 2018, in the city of Barcarena in Pará, located in the North region of Brazil, demonstrates the importance of analyzing this type of event. The metropolitan mesoregion of Belem was severely hit by rains well above the averages normally expected for that time of year; the phenomenon affected, in addition to the capital, the municipalities of Barcarena, Murucupi and Muruçambá. This resulted in a great flood in the rivers of the region, whose basins received intense precipitation, causing concern for the local population, because companies that accumulate ore tailings are located in this region; in this specific case, the dam of one of these companies leached ore into the water bodies of the Murucupi River Basin. This article aims to characterize this phenomenon through a spatial analysis of the rainfall distribution, using data from atmospheric soundings, satellite images, radar images and data from the GPCP (Global Precipitation Climatology Project), in addition to rainfall stations located in the study region. The results of the work demonstrated a dissociation between the data measured at the meteorological stations and the other forms of analysis of this extreme event. Monitoring carried out solely on the basis of data from pluviometric stations is not sufficient for monitoring and/or diagnosing extreme weather events, and investment by the competent bodies is important to install a larger network of pluviometric stations sufficient to meet the demand in a given region.
Keywords: extreme precipitation, great flood, GPCP, ore dam
Procedia PDF Downloads 108
911 Cybersecurity Challenges in the Era of Open Banking
Authors: Krish Batra
Abstract:
The advent of open banking has revolutionized the financial services industry by fostering innovation, enhancing customer experience, and promoting competition. However, this paradigm shift towards more open and interconnected banking ecosystems has introduced complex cybersecurity challenges. This research paper delves into the multifaceted cybersecurity landscape of open banking, highlighting the vulnerabilities and threats inherent in sharing financial data across a network of banks and third-party providers. Through a detailed analysis of recent data breaches, phishing attacks, and other cyber incidents, the paper assesses the current state of cybersecurity within the open banking framework. It examines the effectiveness of existing security measures, such as encryption, API security protocols, and authentication mechanisms, in protecting sensitive financial information. Furthermore, the paper explores the regulatory response to these challenges, including the implementation of standards such as PSD2 in Europe and similar initiatives globally. By identifying gaps in current cybersecurity practices, the research aims to propose a set of robust, forward-looking strategies that can enhance the security and resilience of open banking systems. This includes recommendations for banks, third-party providers, regulators, and consumers on how to mitigate risks and ensure a secure open banking environment. The ultimate goal is to provide stakeholders with a comprehensive understanding of the cybersecurity implications of open banking and to outline actionable steps for safeguarding the financial ecosystem in an increasingly interconnected world.
Keywords: open banking, financial services industry, cybersecurity challenges, data breaches, phishing attacks, encryption, API security protocols, authentication mechanisms, regulatory response, PSD2, cybersecurity practices
Procedia PDF Downloads 62
910 Signs, Signals and Syndromes: Algorithmic Surveillance and Global Health Security in the 21st Century
Authors: Stephen L. Roberts
Abstract:
This article offers a critical analysis of the rise of syndromic surveillance systems for the advanced detection of pandemic threats within contemporary global health security frameworks. The article traces the iterative evolution and ascendancy of three such novel syndromic surveillance systems for the strengthening of health security initiatives over the past two decades: 1) The Program for Monitoring Emerging Diseases (ProMED-mail); 2) The Global Public Health Intelligence Network (GPHIN); and 3) HealthMap. This article demonstrates how each newly introduced syndromic surveillance system has become increasingly oriented towards the integration of digital algorithms into core surveillance capacities to continually harness and forecast upon infinitely generating sets of digital, open-source data, potentially indicative of forthcoming pandemic threats. This article argues that the increased centrality of the algorithm within these next-generation syndromic surveillance systems produces a new and distinct form of infectious disease surveillance for the governing of emergent pathogenic contingencies. Conceptually, the article also shows how the rise of this algorithmic mode of infectious disease surveillance produces divergences in the governmental rationalities of global health security, leading to the rise of an algorithmic governmentality within contemporary contexts of Big Data and these surveillance systems. Empirically, this article demonstrates how this new form of algorithmic infectious disease surveillance has been rapidly integrated into diplomatic, legal, and political frameworks to strengthen the practice of global health security, producing subtle, yet distinct shifts in the outbreak notification and reporting transparency of states, increasingly scrutinized by the algorithmic gaze of syndromic surveillance.
Keywords: algorithms, global health, pandemic, surveillance
Procedia PDF Downloads 187
909 Supervisory Controller with Three-State Energy Saving Mode for Induction Motor in Fluid Transportation
Authors: O. S. Ebrahim, K. O. Shawky, M. O. S. Ebrahim, P. K. Jain
Abstract:
An induction motor (IM) driving a pump is the main consumer of electricity in a typical fluid transportation system (FTS). It was illustrated that changing the connection of the stator windings from delta to star at no load could achieve noticeable active and reactive energy savings. This paper proposes a supervisory hysteresis liquid-level control with a three-state energy saving mode (ESM) for an IM in an FTS that includes a storage tank. The IM pump drive comprises a modified star/delta switch and a hydromantic coupler. A three-state ESM is defined alongside normal running and named by analogy to computer ESMs as follows: a sleeping mode, in which the motor runs at no load with the delta stator connection; a hibernate mode, in which the motor runs at no load with the star connection; and motor shutdown as the third energy-saving mode. A logic flow-chart is synthesized to select the motor state at no load for the best energetic cost reduction, considering the motor thermal capacity used. An artificial neural network (ANN) state estimator, based on the recurrent architecture, is constructed and trained in order to provide fault-tolerant capability for the supervisory controller. Wald's sequential test is used for sensor fault detection. Theoretical analysis, preliminary experimental testing, and computer simulations are performed to show the effectiveness of the proposed control in terms of reliability, power quality and energy/coenergy cost reduction, with the suggestion of power factor correction.
Keywords: ANN, ESM, IM, star/delta switch, supervisory control, FT, reliability, power quality
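The three-state selection logic described above can be sketched as a simple decision function. The thresholds, inputs and names below are illustrative assumptions, not the paper's flow-chart.

```python
from enum import Enum

class EsmState(Enum):
    RUN = "normal running (delta connection, loaded)"
    SLEEP = "sleeping (delta connection, no load)"
    HIBERNATE = "hibernate (star connection, no load)"
    SHUTDOWN = "motor shutdown"

def select_state(load_pu, expected_idle_s, thermal_margin):
    """Pick the motor state at (or near) no load.

    The thresholds are illustrative assumptions: short idle periods keep the delta
    connection, longer ones switch to star, and very long idle periods with enough
    thermal margin for restarting shut the motor down.
    """
    if load_pu > 0.05:
        return EsmState.RUN
    if expected_idle_s < 60:
        return EsmState.SLEEP
    if expected_idle_s < 600 or thermal_margin < 0.2:
        return EsmState.HIBERNATE
    return EsmState.SHUTDOWN

print(select_state(load_pu=0.0, expected_idle_s=300, thermal_margin=0.5))
```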
Procedia PDF Downloads 197
908 Numerical and Experimental Investigation of Fracture Mechanism in Paintings on Wood
Authors: Mohammad Jamalabadi, Noemi Zabari, Lukasz Bratasz
Abstract:
Panel paintings, complex multi-layer structures consisting of a wood support and a paint layer composed of a preparatory layer of gesso, paints, and varnishes, are among the cultural objects most vulnerable to relative humidity fluctuations and are frequently found in museum collections. The current environmental specifications in museums have been derived using the criterion of crack initiation in an undamaged, usually new gesso layer laid on wood. In reality, historical paintings exhibit complex crack patterns called craquelures. The present paper analyses the structural response of a paint layer with a virtual network of rectangular cracks under environmental loadings using a three-dimensional model of a panel painting. Two modes of loading are considered: one induced by the one-dimensional moisture response of the wood support, termed the tangential loading, and the other, isotropic, induced by drying shrinkage of the gesso layer. The superposition of the two modes is also analysed. The modelling showed that minimum distances between cracks parallel to the wood grain depended on the gesso stiffness under the tangential loading. In spite of a non-zero Poisson's ratio, gesso cracks perpendicular to the wood grain could not be generated by the moisture response of the wood support. The isotropic drying shrinkage of gesso produced cracks that were almost evenly spaced in both directions. The modelling results were cross-checked with crack patterns obtained on a mock-up of a panel painting exposed to a number of extreme environmental variations in an environmental chamber.
Keywords: fracture saturation, surface cracking, paintings on wood, wood panels
Procedia PDF Downloads 268
907 The Integration Challenges of Women Refugees in Sweden from Socio-Cultural Perspective
Authors: Khadijah Saeed Khan
Abstract:
One of the major current issues in Swedish society is integrating newcomer refugees well into the host society. Cultural integration is one of the under-debated topics in the literature, and this study intends to address this gap from the Swedish perspective. The purpose of this study is to explore the role and types of cultural landscapes of refugee women in Sweden and how these landscapes help or hinder the settlement process. The cultural landscapes are referred to as a set of multiple cultural activities or practices which refugees perform in a specific context and circumstances (i.e., being in a new country) to seek, share or use relevant information for their settlement. Information plays a vital role in various aspects of newcomers' lives in a new country. This article intends to highlight the importance of multiple cultural landscapes as a source of information (regarding employment, language learning, finding accommodation, immigration matters, health concerns, school and education, family matters, and other everyday matters) for refugees to settle down in Sweden. Some relevant theories, such as information landscapes and socio-cultural theories, are considered in this study. A qualitative research design is employed, including semi-structured in-depth interviews and participatory observation with 20 participants. The initial findings show that the refugee women encounter many information-related and integration-related challenges in Sweden and have built a network of cultural landscapes in which they practice various co-ethnic cultural and religious activities at different times of the year. These landscapes help them to build a sense of belonging with people from their own or a similar land and assist them in seeking and sharing relevant information in everyday life in Sweden.
Keywords: cultural integration, cultural landscapes, information, women refugees
Procedia PDF Downloads 142
906 Optimum Dewatering Network Design Using Firefly Optimization Algorithm
Authors: S. M. Javad Davoodi, Mojtaba Shourian
Abstract:
A groundwater table close to the ground surface causes major problems in construction and mining operations. One of the methods to control groundwater in such cases is using pumping wells. These pumping wells remove excess water from the project site and lower the water table to a desirable value. Although the efficiency of this method is acceptable, it is expensive to apply. This means even a small improvement in the design of the pumping wells can lead to substantial cost savings. In order to minimize the total cost of the pumping-well method, a simulation-optimization approach is applied. The proposed model integrates MODFLOW as the simulation model with Firefly as the optimization algorithm. In fact, MODFLOW computes the drawdown due to pumping in an aquifer, and the Firefly algorithm determines the optimum values of the design parameters, namely the number, pumping rates and layout of the wells. The developed Firefly-MODFLOW model is applied to minimize the cost of the dewatering project for the ancient mosque of Kerman city in Iran. Repeated runs of the Firefly-MODFLOW model indicate that drilling two wells with a total pumping rate of 5503 m3/day is the solution of the minimization problem. Results show that implementing the proposed solution leads to at least 1.5 m drawdown in the aquifer beneath the mosque region. Also, the subsidence due to groundwater depletion is less than 80 mm. Sensitivity analyses indicate that the desired groundwater depletion has an enormous impact on the total cost of the project. In addition, in a hypothetical aquifer, decreasing the hydraulic conductivity contributes to a decrease in the total water extraction required for dewatering.
Keywords: groundwater dewatering, pumping wells, simulation-optimization, MODFLOW, firefly algorithm
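The coupling described above can be sketched as a firefly-algorithm loop minimizing a pumping cost subject to a drawdown target, with a placeholder function standing in for the MODFLOW run. The cost model, bounds, penalty, and placeholder "physics" are all assumptions for illustration only.

```python
import numpy as np

rng = np.random.default_rng(1)

def drawdown(rates):
    """Placeholder for the MODFLOW simulation: returns drawdown (m) at the target
    point for the given pumping rates (m3/day). Purely illustrative physics."""
    return 4e-4 * np.sum(rates)

def cost(rates, target_drawdown=1.5):
    pumping_cost = np.sum(rates)                    # proxy for energy / well cost
    shortfall = max(0.0, target_drawdown - drawdown(rates))
    return pumping_cost + 1e6 * shortfall           # heavy penalty if target missed

# Firefly algorithm over the pumping rates of two candidate wells (bounds assumed).
n_fireflies, n_dim, n_iter = 20, 2, 200
beta0, gamma, alpha = 1.0, 1.0, 0.05
lo, hi = 0.0, 5000.0
pos = rng.uniform(lo, hi, (n_fireflies, n_dim))
for _ in range(n_iter):
    f = np.array([cost(p) for p in pos])
    for i in range(n_fireflies):
        for j in range(n_fireflies):
            if f[j] < f[i]:                          # move firefly i toward brighter j
                r2 = np.sum((pos[i] - pos[j]) ** 2) / (hi - lo) ** 2
                beta = beta0 * np.exp(-gamma * r2)
                pos[i] += beta * (pos[j] - pos[i]) + alpha * (hi - lo) * rng.normal(0, 0.1, n_dim)
    pos = np.clip(pos, lo, hi)

best = pos[np.argmin([cost(p) for p in pos])]
print("pumping rates (m3/day):", best.round(0), "total:", best.sum().round(0))
```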
Procedia PDF Downloads 294
905 Fabrication and Characterization Analysis of La-Sr-Co-Fe-O Perovskite Hollow Fiber Catalyst for Oxygen Removal in Landfill Gas
Authors: Seong Woon Lee, Soo Min Lim, Sung Sik Jeong, Jung Hoon Park
Abstract:
The atmospheric concentration of greenhouse gases (GHGs) is increasing continuously as a result of the combustion of fossil fuels and industrial development. In response to this trend, much research has been conducted on GHG reduction. Landfill gas (LFG) is one of the largest sources of GHG emissions, contains methane (CH₄) as a major constituent, and can be considered a renewable energy source as well. In order to use LFG by connecting it to the city pipe network, a process for removing impurities is required. In particular, oxygen must be removed because it can cause corrosion of pipes and engines. In this study, methane oxidation was used to eliminate oxygen from LFG, and a perovskite-type ceramic catalyst of La-Sr-Co-Fe-O composition was selected. Hollow fiber catalysts (HFCs) have attracted attention as a new alternative because they have high specific surface area and mechanical strength compared to other types of catalysts. The HFC was prepared by a phase-inversion/sintering technique using commercial La-Sr-Co-Fe-O powder. In order to measure the catalyst's activity, simulated LFG was used as the feed gas, and the complete oxidation reaction of methane was confirmed. The pore structure of the HFC was confirmed by SEM imaging, and the single-phase perovskite structure was analyzed by XRD. In addition, TPR analysis was performed to verify the oxygen adsorption mechanism of the HFC. Acknowledgement—The project is supported by the ‘Global Top Environment R&D Program’ in the ‘R&D Center for reduction of Non-CO₂ Greenhouse gases’ (Development and demonstration of oxygen removal technology of landfill gas) funded by Korea Ministry of Environment (ME).
Keywords: complete oxidation, greenhouse gas, hollow fiber catalyst, landfill gas, oxygen removal, perovskite catalyst
Procedia PDF Downloads 117
904 Performance Evaluation of Routing Protocol in Cognitive Radio with Multi Technological Environment
Authors: M. Yosra, A. Mohamed, T. Sami
Abstract:
Over the past few years, mobile communication technologies have seen significant evolution. This fact has promoted the implementation of many systems in a multi-technological setting. From one system to another, the Quality of Service (QoS) provided to mobile consumers improves. The growing number of standards extends the services available to each consumer; moreover, most of the available radio frequencies, such as those for 3G, WiFi, WiMAX, and LTE, have already been allocated. A study by the Federal Communications Commission (FCC) found that certain frequency bands are only partially occupied at particular locations and times. So, the idea of Cognitive Radio (CR) is to share the spectrum between a primary user (PU) and a secondary user (SU). The main objective of this spectrum management is to achieve a maximum rate of exploitation of the radio spectrum. In general, CR can greatly improve the quality of service (QoS) and the reliability of the link. The problem lies in proposing a technique to improve the reliability of the wireless link by using CR with certain routing protocols. However, users reported that the links were unreliable and incompatible with the required QoS. In our case, we choose the QoS parameter "bandwidth" to perform a supervised classification. In this paper, we propose a comparative study of several routing protocols, taking into account the variation of the available spectral bandwidth across different technologies such as 3G, WiFi, WiMAX, and LTE. From the simulation results, we observe that LTE has significantly higher available bandwidth compared with the other technologies. The performance of the OLSR protocol is better than that of the other routing protocols (DSR, AODV and DSDV) in LTE technology because of proper packet reception, fewer packet drops and higher throughput. Numerous simulations of routing protocols have been made using simulators such as NS3.
Keywords: cognitive radio, multi technology, network simulator (NS3), routing protocol
Procedia PDF Downloads 63
903 Innovative Predictive Modeling and Characterization of Composite Material Properties Using Machine Learning and Genetic Algorithms
Authors: Hamdi Beji, Toufik Kanit, Tanguy Messager
Abstract:
This study aims to construct a predictive model proficient in foreseeing the linear elastic and thermal characteristics of composite materials, drawing on a multitude of influencing parameters. These parameters encompass the shape of inclusions (circular, elliptical, square, triangular), their spatial coordinates within the matrix, orientation, volume fraction (ranging from 0.05 to 0.4), and variations in contrast (spanning from 10 to 200). A variety of machine learning techniques are deployed, including decision trees, random forests, support vector machines, k-nearest neighbors, and an artificial neural network (ANN), to facilitate this predictive model. Moreover, this research goes beyond the predictive aspect by delving into an inverse analysis using genetic algorithms. The intent is to unveil the intrinsic characteristics of composite materials by evaluating their thermomechanical responses. The foundation of this research lies in the establishment of a comprehensive database that accounts for the array of input parameters mentioned earlier. This database, enriched with this diversity of input variables, serves as a bedrock for the creation of machine learning and genetic algorithm-based models. These models are meticulously trained not only to predict but also to elucidate the mechanical and thermal behavior of composite materials. The coupling of machine learning and genetic algorithms has proven highly effective, yielding predictions with remarkable accuracy, with scores ranging between 0.97 and 0.99. This achievement marks a significant breakthrough, demonstrating the potential of this innovative approach in the field of materials engineering.
Keywords: machine learning, composite materials, genetic algorithms, mechanical and thermal properties
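A compact sketch of the two stages described above: a machine-learning forward model trained on microstructure descriptors, and a small genetic algorithm that inverts it to recover descriptors matching a target response. Only two descriptors and a synthetic training set are used here as a simplification; none of the values come from the paper.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)

# Forward model: effective property from (volume fraction, contrast). A rough
# mixing rule generates stand-in training data -- a simplification of the paper's
# richer descriptor set.
vf = rng.uniform(0.05, 0.4, 3000)
contrast = rng.uniform(10, 200, 3000)
k_eff = 1.0 * (1 - vf) + 0.6 * contrast * vf + rng.normal(0, 0.5, 3000)
X, y = np.column_stack([vf, contrast]), k_eff
forward = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, y)

# Inverse analysis with a small genetic algorithm: find (vf, contrast) whose
# predicted response matches a target measurement.
target = 30.0
pop = np.column_stack([rng.uniform(0.05, 0.4, 60), rng.uniform(10, 200, 60)])
for _ in range(100):
    err = np.abs(forward.predict(pop) - target)
    parents = pop[np.argsort(err)[:20]]                      # selection
    kids = parents[rng.integers(0, 20, (40, 2)), [0, 1]]     # per-gene crossover
    kids += rng.normal(0, [0.01, 2.0], kids.shape)           # mutation
    kids[:, 0] = np.clip(kids[:, 0], 0.05, 0.4)
    kids[:, 1] = np.clip(kids[:, 1], 10, 200)
    pop = np.vstack([parents, kids])

best = pop[np.argmin(np.abs(forward.predict(pop) - target))]
print("recovered vf=%.3f, contrast=%.1f" % (best[0], best[1]))
```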
Procedia PDF Downloads 54
902 Adaptive Motion Compensated Spatial Temporal Filter of Colonoscopy Video
Authors: Nidhal Azawi
Abstract:
The colonoscopy procedure is widely used around the world to detect abnormalities. Early diagnosis can help to heal many patients. Because of the unavoidable artifacts that exist in colon images, doctors cannot observe the colon surface precisely. The purpose of this work is to improve the visual quality of colonoscopy videos to provide better information for physicians by removing some artifacts. This work complements a series of works consisting of three previously published papers. In this paper, optic flow is used for motion compensation, and consecutive images are then aligned/registered to integrate information and create a new image that reveals more information than the original one. Colon images have been classified into informative and noninformative images by using a deep neural network. Then, two different strategies were used to treat informative and noninformative images. Informative images were treated by using Lucas Kanade (LK) with an adaptive temporal mean/median filter, whereas noninformative images were treated by using Lucas Kanade with a derivative of Gaussian (LKDOG) and adaptive temporal median filtering. A comparison showed that this work achieved better results than the state-of-the-art strategies on the same degraded colon image data set, which consists of 1000 images. The newly proposed algorithm reduced the alignment error by about a factor of 0.3 with a 100% successful image alignment ratio. In conclusion, this algorithm achieved better results than the state-of-the-art approaches in enhancing the informative images, as shown in the results section; it also succeeded in converting non-informative images, which have few or no details because of blurriness, being out of focus, or specular highlights dominating a significant part of the image, into informative images.
Keywords: optic flow, colonoscopy, artifacts, spatial temporal filter
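The core idea of aligning neighbouring frames with optic flow and then filtering temporally can be sketched as below. OpenCV's dense Farnebäck flow is used here as a stand-in for the paper's Lucas-Kanade estimator, and the window handling and parameters are assumptions.

```python
import cv2
import numpy as np

def motion_compensated_median(frames, ref_idx=None):
    """Align a short window of grayscale frames to a reference frame with dense
    optical flow and take a temporal median. Farneback flow is used as a stand-in
    for the paper's Lucas-Kanade estimator; all parameters are assumptions."""
    if ref_idx is None:
        ref_idx = len(frames) // 2
    ref = frames[ref_idx]
    h, w = ref.shape
    grid_x, grid_y = np.meshgrid(np.arange(w, dtype=np.float32),
                                 np.arange(h, dtype=np.float32))
    aligned = []
    for frame in frames:
        # flow maps reference-frame pixels to their positions in `frame`
        flow = cv2.calcOpticalFlowFarneback(ref, frame, None,
                                            0.5, 3, 15, 3, 5, 1.2, 0)
        warped = cv2.remap(frame, grid_x + flow[..., 0], grid_y + flow[..., 1],
                           interpolation=cv2.INTER_LINEAR)
        aligned.append(warped)
    return np.median(np.stack(aligned), axis=0).astype(np.uint8)

# Hypothetical usage:
# frames = [cv2.imread(p, cv2.IMREAD_GRAYSCALE) for p in frame_paths]
# filtered = motion_compensated_median(frames)
```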
Procedia PDF Downloads 114
901 Author Profiling: Prediction of Learners’ Gender on a MOOC Platform Based on Learners’ Comments
Authors: Tahani Aljohani, Jialin Yu, Alexandra. I. Cristea
Abstract:
The more an educational system knows about a learner, the more personalised interaction it can provide, which leads to better learning. However, asking a learner directly is potentially disruptive and often ignored by learners. Especially in the booming realm of Massive Open Online Course (MOOC) platforms, only a very low percentage of users disclose demographic information about themselves. Thus, in this paper, we aim to predict learners' demographic characteristics by proposing an approach using linguistically motivated deep learning architectures for learner profiling, particularly targeting gender prediction on the FutureLearn MOOC platform. Additionally, we tackle here the difficult problem of predicting the gender of learners based on their comments only, which are often available across MOOCs. The most common current approaches to text classification use the Long Short-Term Memory (LSTM) model, considering sentences as sequences. However, human language also has structure. In this research, rather than considering sentences as plain sequences, we hypothesise that higher semantic- and syntactic-level sentence processing based on linguistics will render a richer representation. We thus evaluate the traditional LSTM against other bleeding-edge models that take syntactic structure into account, such as the tree-structured LSTM, the Stack-augmented Parser-Interpreter Neural Network (SPINN) and the Structure-Aware Tag Augmented model (SATA). Additionally, we explore using different word-level encoding functions. We have implemented these methods on our MOOC dataset and compared the results against a public sentiment analysis dataset, which is further used to cross-examine the models' results.
Keywords: deep learning, data mining, gender prediction, MOOCs
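A minimal sketch of the plain-LSTM baseline mentioned above, framed as binary gender prediction from raw comment text. Vocabulary size, sequence length and layer sizes are assumptions; the tree-structured models (Tree-LSTM, SPINN, SATA) additionally require parse trees and are not shown.

```python
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers

# Minimal plain-LSTM baseline for binary gender prediction from learner comments.
# All sizes below are assumptions, not the paper's configuration.
vocab_size, seq_len = 20_000, 200
vectorizer = layers.TextVectorization(max_tokens=vocab_size,
                                      output_sequence_length=seq_len)

model = tf.keras.Sequential([
    layers.Input(shape=(1,), dtype=tf.string),
    vectorizer,
    layers.Embedding(vocab_size, 128, mask_zero=True),
    layers.LSTM(64),
    layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

# Hypothetical usage with raw comment strings and 0/1 gender labels:
# vectorizer.adapt(train_comments)
# model.fit(np.asarray(train_comments).reshape(-1, 1),
#           np.asarray(train_labels), validation_split=0.1, epochs=5)
```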
Procedia PDF Downloads 149
900 Smart Kids Coacher: Model for Childhood Obesity in Thailand
Authors: Pornwipa Daoduong, Jairak Loysongkroa, Napaphan Viriyautsahakul, Wachira Pengjuntr
Abstract:
Obesity is one of the most serious health problems in many countries, including Thailand, where the prevalence of childhood obesity increased from 8.8% in 2014 to 9.5% in 2015 and 12.9% in 2016. The Ministry of Public Health's objective is to reduce the prevalence of childhood obesity to 10% or lower in 2017 by implementing measures related to nutrition, physical activity (PA) and environment in 6,405 targeted schools where the proportion of school children with obesity is higher than 10%. "Smart Kids Coacher (SKC)" is a new, innovative intervention created by the Department of Health, consisting of 252 regional and provincial officers. The SKC aims to train super trainers in food and nutrition, PA, and emotional control through three learning activities: 1) Food for Fun covers the nutrition flag, nutrition labels, food portions and nutrition surveillance; 2) Fun for Fit includes intermediate- and advanced-level workouts within 60 minutes, such as the kangaroo dance and chair stretching; and 3) Emotional control aims to reduce the probability of access to unhealthy food, ensure meals are taken at appropriate times, and recruit peers and family members to increase awareness among target groups. Apart from providing SKC lessons for 3,828 officers at the district level, a number of students (2,176) were selected as role models through the "Smart Kids Leader" (SKL) program. Consequently, the SKC lowered the proportion of childhood obesity from 17% in 2012 to 12.9% in 2016. Further, SKC coverage should be expanded to other settings. Policymakers should be aware of the importance of reducing the prevalence of childhood obesity and its related risks. Networking and collaboration between stakeholders are essential, as well as improvement of the holistic intervention and knowledge "NuPETHS" for kids in the future.
Keywords: childhood obesity, model, obesity, smart kids coacher
Procedia PDF Downloads 245
899 Roof and Road Network Detection through Object Oriented SVM Approach Using Low Density LiDAR and Optical Imagery in Misamis Oriental, Philippines
Authors: Jigg L. Pelayo, Ricardo G. Villar, Einstine M. Opiso
Abstract:
Advances in aerial laser scanning in the Philippines have opened up entire fields of research in remote sensing and machine vision that aspire to provide accurate and timely information for the government and the public. Rapid mapping of polygonal roads and roof boundaries is one of its utilizations, offering applications in disaster risk reduction, mitigation and development. The study uses low-density LiDAR data and high-resolution aerial imagery through an object-oriented approach, applying a machine learning algorithm to minimize the constraints of feature extraction. Since separating one class from another involves distinct regions of a multi-dimensional feature space, non-trivial computation for fitting distributions was implemented to formulate the learned ideal hyperplane. Customized hybrid features were generated and then used to improve the classifier findings. Supplemental algorithms for filtering and reshaping object features were developed in the rule set to enhance the final product. Several advantages in terms of simplicity, applicability, and process transferability are noticeable in the methodology. The algorithm was tested at different random locations in Misamis Oriental province in the Philippines, demonstrating robust performance with overall accuracy greater than 89% and potential for semi-automation. The extracted results will become a vital requirement for decision makers, urban planners and even the commercial sector in various assessment processes.
Keywords: feature extraction, machine learning, OBIA, remote sensing
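The per-object classification stage described above can be sketched as an SVM over hand-picked object features. The feature choices (LiDAR-derived height, brightness, shape, vegetation index) and all values below are assumptions; the segmentation and rule-set steps of the study are not reproduced, and the labels here are synthetic, so the reported accuracy is only meant to show the pipeline shape.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import cross_val_score

# Each segmented object is described by assumed features and labelled roof / road / other.
rng = np.random.default_rng(0)
n = 600
X = np.column_stack([
    rng.uniform(0, 15, n),                        # mean nDSM height above ground (m), from LiDAR
    rng.uniform(0, 255, (n, 3)).mean(axis=1),     # mean image brightness
    rng.uniform(0, 1, n),                         # rectangularity / shape index
    rng.uniform(0, 1, n),                         # NDVI-like vegetation index
])
y = rng.integers(0, 3, n)                         # 0 = roof, 1 = road, 2 = other (synthetic)

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=10.0, gamma="scale"))
# Random synthetic labels give near-chance accuracy; real object features would not.
print("CV accuracy on synthetic objects:",
      cross_val_score(clf, X, y, cv=5).mean().round(2))
```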
Procedia PDF Downloads 363