Search results for: CR mesh network
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 5167

937 Understanding Tactical Urbanisms in Derelict Areas

Authors: Berna Yaylalı, Isin Can Traunmüller

Abstract:

This paper explores emergent bottom-up practices in architecture and urban design from the comparative perspective of two cities. As a temporary, affordable intervention that can transform neglected spaces into vibrant public spaces, tactical urbanism, together with creative place-making strategies, presents alternative ways of creating sustainable developments in derelict and underused areas. This study examines the potential for social and physical development through a reading of two creative spatial practices: a pop-up garden transformed from an unused derelict space in Favoriten, Vienna, and an urban community garden in Kuzguncuk, Istanbul. The two cities were chosen for their multicultural populations and diversity: Istanbul was selected as a design city by the UNESCO Creative Cities Network in 2017, and Vienna has been declared an open and livable city by its local government. This research uses media archives and reports, interviews with locals and local governments, site observations, and visual recordings to provide a critical reading of creative public spaces from the view of local users in these neighborhoods. Reflecting on these emergent practices, this study discusses the production process of tactical urbanism through the practices of locals, and the decision-making process, with cases from Istanbul and Vienna. The comparison between their place-making strategies in tactical urbanism will give important insights for future developments.

Keywords: creative city, tactical urbanism, neglected area, public space

Procedia PDF Downloads 103
936 Economic Analysis of Chinese Social Media Platform Sina Weibo and E-Commerce Platform Taobao

Authors: Xingyue Yang

Abstract:

This study focused on Chinese social media stars and the relationship between their level of fame on the social media platform Sina Weibo and their sales revenue on the e-commerce platform Taobao/Tmall.com. This was viewed from the perspective of Adler's superstardom theory and Rosen and MacDonald's theories examining the economics of celebrities who build their audience using digital, rather than traditional, platforms. Theory and empirical research support the assertion that stars of traditional media achieve popular success due to a combination of talent and market concentration, as well as a range of other factors. These factors are also generally considered relevant to the popularisation of social media stars. However, success across digital media platforms also involves other variables, for example, upload strategies and cross-platform promotions, which often have no direct counterpart in traditional media. These factors were the focus of our study, which investigated the relationship between popularity, promotional strategy and sales revenue for 15 social media stars who specialised in culinary topics on the Chinese social media platform Sina Weibo. In 2019, these food bloggers made a total of 2076 Sina Weibo posts, which were compiled alongside calculations of each food blogger's sales revenue on the e-commerce platforms Taobao/Tmall. Quantitative analysis of these data determined that certain upload strategies on Weibo, such as upload time, posting format and length of video, have an important impact on sales revenue on Taobao/Tmall.com.
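
The abstract does not detail its quantitative method; as an illustrative sketch, a one-variable ordinary least squares fit is the simplest way to relate an upload-strategy variable to sales revenue. The variable names and data below are invented placeholders, not figures from the study.

```python
# Minimal OLS sketch: one upload-strategy variable (hypothetical video
# length) against sales revenue. All numbers are illustrative.

def ols_fit(xs, ys):
    """Return (slope, intercept) of the least-squares line y = a*x + b."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var = sum((x - mean_x) ** 2 for x in xs)
    slope = cov / var
    intercept = mean_y - slope * mean_x
    return slope, intercept

video_length = [1, 2, 3, 4, 5]    # hypothetical minutes per post
revenue = [10, 14, 18, 22, 26]    # hypothetical revenue units
a, b = ols_fit(video_length, revenue)
```

A real analysis would of course control for several strategy variables at once; this only shows the mechanic of fitting one.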

Keywords: attention economics, digital media, network effect, social media stars

Procedia PDF Downloads 231
935 Bioinformatic Approaches in Population Genetics and Phylogenetic Studies

Authors: Masoud Sheidai

Abstract:

Biologists working in population genetics and phylogeny face research tasks such as assessing populations’ genetic variability and divergence, species relatedness, the evolution of genetic and morphological characters, and the identification of DNA SNPs with adaptive potential. To tackle these problems and reach concise conclusions, they must use proper and efficient statistical and bioinformatic methods, as well as suitable genetic and morphological characteristics. In recent years, bioinformatic and statistical methods based on various well-documented assumptions have become the proper analytical tools in the hands of researchers. Species delineation is usually carried out with clustering methods such as K-means clustering, based on distance measures appropriate to the studied features of the organisms. A well-defined species is assumed to be separated from other taxa by molecular barcodes. Species relationships are studied using molecular markers, analyzed by methods such as multidimensional scaling (MDS) and principal coordinate analysis (PCoA). Species population structuring and genetic divergence are usually investigated by PCoA and PCA methods and a network diagram; these are based on bootstrapping of the data. The association of different genes and DNA sequences with ecological and geographical variables is determined by the latent factor mixed model (LFMM) and redundancy analysis (RDA), which are based on Bayesian and distance methods. Molecular and morphological differentiating characters in the studied species may be identified by linear discriminant analysis (DA) and discriminant analysis of principal components (DAPC). We illustrate these methods and related conclusions with examples from different edible and medicinal plant species.
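
As a minimal illustration of the K-means species-delineation step mentioned above, here is Lloyd's algorithm on one-dimensional trait values; real studies would cluster multivariate genetic or morphological distances, and the data here are invented.

```python
# 1-D K-means (Lloyd's algorithm) sketch for separating two putative
# taxa by a scalar trait. Trait values and initial centers are invented.

def kmeans_1d(points, centers, iters=20):
    """Return (centers, labels) after alternating assign/update steps."""
    for _ in range(iters):
        # assignment step: each point joins its nearest center
        labels = [min(range(len(centers)), key=lambda k: abs(p - centers[k]))
                  for p in points]
        # update step: each center moves to the mean of its members
        new_centers = []
        for k in range(len(centers)):
            members = [p for p, l in zip(points, labels) if l == k]
            new_centers.append(sum(members) / len(members) if members else centers[k])
        centers = new_centers
    return centers, labels

traits = [1.0, 1.2, 0.9, 5.0, 5.3, 4.8]   # two well-separated groups
centers, labels = kmeans_1d(traits, [0.0, 10.0])
```

With well-separated groups the algorithm converges in a couple of iterations; in practice one would also choose K, e.g. by silhouette or gap statistics.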

Keywords: GWAS analysis, K-Means clustering, LFMM, multidimensional scaling, redundancy analysis

Procedia PDF Downloads 124
934 Infrastructure Sharing Synergies: Optimal Capacity Oversizing and Pricing

Authors: Robin Molinier

Abstract:

Industrial symbiosis (IS) deals with both substitution synergies (exchange of waste materials, fatal energy and utilities as resources for production) and infrastructure/service sharing synergies. The latter is based on intensifying the use of an asset and thus requires balancing capital cost increments against snowball effects (network externalities) for its implementation. Initial investors must specify ex-ante arrangements (cost sharing and a pricing schedule) to commit to investments in capacities and transactions. Our model investigates the decision of two actors trying to choose cooperatively a level of infrastructure capacity oversizing to set a plug-and-play offer to a potential entrant, whose capacity requirement is randomly distributed, while satisficing their own requirements. Capacity cost exhibits a sub-additive property, so there is room for profitable overcapacity setting in the first period. The entrant’s willingness-to-pay for access to the infrastructure depends on its standalone cost and the capacity gap that it must complete in case the available capacity is insufficient ex-post (the complement cost). Since initial capacity choices are driven by the ex-ante (expected) yield extractible from the entrant, we derive the expected complement cost function, which helps us define the investors’ objective function. We first show that this curve is decreasing and convex in the capacity increments and that it is shaped by the distribution function of the potential entrant’s requirements. We then derive the general form of solutions and solve the model for uniform and triangular distributions. Depending on requirement volumes and cost assumptions, different equilibria occur. We finally analyze the effect of a per-unit subsidy a public actor would apply to foster such sharing synergies.
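
For the uniform case the abstract mentions, the expected complement cost has a simple closed form: if the entrant's requirement X is Uniform(0, d_max) and q is the spare capacity, then E[max(X - q, 0)] = (d_max - q)^2 / (2 d_max), which is decreasing and convex in q, as the abstract states. A numeric sketch (variable names and the unit complement cost are illustrative, not the paper's notation):

```python
# Expected complement (capacity-gap) cost for X ~ Uniform(0, d_max):
# E[c * max(X - q, 0)] = c * (d_max - q)^2 / (2 * d_max) for 0 <= q <= d_max.

def expected_complement(q, d_max, unit_cost=1.0):
    """Expected cost of the capacity the entrant must complete ex-post."""
    q = min(max(q, 0.0), d_max)   # clamp the oversizing level to [0, d_max]
    return unit_cost * (d_max - q) ** 2 / (2.0 * d_max)

# The curve is decreasing and convex in the oversizing level q:
vals = [expected_complement(q, 10.0) for q in (0, 2, 4, 6, 8, 10)]
```

Successive differences of `vals` shrink in magnitude, which is the convexity property the investors' objective function inherits.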

Keywords: capacity, cooperation, industrial symbiosis, pricing

Procedia PDF Downloads 211
933 Immune Complex Components Act as Agents in Relapsing Fever Borrelia Mediated Rosette Formation

Authors: Mukunda Upreti, Jill Storry, Rafael Björk, Emilie Louvet, Johan Normark, Sven Bergström

Abstract:

Borrelia duttonii and most other relapsing fever species are Gram-negative bacteria which cause a blood-borne infection characterized by the binding of the bacterium to erythrocytes. The bacteria associate with two or more erythrocytes, clustering the cells into rosettes. Rosetting is a major virulence factor, and the mechanism is believed to facilitate persistence of the bacteria in the circulatory system and avoidance of host immune cells through masking or steric hindrance effects. However, the molecular mechanisms of rosette formation are still poorly understood. This study aims at determining the molecules involved in the rosette formation phenomenon. Serum fractionated by different affinity purification methods was investigated as a rosetting agent; IgG and at least one other serum component were needed for rosettes to form. An IgG titration curve demonstrated that IgG alone is not enough to restore rosette formation to the level whole serum gives. IgG hydrolysis by IdeS (immunoglobulin G-degrading enzyme of Streptococcus pyogenes) and deglycosylation using N-Glycanase proved that the whole IgG molecule, regardless of saccharide moieties, is critical for Borrelia-induced rosetting. Complement components C3 and C4 were also important serum molecules necessary to maintain optimum rosetting rates: deactivation of the complement network and serum depletion of C3 and C4 significantly reduced the rosette formation rate. The dependency on IgG and complement components also implied involvement of complement receptor 1 (CR1). Rosette formation tests with Knops null RBCs and sCR1 confirmed that CR1 is also part of Borrelia-induced rosette formation.

Keywords: complement components C3 and C4, complement receptor 1, Immunoglobulin G, Knops null, Rosetting

Procedia PDF Downloads 321
932 Aggregation of Electric Vehicles for Emergency Frequency Regulation of Two-Area Interconnected Grid

Authors: S. Agheb, G. Ledwich, G. Walker, Z. Tong

Abstract:

Frequency control has become more of a concern for the reliable operation of interconnected power systems due to the integration of low-inertia renewable energy sources into the grid and their volatility. Also, in case of a sudden fault, the system has less time to recover before widespread blackouts. Electric vehicles (EVs) have the potential to cooperate in Emergency Frequency Regulation (EFR) through nonlinear control of the power system in case of large disturbances. There is not enough time to communicate with each individual EV in an emergency, so an aggregate model is necessary for a quick response to limit frequency deviation and prevent blackouts. In this work, an aggregate of EVs is modelled as a big virtual battery in each area, with various sources of uncertainty, such as the number of connected EVs and their initial State of Charge (SOC), treated as stochastic variables. A control law was proposed and applied to the aggregate model using a Lyapunov energy function to maximize the rate of reduction of total kinetic energy in a two-area network after the occurrence of a fault. The control methods are primarily based on charging/discharging control of available EVs as shunt capacity in the distribution system. Three cases were studied to capture the locational aspect of the model, with the virtual EV either in the center of the two areas or in the corners. The simulation results showed that EVs could help the generators shed kinetic energy in a short time after a contingency. Earlier estimation of the possible contributions of EVs can help the supervisory control level transmit a prompt control signal to subsystems such as the aggregator agents and the grid. Characterizing the percentage contribution of EVs to EFR is therefore the goal of future work in this study.
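
The Lyapunov idea can be sketched in a toy one-area model: with energy function V = 0.5*M*w^2, a control that pushes against the sign of the frequency deviation makes dV/dt more negative, so kinetic energy decays faster with EV support than without. All parameters and the single-area simplification below are illustrative assumptions, not the paper's two-area model.

```python
# Toy swing-equation sketch: M*dw/dt = -d*w - u, with the aggregate EV
# power u = p_ev * sign(w) opposing the frequency deviation w.
# Returns the remaining "kinetic energy" V = 0.5*M*w^2 after the run.

def simulate(p_ev, m=10.0, d=0.5, w0=1.0, dt=0.01, steps=500):
    """Euler integration; all parameters are illustrative placeholders."""
    w = w0
    for _ in range(steps):
        u = p_ev if w > 0 else (-p_ev if w < 0 else 0.0)
        w += dt * (-d * w - u) / m
    return 0.5 * m * w * w

no_ev = simulate(p_ev=0.0)     # damping only
with_ev = simulate(p_ev=0.5)   # damping plus aggregated EV power
```

The EV term adds a constant-magnitude restoring force, so the deviation (and hence V) shrinks strictly faster than with damping alone.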

Keywords: emergency frequency regulation, electric vehicle, EV, aggregation, Lyapunov energy function

Procedia PDF Downloads 100
931 Nonlinear Estimation Model for Rail Track Deterioration

Authors: M. Karimpour, L. Hitihamillage, N. Elkhoury, S. Moridpour, R. Hesami

Abstract:

Rail transport authorities around the world have long faced a significant challenge in predicting rail infrastructure maintenance work. Generally, maintenance monitoring and prediction are conducted manually. With restrictions in the economy, rail transport authorities are in pursuit of improved modern methods that can provide precise prediction of rail maintenance time and location. The expectation from such a method is to develop models that minimize the human error strongly related to manual prediction. Such models will help them understand how track degradation occurs over time under changing conditions (e.g., rail load, rail type, rail profile). They need a well-structured technique to identify the precise time at which rail tracks fail in order to minimize the maintenance cost/time and keep vehicles safe. The rail track characteristics that have been collected over the years will be used in developing rail track degradation prediction models. Since these data have been collected in large volumes, both electronically and manually, errors are possible; sometimes these errors make the data unusable in prediction model development. This is one of the major drawbacks in rail track degradation prediction. An accurate model can play a key role in estimating the long-term behavior of rail tracks: accurate models increase track safety and decrease the cost of maintenance in the long term. In this research, a short review of rail track degradation prediction models is given before estimating rail track degradation for the curved sections of the Melbourne tram track system using an Adaptive Network-based Fuzzy Inference System (ANFIS) model.
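
At the core of ANFIS is Sugeno-type fuzzy inference; a minimal two-rule sketch of that inference step is shown below. The membership functions, rule consequents, and the load input are all invented for illustration; ANFIS would additionally learn these parameters from the inspection data.

```python
# Minimal first-order Sugeno fuzzy inference: two rules whose firing
# strengths come from triangular memberships over a load variable, and
# whose consequents are linear in the input. All numbers are invented.

def tri(x, a, b, c):
    """Triangular membership function rising from a, peaking at b, falling to c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def sugeno_degradation(load_mgt):
    """Weighted average of two rules: low load -> slow wear, high -> fast."""
    w_low = tri(load_mgt, 0.0, 0.0, 20.0)    # 'low load' firing strength
    w_high = tri(load_mgt, 0.0, 20.0, 20.0)  # 'high load' firing strength
    z_low = 0.1 * load_mgt + 1.0             # rule consequents (linear)
    z_high = 0.3 * load_mgt + 2.0
    total = w_low + w_high
    return (w_low * z_low + w_high * z_high) / total if total else 0.0

rate = sugeno_degradation(10.0)   # illustrative degradation index
```

ANFIS wraps exactly this structure in a network whose membership and consequent parameters are tuned by backpropagation and least squares.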

Keywords: ANFIS, MGT, prediction modeling, rail track degradation

Procedia PDF Downloads 335
930 A Challenge to Acquire Serious Victims’ Locations during Acute Period of Giant Disasters

Authors: Keiko Shimazu, Yasuhiro Maida, Tetsuya Sugata, Daisuke Tamakoshi, Kenji Makabe, Haruki Suzuki

Abstract:

In this paper, we report how to acquire serious victims’ locations in the acute stage of large-scale disasters, using an emergency information network system designed by us. The background of our concept is the Great East Japan Earthquake that occurred on March 11th, 2011. Through many experiences of national crises caused by earthquakes and tsunamis, we have established advanced communication systems and advanced disaster medical response systems. However, Japan was devastated by huge tsunamis that swept a vast area of Tohoku, causing a complete breakdown of all infrastructure, including telecommunications. Therefore, we noticed that we need interdisciplinary collaboration between experts in disaster medicine, regional administrative sociology, satellite communication technology, and systems engineering. Communication of emergency information was limited, causing a serious delay in the initial rescue and medical operations. For emergency rescue and medical operations, the most important thing is to identify the number of casualties, their locations and status, and to dispatch doctors and rescue workers from multiple organizations. In the case of the Tohoku earthquake, no dispatching mechanism or decision support system existed to allocate the appropriate number of doctors and locate disaster victims. Even though the doctors and rescue workers from multiple government organizations have their own dedicated communication systems, the systems are not interoperable.

Keywords: crisis management, disaster mitigation, meshing, MGRS, military grid reference system, satellite communication system

Procedia PDF Downloads 236
929 Implementation of an Image Processing System Using Artificial Intelligence for the Diagnosis of Malaria Disease

Authors: Mohammed Bnebaghdad, Feriel Betouche, Malika Semmani

Abstract:

Image processing has become more sophisticated over time due to technological advances, especially artificial intelligence (AI). Currently, AI image processing is used in many areas, including surveillance, industry, science, and medicine. In medical image processing, AI can help doctors diagnose diseases faster, with fewer mistakes and less effort. Among these diseases is malaria, which remains a major public health challenge in many parts of the world. It affects millions of people every year, particularly in tropical and subtropical regions. Early detection of malaria is essential to prevent serious complications and reduce the burden of the disease. In this paper, we propose and implement a scheme based on AI image processing to enhance malaria diagnosis through automated analysis of blood smear images. The scheme is based on the convolutional neural network (CNN) method. We developed a model that classifies infected and uninfected single red cells using images available on Kaggle, as well as real blood smear images obtained from the Central Laboratory of Medical Biology EHS Laadi Flici (formerly El Kettar) in Algeria. The real images were segmented into individual cells using the watershed algorithm in order to match the images from the Kaggle dataset. The model was trained and tested, achieving an accuracy of 99%, and 97% on new real images. This validates that the model performs well with new real images, although with slightly lower accuracy. Additionally, the model has been embedded in a Raspberry Pi 4, and a graphical user interface (GUI) was developed to visualize the malaria diagnostic results and facilitate user interaction.
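
The trained CNN itself requires a deep learning framework, but its core building block can be sketched in a few lines: a 'valid' 2-D convolution (strictly, cross-correlation, as in most frameworks) sliding a small kernel over an image. The tiny image and edge-detecting kernel below are illustrative, not taken from the malaria model.

```python
# 'Valid' 2-D cross-correlation (the conv layer primitive): slide the
# kernel over the image with stride 1 and no padding.

def conv2d_valid(image, kernel):
    """Return the feature map as a list of lists."""
    ih, iw = len(image), len(image[0])
    kh, kw = len(kernel), len(kernel[0])
    out = []
    for r in range(ih - kh + 1):
        row = []
        for c in range(iw - kw + 1):
            row.append(sum(image[r + i][c + j] * kernel[i][j]
                           for i in range(kh) for j in range(kw)))
        out.append(row)
    return out

# A vertical-edge kernel responds where intensity jumps left-to-right:
edge = conv2d_valid([[0, 0, 1, 1],
                     [0, 0, 1, 1],
                     [0, 0, 1, 1]],
                    [[-1, 1],
                     [-1, 1]])
```

A CNN classifier stacks many such learned kernels with nonlinearities and pooling before a final classification layer.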

Keywords: medical image processing, malaria parasite, classification, CNN, artificial intelligence

Procedia PDF Downloads 19
928 Future Sustainable Mobility for Colorado

Authors: Paolo Grazioli

Abstract:

In this paper, we present the main results achieved during an eight-week international design project on Colorado Future Sustainable Mobility carried out at Metropolitan State University of Denver. The project was born with the intention of seizing the opportunity created by the Colorado government’s plan to promote e-bike mobility by creating a large network of dedicated tracks. The project was supported by local entrepreneurs who offered financial and professional support. The main goal of the project was to engage design students with the skills to design a user-centered, original vehicle that would satisfy the unarticulated practical and emotional needs of “Gen Z” users by creating a fun, useful, and reliable life companion that would help users carry out their everyday tasks in a practical and enjoyable way. The project was carried out with the intention of proving the importance of combining creative methods with practical design methodologies towards the creation of an innovative yet immediately manufacturable product for a more sustainable future. The final results demonstrate the students' capability to create innovative yet manufacturable products and, especially, their ability to create a new design paradigm for future sustainable mobility products. The design solutions explored in the project include collaborative learning and human-interaction design for future mobility. The findings of the research led students to the fabrication of two working prototypes that will be tested in Colorado and developed for manufacturing in 2024. The project showed that collaborative design and project-based teaching improve the quality of the outcome and can lead to the creation of real-life, innovative products directly from the classroom to the market.

Keywords: sustainable transportation design, interface design, collaborative design, user-centered design research, design prototyping

Procedia PDF Downloads 96
927 Prediction of Remaining Life of Industrial Cutting Tools with Deep Learning-Assisted Image Processing Techniques

Authors: Gizem Eser Erdek

Abstract:

This study investigates predicting the remaining life of industrial cutting tools used in the production process with deep learning methods. As the life of cutting tools decreases, they damage the raw material they are processing. This study aims to predict the remaining life of the cutting tool based on the damage caused by the cutting tools to the raw material. For this, hole photos were collected from the hole-drilling machine for 8 months. Photos were labeled in 5 classes according to hole quality. In this way, the problem was transformed into a classification problem. Using the prepared data set, a model was created with convolutional neural networks, a deep learning method. In addition, the VGGNet and ResNet architectures, which have been successful in the literature, were tested on the data set. A hybrid model using convolutional neural networks and support vector machines was also used for comparison. When all models were compared, the model using convolutional neural networks gave the best results, with a 74% accuracy rate. In preliminary studies in which the data set was arranged to include only the best and worst classes, the binary classification model achieved ~93% accuracy. The results of this study showed that the remaining life of cutting tools can be predicted by deep learning methods based on the damage to the raw material. Experiments have proven that deep learning methods can be used as an alternative for cutting tool life estimation.

Keywords: classification, convolutional neural network, deep learning, remaining life of industrial cutting tools, ResNet, support vector machine, VGGNet

Procedia PDF Downloads 77
926 A Review on the Hydrologic and Hydraulic Performances in Low Impact Development-Best Management Practices Treatment Train

Authors: Fatin Khalida Abdul Khadir, Husna Takaijudin

Abstract:

The bioretention system is one of the alternatives to conventional stormwater management under the low impact development (LID) strategy for best management practices (BMPs). Incorporating both filtration and infiltration, initial research on bioretention systems has shown that this practice extensively decreases runoff volumes and peak flows. The LID-BMP treatment train is one of the latest LID-BMPs for stormwater treatment in urbanized watersheds. The treatment train is developed to overcome the drawbacks that arise from conventional LID-BMPs and aims to enhance the performance of existing practices. In addition, it is also used to improve both water quality and water quantity controls, as well as to maintain the natural hydrology of an area despite massive ongoing development. The objective of this paper is to review the effectiveness of conventional LID-BMPs on hydrologic and hydraulic performance through column studies in different configurations. Previous studies on applications of the LID-BMP treatment train, developed to overcome the drawbacks of conventional LID-BMPs, are reviewed and used as guidelines for implementing this system in Universiti Teknologi Petronas (UTP) and elsewhere. Analyses of hydrologic and hydraulic performance using the artificial neural network (ANN) model are also reviewed for use in this study. In this study, the role of the LID-BMP treatment train is tested by arranging bioretention cells in series, to be implemented for controlling floods that occur currently and, in the future, once construction of the new buildings in UTP is completed. A summary of the research findings on the performance of the system is provided, including proposed modifications to the designs.

Keywords: bioretention system, LID-BMP treatment train, hydrological and hydraulic performance, ANN analysis

Procedia PDF Downloads 118
925 Crossing Multi-Source Climate Data to Estimate the Effects of Climate Change on Evapotranspiration Data: Application to the French Central Region

Authors: Bensaid A., Mostephaoui T., Nedjai R.

Abstract:

Climatic factors are the subject of considerable research, both methodologically and instrumentally. Under the effect of climate change, estimating climate parameters with precision remains one of the main objectives of the scientific community, from the perspective of assessing climate change and its repercussions on humans and the environment. However, many regions of the world suffer from a severe lack of reliable instruments. Alternatively, the use of empirical methods becomes the only way to assess certain parameters that can act as climate indicators. Several scientific methods are used for the evaluation of evapotranspiration, either directly at the climate stations or by empirical methods. All these methods take a point approach and in no case capture the spatial variation of this parameter. We therefore propose in this paper the use of three sources of information (the network of weather stations of Meteo France, world databases, and MODIS satellite images) to evaluate spatial evapotranspiration (ETP) using the Turc method. This first step will reflect the degree of relevance of the indirect (satellite) methods and their generalization to sites without stations. Representing the spatial variation of this parameter in a geographic information system (GIS) accounts for the heterogeneity of its behaviour. This heterogeneity is due to the influence of site morphological factors and makes it possible to appreciate the role of certain topographic and hydrological parameters. A phase of predicting the medium- and long-term evolution of evapotranspiration under climate change, by applying the Intergovernmental Panel on Climate Change (IPCC) scenarios, gives a realistic overview of the contribution of aquatic systems at the scale of the region.
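
The Turc method referenced above has a common daily form (hedged here, since several variants and unit conventions exist): ETP = 0.013 * T/(T + 15) * (Rs + 50) mm/day for relative humidity above 50%, with an aridity correction below 50%, where T is mean air temperature in deg C and Rs is solar radiation in cal/cm^2/day. A sketch:

```python
# One common daily form of the Turc (1961) potential evapotranspiration
# formula; verify the exact variant and radiation units used locally.

def turc_etp(t_mean, rs, rh):
    """ETP in mm/day: t_mean in deg C, rs in cal/cm^2/day, rh in %."""
    if t_mean <= 0:
        return 0.0                               # formula not defined for T <= 0
    etp = 0.013 * (t_mean / (t_mean + 15.0)) * (rs + 50.0)
    if rh < 50.0:                                # aridity correction
        etp *= 1.0 + (50.0 - rh) / 70.0
    return etp

etp_humid = turc_etp(t_mean=20.0, rs=400.0, rh=65.0)   # illustrative inputs
```

Computed per pixel from MODIS radiation and station temperature fields, this is the kind of quantity a GIS can then map spatially.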

Keywords: climate change, ETP, MODIS, IPCC scenarios

Procedia PDF Downloads 100
924 Analyzing Environmental Emotive Triggers in Terrorist Propaganda

Authors: Travis Morris

Abstract:

The purpose of this study is to measure the intersection of environmental security entities in terrorist propaganda. To the best of the author’s knowledge, this is the first study of its kind to examine this intersection within terrorist propaganda. Rosoka natural language processing software and frame analysis are used to advance our understanding of how environmental frames function as emotive triggers. Violent jihadi demagogues use frames to suggest violent and non-violent solutions to their grievances. Emotive triggers are framed in a way that leverages individual and collective attitudes in psychological warfare. A comparative research design is used because of the differences and similarities that exist between two variants of violent jihadi propaganda that target western audiences. Analysis is based on salience and network text analysis, which generates violent jihadi semantic networks. Findings indicate that environmental frames are used as emotive triggers across both data sets, but also as tactical and informational data points. A significant finding is that certain core environmental emotive triggers like “water,” “soil,” and “trees” are significantly salient at the aggregate level across both data sets. All environmental entities can be classified into two categories, symbolic and literal. Importantly, this research illustrates how demagogues use environmental emotive triggers in cyberspace, from a subcultural perspective, to mobilize target audiences to their ideology and praxis. Understanding the anatomy of propaganda construction is necessary in order to generate effective counter-narratives in information operations. This research advances an additional method to inform practitioners and policy makers of how environmental security and propaganda intersect.
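
A toy sketch of the salience and co-occurrence counting behind such semantic networks: count how often each entity appears, and how often pairs of entities share a text unit. The three example 'documents' below are invented, not drawn from the propaganda corpora.

```python
# Entity salience + co-occurrence edges over toy "documents" (sets of
# extracted entities per text unit). Edges keyed by sorted entity pair.

from collections import Counter
from itertools import combinations

docs = [
    {"water", "soil", "land"},
    {"water", "trees", "soil"},
    {"water", "enemy"},
]

# salience: how often each entity appears across units
salience = Counter(e for doc in docs for e in doc)

# network edges: co-occurrence counts within the same unit
edges = Counter(tuple(sorted(pair))
                for doc in docs
                for pair in combinations(sorted(doc), 2))

top_entity = salience.most_common(1)[0][0]
```

Real pipelines weight edges, prune rare entities, and compute centrality on the resulting graph; the counting step is the same idea.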

Keywords: propaganda analysis, emotive triggers, environmental security, frames

Procedia PDF Downloads 138
923 Internal Evaluation of Architecture University Department in Architecture Engineering Bachelor's Level: A Case from Iran

Authors: Faranak Omidian

Abstract:

This study was carried out to examine the status of the architecture department at the bachelor’s level of architectural engineering at the Islamic Azad University of Dezful in the 2012-13 academic year. The present research is a descriptive cross-sectional study, descriptive and analytical in terms of measurement, carried out in 7 steps and in 7 areas with 32 criteria and 169 indicators. The sample includes 201 students, 14 faculty members, 72 graduates, and 39 employers. Simple random sampling, complete enumeration, and network (snowball) sampling were used for students, faculty members, and graduates, respectively. All respondents answered the questions. After data collection, the findings were ranked on a Likert scale from desirable to undesirable, with scores ranging from 1 to 3. The results showed that the department, with a score of 1.88 in regard to objectives, organizational status, management, and organization, a score of 2 in relation to students, and a score of 1.8 in regard to faculty members, was in a relatively desirable status. Regarding training courses and curriculum, it gained a score of 2.33, which indicates a desirable status in this respect. It gained scores of 1.75, 2, and 1.8 with respect to educational and research facilities and equipment, teaching and learning strategies, and graduates, respectively, all of which show a relatively desirable status. Overall, the department of architecture, with an average score of 2.14 across all evaluated areas, was in a desirable situation. Therefore, although the department generally has a desirable status, it needs to put in more effort to tackle its weaknesses and correct its defects in order to promote educational quality and reach the desirable level in all areas.

Keywords: internal evaluation, architecture department, Islamic Azad University, Dezful

Procedia PDF Downloads 444
922 Impacts of Hydrologic and Topographic Changes on Water Regime Evolution of Poyang Lake, China

Authors: Feng Huang, Carlos G. Ochoa, Haitao Zhao

Abstract:

Poyang Lake, the largest freshwater lake in China, is located in the middle-lower reaches of the Yangtze River basin. It has great value in socioeconomic development and is internationally recognized as an important lacustrine and wetland ecosystem with abundant biodiversity. Impacted by ongoing climate change and anthropogenic activities, especially the regulation of the Three Gorges Reservoir since 2003, Poyang Lake has experienced significant water regime evolution, resulting in challenges for the management of water resources and the environment. Quantifying the contributions of hydrologic and topographic changes to water regime alteration is necessary for policymakers to design effective adaptation strategies. Long-term hydrologic data were collected, and back-propagation neural networks were constructed to simulate the lake water level. The impacts of hydrologic and topographic changes were differentiated through scenario analysis that considered pre-impact and post-impact hydrologic and topographic scenarios. The lake water regime was characterized by hydrologic indicators that describe monthly water level fluctuations, hydrologic features during flood and drought seasons, and the frequency and rate of hydrologic variations. The results revealed different contributions of hydrologic and topographic changes to different features of the lake water regime. Noticeable changes were that the water level declined dramatically during the period of reservoir impoundment, and drought was enhanced during the dry season. The hydrologic and topographic changes exerted a synergistic or antagonistic effect on different lake water regime features. The findings provide a scientific reference for lacustrine and wetland ecological protection associated with water regime alterations.
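
The scenario analysis described above can be sketched as factor differencing: with a calibrated water level model f(hydrology, topography), each driver's contribution is isolated by swapping one factor at a time between pre-impact and post-impact values. The linear toy model and numbers below merely stand in for the trained back-propagation network.

```python
# Scenario-differencing sketch: isolate hydrologic vs topographic
# contributions to a water level change. The toy linear model and all
# numbers are invented stand-ins for the trained neural network.

def toy_level(inflow, bed_elevation):
    """Stand-in for the calibrated model f(hydrology, topography)."""
    return 0.002 * inflow + 0.8 * bed_elevation

pre_inflow, post_inflow = 5000.0, 4000.0   # hypothetical m^3/s
pre_bed, post_bed = 12.0, 11.0             # hypothetical m

baseline = toy_level(pre_inflow, pre_bed)
hydro_effect = toy_level(post_inflow, pre_bed) - baseline   # swap hydrology only
topo_effect = toy_level(pre_inflow, post_bed) - baseline    # swap topography only
combined = toy_level(post_inflow, post_bed) - baseline
interaction = combined - (hydro_effect + topo_effect)       # synergy/antagonism
```

With a linear stand-in the interaction term vanishes; a nonlinear model like the trained network is exactly where a nonzero interaction (the synergistic or antagonistic effect the abstract reports) would show up.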

Keywords: back-propagation neural network, scenario analysis, water regime, Poyang Lake

Procedia PDF Downloads 139
921 Seismological Studies in Some Areas in Egypt

Authors: Gamal Seliem, Hassan Seliem

Abstract:

The Aswan area is one of the most important areas in Egypt because it encompasses the vital engineering structure of the High Dam, so it was selected for the present study. The study of crustal deformation and gravity changes associated with earthquake activity in the High Dam area is of great importance for the safety of the High Dam and its economic resources. This paper deals with the use of micro-gravity, precise leveling, and GPS data for geophysical and geodetic studies. A detailed gravity survey network was established in the area for studying the subsurface structures. To study the recent vertical movements, a profile of 10 km length joining the High Dam and the Aswan old dam was established along the road connecting the two dams. This profile consists of 35 GPS/leveling stations extending along the two sides of the road and on the High Dam body. Precise leveling was carried out together with GPS and repeated micro-gravity surveys at the same time. A GPS network consisting of nine stations was established for studying the recent crustal movements. Many campaigns from December 2001 to December 2014 were performed to collect the gravity, leveling, and GPS data. The main aim of this work is to study the structural features and the behavior of the area, as depicted from repeated micro-gravity, precise leveling, and GPS measurements. The gravity results investigate the subsurface geologic structures and reveal minor structural features and anomalies trending in W-E and N-S directions. The geodetic results indicated low rates of vertical and horizontal displacement and low strain values, which may be related to the stability of the area.

Keywords: repeated micro-gravity changes, precise leveling, GPS data, Aswan High Dam

Procedia PDF Downloads 448
920 Ultra-Reliable Low Latency V2X Communication for Express Way Using Multiuser Scheduling Algorithm

Authors: Vaishali D. Khairnar

Abstract:

The main aim of this work is to provide low-latency and highly reliable communication facilities for vehicles; vehicle-to-everything (V2X) communication is intended to increase expressway road safety and effectiveness. The Ultra-Reliable Low-Latency Communications (URLLC) approach is applied in cellular networks in combination with Mobile Broadband (MBB), particularly for expressway safety-driving applications. Expressway vehicles will communicate in V2X systems using sixth-generation (6G) communication systems, which support very high-speed mobility. As a result, ensuring reliable and consistent wireless communication links and improving link quality to increase channel gain becomes a challenge that needs to be addressed. To overcome this challenge, we propose a unique multi-user scheduling algorithm for ultra-massive multiple-input multiple-output (MIMO) systems using 6G. In wideband wireless network access under high and medium traffic conditions, offering quality-of-service (QoS) to distinct service groups with synchronized contemporaneous traffic on a highway such as the Mumbai-Pune expressway becomes a critical problem. Opportunistic MAC (OMAC) schedules communication across a wireless link that varies in space and time and can overcome the above-mentioned challenge. Therefore, a multi-user scheduling algorithm is proposed for MIMO systems using a cross-layered MAC protocol to achieve URLLC and high reliability in V2X communication.
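A minimal sketch of the opportunistic (channel-aware) multi-user scheduling idea behind OMAC: in each slot the scheduler serves the user with the strongest instantaneous channel, harvesting multiuser diversity over a round-robin baseline. The Rayleigh-fading model, user count, and average SNR below are illustrative assumptions, not the paper's setup.

```python
import numpy as np

rng = np.random.default_rng(1)
n_users, n_slots = 8, 5000
avg_snr = 5.0

# Rayleigh fading -> exponentially distributed channel power gains
gains = rng.exponential(1.0, size=(n_slots, n_users))

def rate(snr):
    return np.log2(1.0 + snr)          # Shannon spectral efficiency

# Opportunistic max-SNR scheduler: serve the strongest user each slot
opp = rate(avg_snr * gains.max(axis=1)).mean()

# Round-robin baseline: serve users in fixed order, ignoring channel state
rr_idx = np.arange(n_slots) % n_users
rr = rate(avg_snr * gains[np.arange(n_slots), rr_idx]).mean()

print(opp, rr)
```

The gap between the two averages is the multiuser diversity gain; a practical V2X scheduler would additionally weigh latency deadlines and fairness rather than raw SNR alone.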

Keywords: ultra-reliable low latency communications, vehicle-to-everything communication, multiple-input multiple-output systems, multi-user scheduling algorithm

Procedia PDF Downloads 88
919 Use of Geosynthetics as Reinforcement Elements in Unpaved Tertiary Roads

Authors: Vivian A. Galindo, Maria C. Galvis, Jaime R. Obando, Alvaro Guarin

Abstract:

In Colombia, most roads of the national tertiary road network are unpaved roads with a granular rolling surface. These roads are vital for guaranteeing the mobility of people, products, and inputs from the agricultural sector from the most remote areas to urban centers; however, little attention has been paid to the search for alternatives that avoid the deterioration that occurs shortly after their commissioning. In recent years, geosynthetics have been used satisfactorily to reinforce unpaved roads on soft soils, with geotextiles and geogrids being the most widely used. The interaction of the geogrid and the aggregate minimizes the lateral movement of the aggregate particles and increases the load capacity of the material, which leads to a better distribution of the vertical stresses and consequently reduces the vertical deformations in the subgrade. Taking into account the above, the research examined the mechanical behavior of the granular material used in unpaved roads, with and without the presence of geogrids, through laboratory tests in the loaded wheel tester (LWT). For comparison purposes, the reinforcement and traffic conditions to which this type of material can be subjected in practice were simulated. In total, four types of geogrids were tested with the granular material; this means that five test sets were evaluated: the four reinforced materials and a non-reinforced control sample. The numbers of load cycles and the rut depths supported by each test body showed the influence of the reinforcement properties on the mechanical behavior of the assembly, with significant increases in the number of load cycles of the reinforced specimens relative to those without reinforcement.

Keywords: geosynthetics, loaded wheel tester (LWT), tertiary roads, unpaved road, vertical deformation

Procedia PDF Downloads 250
918 Efficiency of a Molecularly Imprinted Polymer for Selective Removal of Chlorpyrifos from Water Samples

Authors: Oya A. Urucu, Aslı B. Çiğil, Hatice Birtane, Ece K. Yetimoğlu, Memet Vezir Kahraman

Abstract:

Chlorpyrifos is an organophosphorus pesticide which can be found in environmental water samples. The efficiency and reusability of a molecularly imprinted polymer (chlorpyrifos-MIP) were investigated for the selective removal of chlorpyrifos residues. The MIP was prepared with UV-curing thiol-ene polymerization technology using multifunctional thiol and ene monomers. The thiol-ene curing reaction is a radical-induced process; however, unlike other photoinitiated polymerization processes, it is a free-radical reaction that proceeds by a step-growth mechanism involving two main steps: a free-radical addition followed by a chain-transfer reaction. It assures very rapid formation of a uniform crosslinked network with low shrinkage, reduced oxygen inhibition during curing, and excellent adhesion. In this study, thiol-ene based UV-curable polymeric materials were prepared by mixing pentaerythritol tetrakis(3-mercaptopropionate), glyoxal bis(diallyl acetal), polyethylene glycol diacrylate (PEGDA), and a photoinitiator. Chlorpyrifos was added at a definite ratio to the prepared formulation. The chemical structure and thermal properties were characterized by FTIR and thermogravimetric analysis (TGA), respectively. The pesticide analysis was performed by gas chromatography-mass spectrometry (GC-MS). The influence of analytical parameters such as pH, sample volume, and analyte concentration was studied for the quantitative recovery of the analyte. The proposed MIP method was applied to the determination of chlorpyrifos in river and tap water samples. The use of the MIP provided a selective and easy solution for removing chlorpyrifos from water.

Keywords: molecularly imprinted polymers, selective removal, thiol-ene, UV-curable polymer

Procedia PDF Downloads 301
917 The Event of Extreme Precipitation Occurred in the Metropolitan Mesoregion of the Capital of Para

Authors: Natasha Correa Vitória Bandeira, Lais Cordeiro Soares, Claudineia Brazil, Luciane Teresa Salvi

Abstract:

The intense rain event that occurred between February 16 and 18, 2018, in the city of Barcarena in Pará, located in the North region of Brazil, demonstrates the importance of analyzing this type of event. The metropolitan mesoregion of Belém was severely affected by rains far above the averages normally expected for that time of year; the phenomenon affected, in addition to the capital, the municipalities of Barcarena, Murucupi, and Muruçambá. It resulted in a great flood in the rivers of the region, whose basins received intense precipitation, causing concern for the local population because companies that accumulate ore tailings are located in this region; in this specific case, the tailings dam of one of these companies leached ore into the water bodies of the Murucupi River Basin. This article aims to characterize the phenomenon through a spatial analysis of the rainfall distribution, using data from atmospheric soundings, satellite images, radar images, and the GPCP (Global Precipitation Climatology Project), in addition to rainfall stations located in the study region. The results demonstrated a dissociation between the data measured at the meteorological stations and the other forms of analysis of this extreme event. Monitoring based solely on data from rain-gauge stations is not sufficient for monitoring and diagnosing extreme weather events, and investment by the competent bodies is needed to install a rain-gauge network large enough to meet the demand in a given region.

Keywords: extreme precipitation, great flood, GPCP, ore dam

Procedia PDF Downloads 107
916 Cybersecurity Challenges in the Era of Open Banking

Authors: Krish Batra

Abstract:

The advent of open banking has revolutionized the financial services industry by fostering innovation, enhancing customer experience, and promoting competition. However, this paradigm shift towards more open and interconnected banking ecosystems has introduced complex cybersecurity challenges. This research paper delves into the multifaceted cybersecurity landscape of open banking, highlighting the vulnerabilities and threats inherent in sharing financial data across a network of banks and third-party providers. Through a detailed analysis of recent data breaches, phishing attacks, and other cyber incidents, the paper assesses the current state of cybersecurity within the open banking framework. It examines the effectiveness of existing security measures, such as encryption, API security protocols, and authentication mechanisms, in protecting sensitive financial information. Furthermore, the paper explores the regulatory response to these challenges, including the implementation of standards such as PSD2 in Europe and similar initiatives globally. By identifying gaps in current cybersecurity practices, the research aims to propose a set of robust, forward-looking strategies that can enhance the security and resilience of open banking systems. This includes recommendations for banks, third-party providers, regulators, and consumers on how to mitigate risks and ensure a secure open banking environment. The ultimate goal is to provide stakeholders with a comprehensive understanding of the cybersecurity implications of open banking and to outline actionable steps for safeguarding the financial ecosystem in an increasingly interconnected world.

Keywords: open banking, financial services industry, cybersecurity challenges, data breaches, phishing attacks, encryption, API security protocols, authentication mechanisms, regulatory response, PSD2, cybersecurity practices

Procedia PDF Downloads 60
915 Signs, Signals and Syndromes: Algorithmic Surveillance and Global Health Security in the 21st Century

Authors: Stephen L. Roberts

Abstract:

This article offers a critical analysis of the rise of syndromic surveillance systems for the advanced detection of pandemic threats within contemporary global health security frameworks. The article traces the iterative evolution and ascendancy of three such novel syndromic surveillance systems for the strengthening of health security initiatives over the past two decades: 1) The Program for Monitoring Emerging Diseases (ProMED-mail); 2) The Global Public Health Intelligence Network (GPHIN); and 3) HealthMap. This article demonstrates how each newly introduced syndromic surveillance system has become increasingly oriented towards the integration of digital algorithms into core surveillance capacities to continually harness and forecast upon infinitely generating sets of digital, open-source data, potentially indicative of forthcoming pandemic threats. This article argues that the increased centrality of the algorithm within these next-generation syndromic surveillance systems produces a new and distinct form of infectious disease surveillance for the governing of emergent pathogenic contingencies. Conceptually, the article also shows how the rise of this algorithmic mode of infectious disease surveillance produces divergences in the governmental rationalities of global health security, leading to the rise of an algorithmic governmentality within contemporary contexts of Big Data and these surveillance systems. Empirically, this article demonstrates how this new form of algorithmic infectious disease surveillance has been rapidly integrated into diplomatic, legal, and political frameworks to strengthen the practice of global health security – producing subtle, yet distinct shifts in the outbreak notification and reporting transparency of states, increasingly scrutinized by the algorithmic gaze of syndromic surveillance.
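One classical building block of such algorithmic syndromic surveillance is a statistical aberration detector run continuously over streams of case counts. The sketch below applies a CUSUM-style detector to synthetic daily syndrome counts with an injected outbreak week; the data, baseline, and thresholds are illustrative assumptions, not drawn from ProMED-mail, GPHIN, or HealthMap.

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic daily counts of one syndrome (e.g., influenza-like-illness
# reports): a stable baseline followed by an injected outbreak week.
baseline = rng.poisson(20, size=60).astype(float)
outbreak = rng.poisson(40, size=7).astype(float)
counts = np.concatenate([baseline, outbreak])

mu, sigma = baseline.mean(), baseline.std(ddof=1)
k, h = 0.5, 4.0            # reference value and decision threshold (in SDs)

s, alarms = 0.0, []
for t, c in enumerate(counts):
    # accumulate standardized exceedances above the baseline mean
    s = max(0.0, s + (c - mu) / sigma - k)
    if s > h:
        alarms.append(t)   # flag day t for analyst review
        s = 0.0            # reset after an alarm

print(alarms)
```

Systems of the kind discussed here layer such detectors over text-mined, open-source signal streams rather than a single clean count series, which is where the algorithmic-governance questions the article raises become acute.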

Keywords: algorithms, global health, pandemic, surveillance

Procedia PDF Downloads 185
914 Supervisory Controller with Three-State Energy Saving Mode for Induction Motor in Fluid Transportation

Authors: O. S. Ebrahim, K. O. Shawky, M. O. S. Ebrahim, P. K. Jain

Abstract:

An induction motor (IM) driving a pump is the main consumer of electricity in a typical fluid transportation system (FTS). It has been shown that changing the connection of the stator windings from delta to star at no load can achieve noticeable active and reactive energy savings. This paper proposes a supervisory hysteresis liquid-level control with a three-state energy-saving mode (ESM) for the IM in an FTS that includes a storage tank. The IM pump drive comprises a modified star/delta switch and a hydrodynamic coupler. The three-state ESM is defined alongside normal running and named by analogy with computer ESMs as follows: a sleep mode in which the motor runs at no load with the delta stator connection, a hibernate mode in which the motor runs at no load with the star connection, and motor shutdown as the third energy-saver state. A logic flow chart is synthesized to select the motor state at no load for the best energy cost reduction, considering the motor thermal capacity used. An artificial neural network (ANN) state estimator, based on a recurrent architecture, is constructed and trained in order to provide fault-tolerant capability for the supervisory controller. Wald's sequential test is used for sensor fault detection. Theoretical analysis, preliminary experimental testing, and computer simulations are performed to show the effectiveness of the proposed control in terms of reliability, power quality, and energy/coenergy cost reduction, with the suggestion of power factor correction.
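A hedged sketch of the three-state no-load selection logic described above: the supervisor weighs idle losses in each state against the cost of restarting, subject to a thermal-capacity guard. The loss and cost figures are illustrative assumptions, not the paper's measured values; only the decision structure mirrors the text.

```python
# Assumed figures for illustration only
SLEEP_LOSS_KW = 1.2       # no-load loss with delta connection
HIBERNATE_LOSS_KW = 0.4   # no-load loss with star connection
SWITCH_COST_KWH = 0.05    # star/delta reconnection cost and wear
RESTART_COST_KWH = 0.3    # full restart energy and wear cost
THERMAL_LIMIT = 0.9       # usable fraction of motor thermal capacity

def select_no_load_state(idle_hours, thermal_used):
    """Pick the cheapest energy-saving state for a predicted idle interval."""
    if thermal_used > THERMAL_LIMIT:
        return "shutdown"              # protect the motor regardless of cost
    costs = {
        "sleep": SLEEP_LOSS_KW * idle_hours,
        "hibernate": HIBERNATE_LOSS_KW * idle_hours + SWITCH_COST_KWH,
        "shutdown": RESTART_COST_KWH,
    }
    return min(costs, key=costs.get)

print(select_no_load_state(0.01, 0.5))   # very short idle -> sleep
print(select_no_load_state(0.3, 0.5))    # moderate idle -> hibernate
print(select_no_load_state(5.0, 0.5))    # long idle -> shutdown
```

Each state wins over a different range of predicted idle time, which is exactly why the supervisor needs the level hysteresis and state estimator described in the abstract.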

Keywords: ANN, ESM, IM, star/delta switch, supervisory control, FTS, reliability, power quality

Procedia PDF Downloads 193
913 Numerical and Experimental Investigation of Fracture Mechanism in Paintings on Wood

Authors: Mohammad Jamalabadi, Noemi Zabari, Lukasz Bratasz

Abstract:

Panel paintings -complex multi-layer structures consisting of wood support and a paint layer composed of a preparatory layer of gesso, paints, and varnishes- are among the category of cultural objects most vulnerable to relative humidity fluctuations and frequently found in museum collections. The current environmental specifications in museums have been derived using the criterion of crack initiation in an undamaged, usually new gesso layer laid on wood. In reality, historical paintings exhibit complex crack patterns called craquelures. The present paper analyses the structural response of a paint layer with a virtual network of rectangular cracks under environmental loadings using a three-dimensional model of a panel painting. Two modes of loading are considered -one induced by one-dimensional moisture response of wood support, termed the tangential loading, and the other isotropic induced by drying shrinkage of the gesso layer. The superposition of the two modes is also analysed. The modelling showed that minimum distances between cracks parallel to the wood grain depended on the gesso stiffness under the tangential loading. In spite of a non-zero Poisson’s ratio, gesso cracks perpendicular to the wood grain could not be generated by the moisture response of wood support. The isotropic drying shrinkage of gesso produced cracks that were almost evenly spaced in both directions. The modelling results were cross-checked with crack patterns obtained on a mock-up of a panel painting exposed to a number of extreme environmental variations in an environmental chamber.

Keywords: fracture saturation, surface cracking, paintings on wood, wood panels

Procedia PDF Downloads 267
912 The Integration Challenges of Women Refugees in Sweden from Socio-Cultural Perspective

Authors: Khadijah Saeed Khan

Abstract:

One of the major current societal issues in Sweden is integrating newcomer refugees well into the host society. Cultural integration is one of the under-debated topics in the literature, and this study intends to address this gap from the Swedish perspective. The purpose of this study is to explore the role and types of cultural landscapes of refugee women in Sweden and how these landscapes help or hinder the settlement process. Cultural landscapes are understood as a set of multiple cultural activities or practices which refugees perform in a specific context and circumstances (i.e., being in a new country) to seek, share, or use relevant information for their settlement. Information plays a vital role in various aspects of newcomers' lives in a new country. This article intends to highlight the importance of multiple cultural landscapes as a source of information (regarding employment, language learning, finding accommodation, immigration matters, health concerns, school and education, family matters, and other everyday matters) for refugees settling in Sweden. Relevant theories, such as information landscapes and socio-cultural theories, are considered in this study. A qualitative research design is employed, including semi-structured in-depth interviews and participatory observation with 20 participants. The initial findings show that the refugee women encounter many information-related and integration-related challenges in Sweden and have built a network of cultural landscapes in which they practice various co-ethnic cultural and religious activities at different times of the year. These landscapes help them build a sense of belonging with people from their own or similar lands and assist them in seeking and sharing relevant information in everyday life in Sweden.

Keywords: cultural integration, cultural landscapes, information, women refugees

Procedia PDF Downloads 142
911 Optimum Dewatering Network Design Using Firefly Optimization Algorithm

Authors: S. M. Javad Davoodi, Mojtaba Shourian

Abstract:

A groundwater table close to the ground surface causes major problems in construction and mining operations. One of the methods to control groundwater in such cases is pumping wells, which remove excess water from the project site and lower the water table to a desirable level. Although the efficiency of this method is acceptable, it is expensive to apply; hence, even a small improvement in the design of the pumping wells can lead to substantial cost savings. In order to minimize the total cost of the pumping-well scheme, a simulation-optimization approach is applied. The proposed model integrates MODFLOW as the simulation model with the firefly algorithm as the optimizer: MODFLOW computes the drawdown due to pumping in an aquifer, and the firefly algorithm finds the optimum values of the design parameters, which are the number, pumping rates, and layout of the wells. The developed Firefly-MODFLOW model is applied to minimize the cost of the dewatering project for the ancient mosque of Kerman city in Iran. Repetitive runs of the Firefly-MODFLOW model indicate that drilling two wells with a total pumping rate of 5503 m³/day solves the minimization problem. Results show that implementing the proposed solution leads to at least 1.5 m of drawdown in the aquifer beneath the mosque region, while the subsidence due to groundwater depletion is less than 80 mm. Sensitivity analyses indicate that the target groundwater drawdown has an enormous impact on the total cost of the project. Besides, in a hypothetical aquifer, decreasing the hydraulic conductivity decreases the total water extraction required for dewatering.
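A hedged sketch of the firefly algorithm on a stand-in quadratic cost function; in the study, each cost evaluation would instead be a MODFLOW run scoring a candidate well design (number, rates, layout). The swarm size, attraction parameters, and bounds here are illustrative defaults, not the paper's settings.

```python
import numpy as np

rng = np.random.default_rng(3)

def cost(x):
    return float(np.sum(x ** 2))       # placeholder for the dewatering cost

n, dim, iters = 15, 2, 200
alpha, beta0, gamma = 0.2, 1.0, 1.0    # random step, attraction, absorption
pop = rng.uniform(-5.0, 5.0, size=(n, dim))

hist, best_so_far = [], np.inf
for it in range(iters):
    f = np.array([cost(x) for x in pop])
    hist.append(float(f.min()))
    best_so_far = min(best_so_far, hist[-1])
    for i in range(n):
        for j in range(n):
            if f[j] < f[i]:            # firefly i is attracted to brighter j
                r2 = float(np.sum((pop[i] - pop[j]) ** 2))
                beta = beta0 * np.exp(-gamma * r2)
                pop[i] += beta * (pop[j] - pop[i]) \
                    + alpha * (rng.random(dim) - 0.5)
    alpha *= 0.97                      # anneal the random-walk step

print(best_so_far)
```

Coupling this loop to a groundwater model simply means replacing `cost` with a function that writes the candidate well design to the model input, runs the simulation, and returns drilling plus pumping cost penalized by any unmet drawdown target.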

Keywords: groundwater dewatering, pumping wells, simulation-optimization, MODFLOW, firefly algorithm

Procedia PDF Downloads 294
910 Fabrication and Characterization Analysis of La-Sr-Co-Fe-O Perovskite Hollow Fiber Catalyst for Oxygen Removal in Landfill Gas

Authors: Seong Woon Lee, Soo Min Lim, Sung Sik Jeong, Jung Hoon Park

Abstract:

The atmospheric concentration of greenhouse gases (GHGs) is increasing continuously as a result of the combustion of fossil fuels and industrial development. In response to this trend, much research has been conducted on GHG reduction. Landfill gas (LFG) is one of the largest sources of GHG emissions, contains methane (CH₄) as a major constituent, and can be considered a renewable energy source as well. In order to feed LFG into the city gas pipe network, a process for removing impurities is required. In particular, oxygen must be removed because it can cause corrosion of pipes and engines. In this study, methane oxidation was used to eliminate oxygen from LFG, and a perovskite-type ceramic catalyst of La-Sr-Co-Fe-O composition was selected. Hollow fiber catalysts (HFCs) have attracted attention as a new alternative because they have high specific surface area and mechanical strength compared to other types of catalysts. The HFC was prepared by a phase-inversion/sintering technique using commercial La-Sr-Co-Fe-O powder. In order to measure the catalyst's activity, simulated LFG was used as the feed gas, and the complete oxidation reaction of methane was confirmed. The pore structure of the HFC was confirmed by SEM imaging, and its single-phase perovskite structure was verified by XRD. In addition, TPR analysis was performed to verify the oxygen adsorption mechanism of the HFC. Acknowledgement: The project is supported by the 'Global Top Environment R&D Program' in the 'R&D Center for reduction of Non-CO₂ Greenhouse gases' (Development and demonstration of oxygen removal technology of landfill gas) funded by the Korea Ministry of Environment (ME).

Keywords: complete oxidation, greenhouse gas, hollow fiber catalyst, landfill gas, oxygen removal, perovskite catalyst

Procedia PDF Downloads 117
909 Performance Evaluation of Routing Protocol in Cognitive Radio with Multi Technological Environment

Authors: M. Yosra, A. Mohamed, T. Sami

Abstract:

Over the past few years, mobile communication technologies have evolved significantly, which has promoted the implementation of many systems in a multi-technological setting. From one system generation to the next, the Quality of Service (QoS) provided to mobile consumers improves. The growing number of normalized standards extends the services available to each consumer; moreover, most of the available radio frequencies have already been allocated to systems such as 3G, Wi-Fi, WiMAX, and LTE. A study by the Federal Communications Commission (FCC) found that certain frequency bands are only partially occupied at particular locations and times. The idea of Cognitive Radio (CR) is therefore to share the spectrum between a primary user (PU) and a secondary user (SU). The main objective of this spectrum management is to maximize the exploitation of the radio spectrum. In general, CR can greatly improve QoS and the reliability of the link. The problem resides in proposing a technique to improve the reliability of the wireless link by using CR with some routing protocols, since users have reported unreliable links that are incompatible with QoS requirements. In our case, we choose the QoS parameter "bandwidth" to perform a supervised classification. In this paper, we propose a comparative study between several routing protocols, taking into account the variation of different technologies over the existing spectral bandwidth, namely 3G, Wi-Fi, WiMAX, and LTE. The simulation results show that LTE has significantly higher available bandwidth compared with the other technologies. The performance of the OLSR protocol is better than that of the other routing protocols (DSR, AODV, and DSDV) under LTE, owing to higher packet delivery, lower packet drop, and better throughput. Numerous simulations of the routing protocols were made using the NS3 simulator.

Keywords: cognitive radio, multi technology, network simulator (NS3), routing protocol

Procedia PDF Downloads 63
908 Innovative Predictive Modeling and Characterization of Composite Material Properties Using Machine Learning and Genetic Algorithms

Authors: Hamdi Beji, Toufik Kanit, Tanguy Messager

Abstract:

This study aims to construct a predictive model proficient in foreseeing the linear elastic and thermal characteristics of composite materials, drawing on a multitude of influencing parameters. These parameters encompass the shape of inclusions (circular, elliptical, square, triangular), their spatial coordinates within the matrix, orientation, volume fraction (ranging from 0.05 to 0.4), and variations in contrast (spanning from 10 to 200). A variety of machine learning techniques are deployed, including decision trees, random forests, support vector machines, k-nearest neighbors, and an artificial neural network (ANN). Moreover, this research goes beyond prediction by delving into an inverse analysis using genetic algorithms, with the intent of unveiling the intrinsic characteristics of composite materials by evaluating their thermomechanical responses. The foundation of this research is a comprehensive database that accounts for the array of input parameters mentioned earlier. This database, enriched with the diversity of input variables, serves as the bedrock for the creation of the machine learning and genetic algorithm-based models. These models are meticulously trained not only to predict but also to elucidate the mechanical and thermal behavior of composite materials. The coupling of machine learning and genetic algorithms has proven highly effective, yielding predictions with remarkable accuracy, with scores ranging between 0.97 and 0.99. This achievement demonstrates the potential of this innovative approach in the field of materials engineering.
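A hedged sketch of the genetic-algorithm inverse analysis: recover an inclusion volume fraction and contrast from two "measured" effective responses. The forward model here is a simple Voigt/Reuss surrogate standing in for the study's trained ML predictors; the parameter bounds follow the stated ranges (0.05-0.4 and 10-200), while the GA settings are illustrative.

```python
import numpy as np

rng = np.random.default_rng(4)

def forward(vf, c):
    e_eff = (1.0 - vf) + vf * c              # Voigt (rule-of-mixtures) bound
    k_eff = 1.0 / ((1.0 - vf) + vf / c)      # Reuss (harmonic-mean) bound
    return e_eff, k_eff

TRUE_VF, TRUE_C = 0.25, 50.0                 # parameters the GA must recover
E_T, K_T = forward(TRUE_VF, TRUE_C)

def fitness(ind):
    e, k = forward(ind[0], ind[1])
    return (e - E_T) ** 2 + (k - K_T) ** 2   # response mismatch to minimize

LO, HI = np.array([0.05, 10.0]), np.array([0.40, 200.0])
pop = rng.uniform(LO, HI, size=(40, 2))
sigma = (HI - LO) * 0.1
hist = []
for gen in range(200):
    f = np.array([fitness(p) for p in pop])
    order = np.argsort(f)
    hist.append(float(f[order[0]]))
    elite = pop[order[:10]]                  # truncation selection
    pa = elite[rng.integers(0, 10, 39)]      # parents for blend crossover
    pb = elite[rng.integers(0, 10, 39)]
    w = rng.random((39, 1))
    children = w * pa + (1.0 - w) * pb + rng.normal(0.0, sigma, (39, 2))
    children = np.clip(children, LO, HI)     # respect the parameter bounds
    pop = np.vstack([elite[:1], children])   # elitism keeps the best found
    sigma *= 0.98                            # anneal mutation width

best = pop[0]
print(best, hist[-1])
```

In the study's pipeline, `forward` would be the trained ML predictor rather than a closed-form bound, so each fitness call stays cheap even though the underlying micromechanics is expensive.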

Keywords: machine learning, composite materials, genetic algorithms, mechanical and thermal properties

Procedia PDF Downloads 54