Search results for: partition metric
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 408

168 Back Extraction and Isolation of Alkaloids from Ionic Liquid-Based Extracts

Authors: Rozalina Keremedchieva, Ivan Svinyarov, Milen G. Bogdanov

Abstract:

In continuation of a research project on the application of ionic liquids (ILs) as an alternative to the conventional organic solvents used in the recovery of value-added chemicals of industrial interest [1-3], we developed a procedure for the back extraction and isolation in pure form of the biologically active alkaloid glaucine from IL-based aqueous solutions. One of the approaches applied was the formation of two-phase systems (IL-ATPS) by the addition of kosmotropic salts to the plant extract. The ability of the salts (Na2CO3, MgSO4, (NH4)2SO4, NaH2PO4) to induce the formation of two-phase systems and the influence of the pH value on the partition coefficients of glaucine were comprehensively studied. As a result, it was found that the target alkaloid partitions preferentially into the IL-rich phase regardless of the pH value of the medium, which shows the inapplicability of this approach for the isolation of the target compound from the ionic liquid. However, the results obtained can be used as a platform for the development of an analytical method for the quantitative determination of low concentrations of glaucine in biological samples. We further examined the ability of a series of organic solvents such as diethyl ether, tert-butyl methyl ether, ethyl acetate, butyl acetate, toluene, chloroform, and dichloromethane to recover glaucine from raw IL-based aqueous extracts. Optimal conditions for the quantitative extraction of glaucine into chloroform were found, from which, after removal of the solvent and subsequent recrystallization from ethanol, the target compound was isolated in high purity as a hydrobromide salt – the form in which it enters as an active ingredient in various medicines.

Keywords: natural products, ionic liquids, solid-liquid extraction, liquid-liquid extraction

Procedia PDF Downloads 450
167 Supervised/Unsupervised Mahalanobis Algorithm for Improving Performance for Cyberattack Detection over Communications Networks

Authors: Radhika Ranjan Roy

Abstract:

Deployment of machine learning (ML)/deep learning (DL) algorithms for cyberattack detection in operational communications networks (wireless and/or wire-line) is being delayed because of low performance parameters (e.g., recall, precision, and f₁-score). When datasets become imbalanced, which is the usual case for communications networks, performance tends to become worse. Reducing the dimensionality of the feature sets while increasing performance adds further complexity. Mahalanobis algorithms have been widely applied in scientific research because Mahalanobis distance metric learning is a successful framework. In this paper, we investigate the Mahalanobis binary classifier algorithm for increasing cyberattack detection performance over communications networks as a proof of concept. We also find that the high-dimensional information in intermediate features, which is not utilized as much for classification tasks in ML/DL algorithms, is the main contributor to the improved state-of-the-art performance of the Mahalanobis method, even for imbalanced and sparse datasets. With no feature reduction, the Mahalanobis distance (MD) offers uniform results for precision, recall, and f₁-score on the unbalanced and sparse NSL-KDD dataset.
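
A minimal sketch of the kind of Mahalanobis binary classifier the abstract describes (not the authors' code; the regularization constant and class names are illustrative assumptions): each class is summarized by its mean and covariance, and a sample is assigned to the class with the smaller Mahalanobis distance.

```python
import numpy as np

def fit_class_stats(X):
    """Return (mean, inverse covariance) for one class's training rows."""
    mu = X.mean(axis=0)
    cov = np.cov(X, rowvar=False)
    cov += 1e-6 * np.eye(cov.shape[0])  # regularize for sparse/singular data
    return mu, np.linalg.inv(cov)

def mahalanobis(x, mu, inv_cov):
    d = x - mu
    return np.sqrt(d @ inv_cov @ d)

def predict(x, stats_benign, stats_attack):
    d0 = mahalanobis(x, *stats_benign)
    d1 = mahalanobis(x, *stats_attack)
    return int(d1 < d0)  # 1 = attack, 0 = benign
```

Because the full covariance is used, no separate feature-reduction step is needed, which mirrors the paper's "no feature reduction" observation.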

Keywords: Mahalanobis distance, machine learning, deep learning, NSL-KDD, local intrinsic dimensionality, chi-square, positive semi-definite, area under the curve

Procedia PDF Downloads 50
166 Computer Simulation of Hydrogen Superfluidity through Binary Mixing

Authors: Sea Hoon Lim

Abstract:

A superfluid is a fluid of bosons that flows without resistance. In order to form a superfluid, a substance’s particles must behave like bosons, yet remain mobile enough to stay a fluid. Bosons are particles that, at low temperature, can occupy the same quantum state. If bosons are cooled down, the particles all try to settle into the lowest energy state, which is called Bose–Einstein condensation. Boson statistics start to matter once the temperature drops to a critical temperature. For example, when helium reaches its critical temperature of 2.17 K, the liquid density drops and it becomes a superfluid with zero viscosity. However, most materials will solidify – and thus not remain fluids – at temperatures well above the temperature at which they would otherwise become a superfluid. Only a few substances currently known are capable of at once remaining a fluid and manifesting boson statistics; the best known of these is helium and its isotopes. Because hydrogen is lighter than helium, and thus expected to manifest Bose statistics at higher temperatures than helium, one might expect hydrogen to also be a superfluid. As of today, however, no one has been able to produce a bulk hydrogen superfluid. The reason hydrogen has not formed a superfluid so far lies in its intermolecular interactions: hydrogen molecules are much more likely to crystallize than their helium counterparts. The key to creating a hydrogen superfluid is therefore finding a way to reduce the effect of the interactions among hydrogen molecules, postponing solidification to lower temperature. In this work, we attempt via computer simulation to produce bulk superfluid hydrogen through binary mixing. Binary mixing is a technique of mixing two pure substances in order to frustrate crystallization and enhance superfluidity; our mixture here is KALJ H2. We then sample the partition function using Path Integral Monte Carlo (PIMC), which is well suited for the equilibrium properties of low-temperature bosons and captures not only the statistics but also the dynamics of hydrogen. Via this sampling, we produce a time evolution of the substance and test whether it exhibits superfluid properties.
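
To make the PIMC idea concrete, here is a toy sketch (not the production simulation): Metropolis sampling of a discretized imaginary-time path (ring polymer) for a single particle in a 1D harmonic well, with hbar = m = 1. A real superfluidity study must also sample bosonic permutations and pair interactions, both omitted here for brevity.

```python
import numpy as np

P, beta, n_steps = 32, 2.0, 50_000  # beads, inverse temperature, MC steps
tau = beta / P                      # imaginary-time slice
rng = np.random.default_rng(0)
x = np.zeros(P)                     # ring-polymer bead positions
x2_samples = []

def link_action(a, b):
    # spring (kinetic) term between neighboring beads + potential of bead a
    return (a - b) ** 2 / (2 * tau) + tau * 0.5 * a ** 2

for step in range(n_steps):
    j = rng.integers(P)
    trial = x[j] + rng.normal(0.0, np.sqrt(tau))
    left, right = x[(j - 1) % P], x[(j + 1) % P]
    dS = (link_action(left, trial) + link_action(trial, right)
          - link_action(left, x[j]) - link_action(x[j], right))
    if dS < 0 or rng.random() < np.exp(-dS):
        x[j] = trial
    if step > n_steps // 2 and step % 100 == 0:
        x2_samples.append(np.mean(x ** 2))

print("<x^2> estimate:", np.mean(x2_samples))  # exact: 0.5/tanh(beta/2) ~ 0.66
```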

Keywords: superfluidity, hydrogen, binary mixture, physics

Procedia PDF Downloads 292
165 Quantifying Spatiotemporal Patterns of Past and Future Urbanization Trends in El Paso, Texas and Their Impact on Electricity Consumption

Authors: Joanne Moyer

Abstract:

El Paso, Texas is a southwest border city that has experienced continuous growth over the last 15 years. Understanding urban growth trends and patterns using data from the National Land Cover Database (NLCD) and landscape metrics provides a quantitative description of that growth. Past urban growth provided the basis for predicting 2031 land-use for El Paso using the CA-Markov model. As a consequence of growth, an increase in demand for resources follows. Using panel data analysis, the relation between landscape metrics and electricity consumption is further analyzed. The study’s findings indicate that past growth was concentrated within three districts of the City of El Paso. The landscape metrics suggest that as the city has grown, fragmentation has decreased. Alternatively, the landscape metrics for the projected 2031 land-use indicate possible fragmentation within one of these districts. Panel data suggest that electricity consumption and the mean patch area landscape metric are positively correlated. The study enables local decision-makers to make informed decisions on policies and urban planning to ensure a sustainable future community.
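
A minimal sketch of the Markov-chain half of a CA-Markov projection (the cellular-automaton spatial allocation step is omitted; the class names, transition probabilities, and areas below are hypothetical, not the paper's values):

```python
import numpy as np

classes = ["developed", "shrubland", "other"]
# P[i, j] = probability a cell in class i at t0 is in class j at t1
P = np.array([[0.95, 0.03, 0.02],
              [0.10, 0.85, 0.05],
              [0.06, 0.04, 0.90]])
area_now = np.array([120.0, 300.0, 80.0])  # km^2, illustrative

# One transition period ~ one NLCD interval; repeat the chain to reach 2031
area_2031 = area_now @ np.linalg.matrix_power(P, 3)
for name, a in zip(classes, area_2031):
    print(f"{name}: {a:.1f} km^2")
```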

Keywords: landscape metrics, CA-Markov, El Paso, Texas, panel data

Procedia PDF Downloads 104
164 Selective Extraction of the Couple Nickel(II)/Cobalt(II) by a Series of Schiff Bases in Sulfate Medium, in the Chloroform-Water System

Authors: N. Belhadj, M. Hadj Youcef, T. Benabdallah, Belbachir Ibtissem, N. Boceiri

Abstract:

This work deals with the synthesis, structural elucidation, and exploration of the extracting properties of a series of ortho-hydroxy Schiff bases in sulfate medium. After the synthesis and characterization of their structures, the study of their behavior in solution was carried out by pH-metric titration in both homogeneous and heterogeneous media. This allowed us to explore and quantify, in each of these media, some of their properties in solution, such as their acid-base behavior (determination and comparison of pKa), their distribution powers (determination and comparison of log Kd), and their thermodynamic constants (determination of ΔH°, ΔS° and mean ΔG°), optimizing both the temperature and the ionic strength. The extraction of nickel(II) and cobalt(II) was then studied separately in the aqueous-organic chloroform-water system. Different extraction parameters were optimized, such as the pH, the concentration of extractant, and the ionic strength, and the extraction constants were established in each case. The extracted metal complexes were isolated and their spatial configurations elucidated. The selective extraction of the cobalt(II)/nickel(II) couple was finally performed with our series of Schiff bases in the chloroform-water system.

Keywords: selective extraction, Schiff base, distribution, cobalt(II), nickel(II)

Procedia PDF Downloads 440
163 Urban Traffic: Understanding the Traffic Flow Factor Through Fluid Dynamics

Authors: Sathish Kumar Jayaraj

Abstract:

The study of urban traffic dynamics, underpinned by the principles of fluid dynamics, offers a distinct perspective to comprehend and enhance the efficiency of traffic flow within bustling cityscapes. Leveraging the concept of the Traffic Flow Factor (TFF) as an analog to the Reynolds number, this research delves into the intricate interplay between traffic density, velocity, and road category, drawing compelling parallels to fluid dynamics phenomena. By introducing the notion of Vehicle Shearing Resistance (VSR) as an analogy to dynamic viscosity, the study sheds light on the multifaceted influence of traffic regulations, lane management, and road infrastructure on the smoothness and resilience of traffic flow. The TFF equation serves as a comprehensive metric for quantifying traffic dynamics, enabling the identification of congestion hotspots, the optimization of traffic signal timings, and the formulation of data-driven traffic management strategies. The study underscores the critical significance of integrating fluid dynamics principles into the domain of urban traffic management, fostering sustainable transportation practices, and paving the way for a more seamless and resilient urban mobility ecosystem.
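
The abstract defines TFF only as an analog of the Reynolds number Re = ρvL/μ, so the concrete form below is an assumption: a hedged sketch with Vehicle Shearing Resistance (VSR) in the role of dynamic viscosity and a road-category length scale L; all parameter values are illustrative.

```python
def traffic_flow_factor(density_veh_per_km, velocity_km_per_h,
                        road_length_scale_km, vsr):
    """Higher TFF ~ flow more prone to 'turbulent' stop-and-go behavior."""
    return density_veh_per_km * velocity_km_per_h * road_length_scale_km / vsr

# Illustrative comparison: same density and speed, different lane regulation
print(traffic_flow_factor(60, 50, 1.0, vsr=25))  # fewer controls -> higher TFF
print(traffic_flow_factor(60, 50, 1.0, vsr=80))  # stricter lane management
```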

Keywords: traffic flow factor (TFF), urban traffic dynamics, fluid dynamics principles, vehicle shearing resistance (VSR), traffic congestion management, sustainable urban mobility

Procedia PDF Downloads 30
162 Creeping Control Strategy for Direct Shift Gearbox Based on the Investigation of Temperature Variation of the Wet Clutch

Authors: Biao Ma, Jikai Liu, Man Chen, Jianpeng Wu, Liyong Wang, Changsong Zheng

Abstract:

Proposing an appropriate control strategy is an effective and practical way to address the overheating of the wet multi-plate clutch in a Direct Shift Gearbox under long-time creeping conditions. To do so, the temperature variation of the wet multi-plate clutch is first investigated by establishing a thermal resistance model for the gearbox cooling system. To calculate the generated heat flux and predict the clutch temperature precisely, the friction torque model is optimized by introducing an improved friction coefficient, which is related to the pressure, the relative speed, and the temperature. After that, the heat transfer model and the optimized friction torque model are employed by the vehicle powertrain model to construct a comprehensive co-simulation model for the Direct Shift Gearbox (DSG) vehicle. A creeping control strategy is then proposed and, to evaluate the vehicle performance, the safety temperature (250 ℃) is adopted as an important metric. During the creeping process, the temperature of the two clutches always stays below the safety value (250 ℃), which demonstrates the effectiveness of the proposed control strategy in avoiding thermal failure of the clutches.
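
An illustrative lumped-parameter sketch of the thermal-resistance idea (the parameter values and function names are hypothetical, not the paper's calibrated model): the clutch temperature follows C·dT/dt = Q_friction − (T − T_oil)/R, integrated forward against the 250 ℃ safety limit.

```python
def simulate_creep(q_friction_w, r_k_per_w, c_j_per_k,
                   t_oil=90.0, t0=90.0, dt=0.1, duration=600.0,
                   t_safe=250.0):
    t, temp = 0.0, t0
    while t < duration:
        temp += dt * (q_friction_w - (temp - t_oil) / r_k_per_w) / c_j_per_k
        if temp >= t_safe:
            return t, temp  # time at which the safety temperature is hit
        t += dt
    return None, temp       # stayed below 250 C for the whole creep phase

print(simulate_creep(q_friction_w=1500.0, r_k_per_w=0.08, c_j_per_k=900.0))
```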

Keywords: creeping control strategy, direct shift gearbox, temperature variation, wet clutch

Procedia PDF Downloads 100
161 Optimising Urban Climate at Mesoscale: The Case of Floor-Area-Ratio Modelling and Energy Planning Integration

Authors: Ali Cheshmehzangi, Ayotunde Dawodu

Abstract:

In urban planning, the Floor Area Ratio (FAR) of a site plays a major role in multiple dimensions of performance, from humane living environments to energy performance. When one considers the astounding volume of new housing that is going to be constructed across the globe during the next few decades due to growing urbanisation (particularly in the developing world), it is imperative that we have an empirically grounded grasp of which building configurations are more energy efficient. As a common planning metric, it would be helpful to know exactly how managing FAR connects with energy efficiency. Hence, this study puts together a set of models of various FARs for a typical residential compound and addresses the considerations of energy planning integration in the practice of building configuration and urban planning. Such decision-making at the planning and design stage enables us to provide pathways for optimising the urban climate at the mesoscale of the built environment, i.e. the neighbourhood or community level. In this study, a comparative analysis is conducted using Ecotect software on a case study in the City of Ningbo, China. Findings of the study contribute to identifying scenarios of various FAR use and energy planning at the mesoscale. The final results contribute to studies in urban climate from the perspectives of urban planning, energy planning, and urban modelling.

Keywords: China, energy planning, FAR, floor-area-ratio, mesoscale, urban climate, urban modelling

Procedia PDF Downloads 134
160 Cognitive Science Based Scheduling in Grid Environment

Authors: N. D. Iswarya, M. A. Maluk Mohamed, N. Vijaya

Abstract:

A grid is an infrastructure that allows the deployment of large, distributed data sets from multiple locations to reach a common goal. Scheduling data-intensive applications becomes challenging as the data sets involved are very large. Only two solutions exist to tackle this issue: first, the computation that requires huge data sets can be transferred to the data site; second, the required data sets can be transferred to the computation site. In the former scenario, the computation cannot be transferred since the servers are storage/data servers with little or no computational capability, so the second scenario is considered for further exploration. During scheduling, transferring huge data sets from one site to another requires substantial network bandwidth. In order to mitigate this issue, this work focuses on incorporating cognitive science into scheduling. Cognitive science is the study of the human brain and its related activities, and current research mainly focuses on incorporating it into various computational modeling techniques. In this work, the problem-solving approach of the human brain is studied and incorporated into data-intensive scheduling in grid environments. Here, a cognitive engine (CE) is designed and deployed at various grid sites. The intelligent agents present in the CE help in analyzing the request and creating the knowledge base. Depending upon the link capacity, a decision is taken whether to transfer the data sets or to partition them. The agents also predict the next request so as to serve the requesting site with data sets in advance, which reduces the data availability time and data transfer time. A replica catalog and a metadata catalog created by the agents assist in the decision-making process.
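
A hedged sketch of the transfer-versus-partition decision as described (the deadline threshold, function name, and ceiling rule are illustrative assumptions; the paper's agents presumably learn such rules rather than hard-code them):

```python
def schedule_transfer(dataset_gb, link_gbps, deadline_s, n_sites):
    transfer_time = dataset_gb * 8 / link_gbps          # seconds on this link
    if transfer_time <= deadline_s:
        return {"action": "transfer", "chunks": 1}
    # Partition across replica sites so each chunk can meet the deadline.
    chunks = min(n_sites, -(-transfer_time // deadline_s))  # ceiling division
    return {"action": "partition", "chunks": int(chunks)}

print(schedule_transfer(dataset_gb=500, link_gbps=1, deadline_s=1800, n_sites=4))
```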

Keywords: data grid, grid workflow scheduling, cognitive artificial intelligence

Procedia PDF Downloads 368
159 Framework to Quantify Customer Experience

Authors: Anant Sharma, Ashwin Rajan

Abstract:

Customer experience is measured today by defining a set of metrics and KPIs, setting up thresholds, and defining triggers across those thresholds. While this is an effective way of measuring against a Key Performance Indicator (referred to as KPI in the rest of the paper), this approach cannot capture the various nuances that make up the overall customer experience. Customers consume a product or service at various levels, which is not reflected in metrics like Customer Satisfaction or Net Promoter Score alone, but also shows up across other measurements like recurring revenue, frequency of service usage, e-learning, and depth of usage. Here we explore an alternative method of measuring customer experience by flipping the traditional view: rather than rolling customers up to a metric, we roll up metrics to hierarchies and then measure customer experience. This method allows any team to quantify customer experience across multiple touchpoints in a customer’s journey. We make use of various data sources which contain information for metrics like CXSAT, NPS, renewals, and depth of service usage collected across a customer lifecycle. This data can be mined systematically to get linkages between different data points like geographies, business groups, products, and time. Additional views can be generated by blending synthetic contexts into the data to show trends and top/bottom reports. We have created a framework that allows us to measure customer experience using the above logic.
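
An illustrative sketch of rolling metrics up a hierarchy instead of rolling customers up to a single metric (column names, weights, and scaling are hypothetical assumptions, not the framework's actual schema):

```python
import pandas as pd

df = pd.DataFrame({
    "geo":     ["EMEA", "EMEA", "APAC", "APAC"],
    "product": ["A", "B", "A", "B"],
    "cxsat":   [4.2, 3.8, 4.5, 3.1],     # 1-5 survey score
    "nps":     [40, 22, 55, 10],         # -100..100
    "renewal": [0.91, 0.85, 0.94, 0.72], # renewal rate
})

# Roll metrics up to each level of the hierarchy, then blend them into
# one experience score with assumed weights and 0-1 rescaling.
rollup = df.groupby(["geo", "product"]).mean(numeric_only=True)
rollup["cx_score"] = (0.4 * rollup["cxsat"] / 5
                      + 0.3 * (rollup["nps"] + 100) / 200
                      + 0.3 * rollup["renewal"])
print(rollup.sort_values("cx_score", ascending=False))
```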

Keywords: analytics, customer experience, BI, business operations, KPIs, metrics

Procedia PDF Downloads 41
158 Ferromagnetic Potts Models with Multi-Site Interaction

Authors: Nir Schreiber, Reuven Cohen, Simi Haber

Abstract:

The Potts model has been widely explored in the literature for the last few decades. While many analytical and numerical results concern the traditional two-site interaction model in various geometries and dimensions, little is yet known about models where more than two spins interact simultaneously. We consider a ferromagnetic four-site interaction Potts model on the square lattice (FFPS), where the four spins reside in the corners of an elementary square. Each spin can take an integer value 1,2,...,q. We write the partition function as a sum over clusters consisting of monochromatic faces. When the number of faces becomes large, tracing out spin configurations is equivalent to enumerating large lattice animals. It is known that the asymptotic number of animals with k faces is governed by λᵏ, with λ ≈ 4.0626. Based on this observation, systems with q < 4 and q > 4 exhibit second- and first-order phase transitions, respectively; the nature of the transition in the q = 4 case is borderline. For any q, a critical giant component (GC) is formed. In the first-order case, the GC is simple, while it is fractal when the transition is continuous. Using simple equilibrium arguments, we obtain a (zero-order) bound on the transition point. It is claimed that this bound should apply to other lattices as well. Next, taking into account higher-order site contributions, the critical bound becomes tighter. Moreover, for q > 4, if corrections due to contributions from small clusters are negligible in the thermodynamic limit, the improved bound should be exact. The improved bound is used to relate the critical point to the finite correlation length. Our analytical predictions are confirmed by an extensive numerical study of the FFPS using the Wang-Landau method. In particular, the q = 4 marginal case is supported by a very ambiguous pseudo-critical finite-size behavior.
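
A minimal Wang-Landau sketch (flat-histogram estimate of the density of states g(E)) for a small q-state Potts model. For brevity this uses the standard two-site interaction; the paper's four-site (plaquette) model would only change the energy function. Lattice size, sweep count, and flatness criterion are illustrative.

```python
import numpy as np

L, q = 8, 4
rng = np.random.default_rng(1)
spins = rng.integers(q, size=(L, L))

def site_energy(s, i, j):
    # negative count of equal nearest-neighbor pairs touching (i, j)
    return -sum(s[i, j] == s[(i + di) % L, (j + dj) % L]
                for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)))

E = sum(site_energy(spins, i, j) for i in range(L) for j in range(L)) // 2
ln_g = np.zeros(2 * L * L + 1)      # E ranges over {-2*L*L, ..., 0}
hist = np.zeros_like(ln_g)
ln_f = 1.0

def idx(e):
    return e + 2 * L * L

while ln_f > 1e-4:
    for _ in range(10_000):
        i, j = rng.integers(L, size=2)
        old = spins[i, j]
        e_old = site_energy(spins, i, j)
        spins[i, j] = rng.integers(q)
        dE = site_energy(spins, i, j) - e_old
        delta = ln_g[idx(E)] - ln_g[idx(E + dE)]
        if delta >= 0 or rng.random() < np.exp(delta):
            E += dE                 # accept the move
        else:
            spins[i, j] = old       # reject the move
        ln_g[idx(E)] += ln_f
        hist[idx(E)] += 1
    visited = hist > 0
    if hist[visited].min() > 0.8 * hist[visited].mean():
        hist[:] = 0
        ln_f /= 2                   # refine the modification factor
```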

Keywords: entropic sampling, lattice animals, phase transitions, Potts model

Procedia PDF Downloads 133
157 Sex Estimation Using Cervical Measurements of Molar Teeth in an Iranian Archaeological Population

Authors: Seyedeh Mandan Kazzazi, Elena Kranioti

Abstract:

In the field of human osteology, sex estimation is an important step in developing the biological profile. There are a number of methods that can be used to estimate the sex of human remains, varying from visual assessments to metric analysis of sexually dimorphic traits. Teeth are among the most durable physical elements in the human body that can be used for this purpose. The present study investigated the utility of cervical measurements for sex estimation through discriminant analysis. The permanent molar teeth of 75 skeletons (28 females and 52 males) from the Hasanlu site in north-western Iran were studied. Cervical mesiodistal and buccolingual measurements were taken from both maxillary and mandibular first and second molars. Discriminant analysis was used to evaluate the accuracy of each diameter in assessing sex. The results showed that males had statistically larger teeth than females for maxillary and mandibular molars and both measurements (P < 0.05). The classification rate ranged from 75.7% to 85.5% for the original and cross-validated data. The most dimorphic teeth were the maxillary and mandibular second molars, providing 85.5% and 83.3% correct classification rates, respectively. The data generated from the present study suggest that cervical mesiodistal and buccolingual measurements of the molar teeth can be useful and reliable for sex estimation in Iranian archaeological populations.
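
An illustrative sketch of the analysis pipeline on simulated data (the means and spreads below are made up; the study's real measurements are not reproduced): linear discriminant analysis on cervical mesiodistal/buccolingual diameters with leave-one-out cross-validation.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import LeaveOneOut, cross_val_score

rng = np.random.default_rng(0)
# Simulated cervical MD/BL diameters (mm): males slightly larger on average
X_f = rng.normal([10.2, 11.1], 0.45, size=(28, 2))
X_m = rng.normal([10.8, 11.7], 0.45, size=(52, 2))
X = np.vstack([X_f, X_m])
y = np.array([0] * 28 + [1] * 52)   # 0 = female, 1 = male

lda = LinearDiscriminantAnalysis()
acc = cross_val_score(lda, X, y, cv=LeaveOneOut()).mean()
print(f"cross-validated classification rate: {acc:.1%}")
```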

Keywords: cervical measurements, Hasanlu, molars, sex estimation

Procedia PDF Downloads 307
156 Riemannian Geometries of Visual Space

Authors: Jacek Turski

Abstract:

The visual space geometries are constructed in the Riemannian geometry framework from simulated iso-disparity conics in the horizontal visual plane of the binocular system with asymmetric eyes (AEs). For the eyes fixating at the abathic distance, which depends on the AE’s parameters, the iso-disparity conics are frontal straight lines in physical space. For all other fixations, the iso-disparity conics consist of families of ellipses or hyperbolas, depending on both the AE’s parameters and the bifoveal fixation. However, the iso-disparity conic’s arcs are perceived in the gaze direction as frontal lines and are referred to as visual geodesics. Thus, the geometries of physical and visual spaces are different. A simple postulate that combines simulated iso-disparity conics with basic anatomy of the human visual system gives the relative depth for the fixation at the abathic distance that establishes the Riemannian metric tensor. The resulting geodesics are incomplete in the gaze direction and, therefore, give finite distances to the horizon that depend on the AE’s parameters. Moreover, the curvature vanishes in this eye posture, such that visual space is flat. For all other fixations, only the sign of the curvature can be inferred from the global behavior of the simulated iso-disparity conics: the curvature is positive for the elliptic iso-disparity curves and negative for the hyperbolic iso-disparity curves.
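
For reference, the standard Riemannian machinery the abstract relies on (the specific metric components g_{ij} come from the paper's iso-disparity simulations and are not reproduced here):

```latex
% Visual geodesics x^k(t) satisfy the geodesic equation
\ddot{x}^k + \Gamma^k_{ij}\,\dot{x}^i \dot{x}^j = 0,
\qquad
\Gamma^k_{ij} = \tfrac{1}{2}\, g^{kl}\bigl(
  \partial_i g_{jl} + \partial_j g_{il} - \partial_l g_{ij}\bigr),
% and for the 2D visual plane the sign of the Gaussian curvature K fixes
% the geometry: K = 0 (flat) at the abathic-distance fixation, K > 0 for
% elliptic iso-disparity conics, K < 0 for hyperbolic ones.
```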

Keywords: asymmetric eye model, iso-disparity conics, metric tensor, geodesics, curvature

Procedia PDF Downloads 123
155 Reducing Greenhouse Gas Emissions by Recyclable Material Bank Project of Universities in Central Region of Thailand

Authors: Ronbanchob Apiratikul

Abstract:

This research studied the waste recycled by the Recyclable Material Bank Project of four universities in the central region of Thailand, evaluating the reduction in greenhouse gas emissions compared with landfilling during July 2012 to June 2013. The results showed that the projects collected a total of about 911,984.80 kilograms of recyclable waste. Office paper had the largest share among these recycled wastes (50.68% of the total). By amount, the groups of recycled waste can be ranked from high to low as paper, plastic, glass, mixed recyclables, and metal. The project reduced greenhouse gas emissions by the equivalent of about 2,814.969 metric tons of carbon dioxide. The recycled waste contributing most to the reduction of greenhouse gas emissions was office paper, which accounts for 70.16% of the total avoided emissions. By amount of avoided emissions, the groups can be ranked from high to low as paper, plastic, metal, mixed recyclables, and glass.
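
The underlying arithmetic is a mass-times-factor sum; a sketch follows, with the stream split and emission factors shown as placeholders rather than the study's actual values:

```python
# avoided emissions = sum over streams of mass * (landfill - recycle) factor
masses_kg = {"paper": 462_000.0, "plastic": 180_000.0, "glass": 120_000.0,
             "mixed": 90_000.0, "metal": 59_985.0}        # hypothetical split
factor_tco2e_per_kg = {"paper": 0.00427, "plastic": 0.0030, "glass": 0.0005,
                       "mixed": 0.0015, "metal": 0.0060}  # placeholders

avoided = {k: masses_kg[k] * factor_tco2e_per_kg[k] for k in masses_kg}
total = sum(avoided.values())
for k, v in sorted(avoided.items(), key=lambda kv: -kv[1]):
    print(f"{k}: {v:.1f} t CO2e ({v / total:.1%} of total)")
```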

Keywords: recycling, garbage bank, waste management, recyclable wastes, greenhouse gases

Procedia PDF Downloads 395
154 Development of a Technology Assessment Model by Patents and Customers' Review Data

Authors: Kisik Song, Sungjoo Lee

Abstract:

Recent years have seen an increasing number of patent disputes due to excessive competition in the global market and a reduced technology life-cycle; this has increased the risk of investment in technology development. While many global companies have started developing methodologies to identify promising technologies and assess them for investment decisions, existing methodologies still have some limitations. Post hoc assessments of new technologies are not being performed, especially to determine whether the suggested technologies turned out to be promising. For example, in existing quantitative patent analysis, a patent’s citation information has served as an important metric for quality assessment, but this analysis cannot be applied to recently registered patents because such information accumulates over time. Therefore, we propose a new technology assessment model that can replace citation information and positively affect technological development, based on post hoc analysis of the patents for promising technologies. Additionally, we collect customer reviews on a target technology to extract keywords that reflect the customers’ needs, and we determine how many of these keywords are covered by the new technology. Finally, we construct a portfolio (based on a technology assessment from patent information) and a customer-based marketability assessment (based on review data), and we use them to visualize the characteristics of the new technologies.
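
A hedged sketch of the review-based coverage check (the tokenization, stop list, and sample texts are deliberately simplistic placeholders; the paper's opinion-mining step would be more sophisticated): extract need keywords from customer reviews and measure how many are covered by a new technology's description.

```python
import re
from collections import Counter

def need_keywords(texts, top_n=10,
                  stop={"the", "and", "but", "is", "it", "to", "a"}):
    words = re.findall(r"[a-z]+", " ".join(texts).lower())
    counts = Counter(w for w in words if w not in stop and len(w) > 2)
    return {w for w, _ in counts.most_common(top_n)}

reviews = ["battery life is short", "love the camera, hate the battery",
           "screen is bright but battery drains fast"]
tech_description = "new fast-charging battery module with thermal control"

needs = need_keywords(reviews)
covered = {w for w in needs if w in tech_description.lower()}
print(f"coverage: {len(covered)}/{len(needs)}", covered)
```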

Keywords: technology assessment, patents, citation information, opinion mining

Procedia PDF Downloads 441
153 Deep Routing Strategy: Deep Learning-Based Intelligent Routing in Software Defined Internet of Things

Authors: Zabeehullah, Fahim Arif, Yawar Abbas

Abstract:

Software Defined Networking (SDN) is a next-generation networking model which simplifies traditional network complexities and improves the utilization of constrained resources. Currently, most SDN-based Internet of Things (IoT) environments use traditional network routing strategies, which work on the basis of a maximum or minimum metric value. However, IoT network heterogeneity, dynamic traffic flow, and complexity demand intelligent and self-adaptive routing algorithms, because traditional routing algorithms lack self-adaptation, intelligence, and efficient utilization of resources. To some extent, SDN, due to its flexibility and centralized control, has managed the IoT complexity and heterogeneity, but Software Defined IoT (SDIoT) still lacks intelligence. To address this challenge, we propose a model called Deep Routing Strategy (DRS), which uses a deep learning algorithm to perform routing in SDIoT intelligently and efficiently. Our model uses real-time traffic for training and learning. Results demonstrate that the proposed model achieves high accuracy and a low packet loss rate during path selection, outperforms the benchmark routing algorithm (OSPF), and provides encouraging results under highly dynamic traffic flow.
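
An illustrative sketch of the DRS idea, not the authors' network: a small supervised model maps per-path traffic features to the best next path. The feature layout, label rule, and architecture are assumptions; the real DRS trains on live SDN controller statistics.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
# Features per decision: [delay, load, loss] for each of 3 candidate paths,
# laid out path-major: (d0, l0, p0, d1, l1, p1, d2, l2, p2)
X = rng.random((2000, 9))
# Synthetic label: the path with the lowest weighted cost is the target
cost = 0.5 * X[:, 0::3] + 0.3 * X[:, 1::3] + 0.2 * X[:, 2::3]
y = cost.argmin(axis=1)

model = MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=500,
                      random_state=0)
model.fit(X[:1600], y[:1600])
print("hold-out path-selection accuracy:", model.score(X[1600:], y[1600:]))
```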

Keywords: SDN, IoT, DL, ML, DRS

Procedia PDF Downloads 84
152 Analyzing Sociocultural Factors Shaping Architects’ Construction Material Choices: The Case of Jordan

Authors: Maiss Razem

Abstract:

The construction sector is considered a major consumer of materials, which undergo processes of extraction, processing, transportation, and maintenance when used in buildings. Several metrics have been devised to capture the environmental impact of the materials consumed during construction using lifecycle thinking. Rarely has the materiality of this sector been explored qualitatively and systemically. This paper aims to explore the socio-cultural forces that drive the use of certain materials in the Jordanian construction industry, using practice theory as a heuristic method of analysis, more specifically Shove et al.'s three-element model. Through semi-structured interviews with architects, the results unravel contextually embedded routines in determining the qualities of three materialities highlighted herein: stone, glass, and spatial openness. The study highlights the inadequacy of using efficiency alone as a quantitative metric of sustainable materials and argues for the need to link material consumption with socio-economic, cultural, and aesthetic driving forces. The operationalization of practice theory, by tracing materials’ lifetimes as they integrate with competencies and meanings, captures dynamic engagements through the analyzed routines of actors in the construction practice. This study can offer policymakers a more nuanced representation with which to green this sector beyond efficiency rhetoric and quantitative metrics.

Keywords: architects' practices, construction materials, Jordan, practice theory

Procedia PDF Downloads 145
151 Relevance Feedback within CBIR Systems

Authors: Mawloud Mosbah, Bachir Boucheham

Abstract:

We present here the results of a comparative study of some techniques, available in the literature, related to the relevance feedback mechanism in the case of short-term learning. Only one of the methods considered here, the K-Nearest Neighbours algorithm (KNN), belongs to the data mining field; the rest are related purely to information retrieval and fall under three major axes: query shifting, feature weighting, and optimization of the parameters of the similarity metric. As a contribution, and in addition to the comparative purpose, we propose a new version of the KNN algorithm, referred to as incremental KNN, which is distinct from the original version in that, besides the influence of the seeds, the rating of the actual target image is also influenced by the images already rated. The results presented here were obtained from experiments conducted on the Wang database for one iteration, utilizing colour moments in the RGB space. This compact descriptor, colour moments, is adequate for the efficiency needed in interactive systems. The results obtained allow us to claim that the proposed algorithm yields good results; it even outperforms a wide range of techniques available in the literature.
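
A sketch of the standard Rocchio query-point-movement formula used as one of the compared baselines, q' = αq + β·mean(relevant) − γ·mean(non-relevant), applied to colour-moment feature vectors (the α/β/γ values and the toy vectors are illustrative):

```python
import numpy as np

def rocchio(query, relevant, non_relevant, alpha=1.0, beta=0.75, gamma=0.15):
    q = alpha * np.asarray(query, dtype=float)
    if len(relevant):
        q += beta * np.mean(relevant, axis=0)
    if len(non_relevant):
        q -= gamma * np.mean(non_relevant, axis=0)
    return q

# Colour moments on RGB give a 9-dimensional descriptor
# (mean, standard deviation, skewness per channel).
query = np.zeros(9)
relevant = np.ones((3, 9))          # user-rated relevant images
non_relevant = 2 * np.ones((2, 9))  # user-rated non-relevant images
print(rocchio(query, relevant, non_relevant))
```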

Keywords: CBIR, category search, relevance feedback, query point movement, standard Rocchio’s formula, adaptive shifting query, feature weighting, original KNN, incremental KNN

Procedia PDF Downloads 253
150 Collective Intelligence-Based Early Warning Management for Agriculture

Authors: Jarbas Lopes Cardoso Jr., Frederic Andres, Alexandre Guitton, Asanee Kawtrakul, Silvio E. Barbin

Abstract:

The main objective of the CyberBrain Mass Agriculture Alarm Acquisition and Analysis (CBMa4) project is to minimize the impacts of diseases and disasters on rice cultivation. For example, early detection of insects through the CBMa4 platform will reduce the volume of insecticides applied to the rice fields. In order to reach this goal, two major factors need to be considered: (1) the social network of smart farmers; and (2) the warning data alarm acquisition and analysis component. This paper outlines the process for collecting warnings and improving the resulting decision-making. It involves two sub-processes: warning collection and understanding enrichment. Human sensors are combined with basic data processing techniques in order to extract warning-related semantics according to collective intelligence. We identify each warning by a semantic content called a 'warncon', with multimedia metaphors and metadata related to these metaphors. It is important to describe the metric for measuring the relation among warncons. With this knowledge, a collective intelligence-based decision-making approach determines the action(s) to be launched regarding one or a set of warncons.

Keywords: agricultural engineering, warning systems, social network services, context awareness

Procedia PDF Downloads 348
149 Wireless Sensor Network for Forest Fire Detection and Localization

Authors: Tarek Dandashi

Abstract:

WSNs may provide a fast and reliable solution for the early detection of environmental events like forest fires, which is crucial for alerting and calling for fire brigade intervention. Sensor nodes communicate sensor data to a host station, which enables a global analysis and the generation of a reliable decision on a potential fire and its location. A WSN based on TinyOS and nesC is presented for capturing and transmitting a variety of sensor information with controlled source, data rates, and duration, and for recording/displaying activity traces. We propose a similarity distance (SD) between the distribution of currently sensed data and that of a reference. At any given time, a fire causes diverging opinions in the reported data, which alters the usual data distribution. Basically, SD consists of a metric on the Cumulative Distribution Function (CDF). SD is designed to be invariant to day-to-day changes of temperature, changes due to the surrounding environment, and normal changes in weather, which preserve the data locality. Evaluation shows that SD sensitivity is quadratic versus an increase in sensor node temperature for groups of sensors of different sizes and neighborhoods. Simulation of fire spreading, with ignition placed at random locations under some wind speed, shows that SD takes a few minutes to reliably detect fires and locate them. We also discuss the cases of false negatives and false positives and their impact on the decision reliability.
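
The abstract defines SD only as "a metric on the CDF", so the Kolmogorov-Smirnov-style realization below is one plausible choice, shown as a hedged sketch: compare the empirical CDF of current readings against a reference distribution.

```python
import numpy as np

def similarity_distance(current, reference, bins=50):
    lo = min(current.min(), reference.min())
    hi = max(current.max(), reference.max())
    grid = np.linspace(lo, hi, bins)
    cdf_cur = np.searchsorted(np.sort(current), grid, side="right") / len(current)
    cdf_ref = np.searchsorted(np.sort(reference), grid, side="right") / len(reference)
    return np.max(np.abs(cdf_cur - cdf_ref))  # max CDF gap, KS-style

rng = np.random.default_rng(0)
normal_day = rng.normal(25, 2, 500)                        # reference temps (C)
fire_onset = np.append(rng.normal(25, 2, 480),
                       rng.normal(60, 5, 20))              # diverging sensors
print(similarity_distance(fire_onset, normal_day))
```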

Keywords: forest fire, WSN, wireless sensor network, algorithm

Procedia PDF Downloads 239
148 Energy Efficient Clustering with Reliable and Load-Balanced Multipath Routing for Wireless Sensor Networks

Authors: Alamgir Naushad, Ghulam Abbas, Shehzad Ali Shah, Ziaul Haq Abbas

Abstract:

Unlike conventional networks, it is particularly challenging to manage resources efficiently in Wireless Sensor Networks (WSNs) due to their inherent characteristics, such as dynamic network topology and limited bandwidth and battery power. To ensure energy efficiency, this paper presents a routing protocol for WSNs, namely Enhanced Hybrid Multipath Routing (EHMR), which employs hierarchical clustering and proposes a next-hop selection mechanism between nodes according to a maximum residual energy metric together with a minimum hop count. Load-balancing of data traffic over multiple paths is achieved for a better packet delivery ratio and a low latency rate. Reliability is ensured in terms of higher data rate and lower end-to-end delay. EHMR also enhances the fast failure-recovery mechanism to recover a failed path. Simulation results demonstrate that EHMR achieves a higher packet delivery ratio, reduced energy consumption per packet delivered, lower end-to-end latency, and a reduced effect of data rate on packet delivery ratio when compared with eminent WSN routing protocols.
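
A sketch of the stated next-hop criterion; the paper gives the two criteria (maximum residual energy, minimum hop count) but not how they combine, so the lexicographic ordering below is an assumption:

```python
def select_next_hop(neighbors):
    """neighbors: list of dicts with 'id', 'residual_j', 'hops_to_sink'."""
    return max(neighbors,
               key=lambda n: (n["residual_j"], -n["hops_to_sink"]))["id"]

candidates = [
    {"id": "n3", "residual_j": 4.1, "hops_to_sink": 5},
    {"id": "n7", "residual_j": 4.1, "hops_to_sink": 3},  # wins the energy tie
    {"id": "n2", "residual_j": 2.0, "hops_to_sink": 2},
]
print(select_next_hop(candidates))  # -> n7
```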

Keywords: energy efficiency, load-balancing, hierarchical clustering, multipath routing, wireless sensor networks

Procedia PDF Downloads 47
147 Simplified Linear Regression Model to Quantify the Thermal Resilience of Office Buildings for Three Different Power Outage Times of Day

Authors: Nagham Ismail, Djamel Ouahrani

Abstract:

Thermal resilience in the built environment reflects a building's capacity to adapt to extreme climate changes. In hot climates, power outages in office buildings pose risks to the health and productivity of workers. It is therefore of interest to quantify the thermal resilience of office buildings by developing a user-friendly simplified model. This simplified model begins with creating an assessment metric of thermal resilience that measures the duration between the power outage and the point at which the thermal habitability condition is compromised, considering different power interruption times (morning, noon, and afternoon). In this context, energy simulations of an office building are conducted for Qatar's summer weather by changing different parameters related to (i) wall characteristics, (ii) glazing characteristics, (iii) load, (iv) orientation, and (v) air leakage. The simulation results are processed using SPSS to derive linear regression equations, aiding stakeholders in evaluating the performance of commercial buildings during different power interruption times. The findings reveal the significant influence of glazing characteristics on thermal resilience, with the morning power outage scenario posing the most detrimental impact in terms of the shortest duration before thermal habitability is compromised.
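
An illustrative sketch of the kind of regression the study derives in SPSS, here fitted on synthetic data (the predictors, coefficient signs, and magnitudes are assumptions, not the paper's fitted equations): habitability duration after an outage regressed on envelope parameters.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 200
X = np.column_stack([
    rng.uniform(0.3, 3.0, n),   # wall U-value (W/m2K)
    rng.uniform(0.2, 0.8, n),   # glazing solar heat gain coefficient
    rng.uniform(0.1, 0.6, n),   # window-to-wall ratio
    rng.uniform(0.2, 2.0, n),   # air leakage (ACH)
])
# Assumed signs: a better envelope -> longer habitable duration (hours)
duration = (8 - 1.2 * X[:, 0] - 4.0 * X[:, 1] - 2.5 * X[:, 2]
            - 0.8 * X[:, 3] + rng.normal(0, 0.4, n))

model = sm.OLS(duration, sm.add_constant(X)).fit()
print(model.params)  # intercept and one coefficient per envelope parameter
```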

Keywords: thermal resilience, thermal envelope, energy modeling, building simulation, thermal comfort, power disruption, extreme weather

Procedia PDF Downloads 39
146 Visualization and Performance Measure to Determine Number of Topics in Twitter Data Clustering Using Hybrid Topic Modeling

Authors: Moulana Mohammed

Abstract:

Topic models have been widely used for building clusters of documents for more than a decade, yet problems occur in choosing the optimal number of topics. The main problem is the lack of a stable metric of the quality of the topics obtained during the construction of topic models. The authors' analysis of previous works showed that most of the models used in determining the number of topics are non-parametric, with topic quality assessed using perplexity and coherence measures, and concluded that these are not applicable to solving this problem. In this paper, we used a parametric method, which is an extension of the traditional topic model with visual access tendency, for visualization of the number of topics (clusters), to complement clustering and to choose the optimal number of topics based on the results of cluster validity indices. The developed hybrid topic models are demonstrated on different Twitter datasets on various topics for obtaining the optimal number of topics and measuring the quality of clusters. The experimental results showed that the Visual Non-negative Matrix Factorization (VNMF) topic model performs well in determining the optimal number of topics with interactive visualization and in measuring the quality of clusters with validity indices.
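
An illustrative sketch in the spirit of the hybrid approach: NMF topic models fitted for several candidate topic counts, scored with a cluster validity index (silhouette here; the paper's VNMF adds visual access tendency on top of this). The tweets are toy examples.

```python
from sklearn.decomposition import NMF
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics import silhouette_score

tweets = ["rain floods the city again", "heavy rain and flooding downtown",
          "team wins the final match", "great match and a late goal",
          "new phone camera is amazing", "phone battery lasts two days"]

X = TfidfVectorizer(stop_words="english").fit_transform(tweets)
for k in (2, 3, 4):
    W = NMF(n_components=k, init="nndsvd").fit_transform(X)
    labels = W.argmax(axis=1)               # hardest-assignment clustering
    if len(set(labels)) > 1:
        print(k, silhouette_score(X, labels))
```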

Keywords: interactive visualization, visual non-negative matrix factorization model, optimal number of topics, cluster validity indices, Twitter data clustering

Procedia PDF Downloads 107
145 Proposed Framework Based on Classification of Vertical Handover Decision Strategies in Heterogeneous Wireless Networks

Authors: Shidrokh Goudarzi, Wan Haslina Hassan

Abstract:

Heterogeneous wireless networks are converging towards an all-IP network as part of the so-called next-generation network. In this paradigm, different access technologies need to be interconnected; thus, vertical handovers, or vertical handoffs, are necessary for seamless mobility. In this paper, we conduct a review of existing vertical handover decision-making mechanisms that aim to provide ubiquitous connectivity to mobile users. To offer a systematic comparison, we categorize these vertical handover measurement and decision structures based on their respective methodology and parameters. Subsequently, we analyze several vertical handover approaches in the literature and compare them according to their advantages and weaknesses. The paper compares the algorithms based on their network selection methods, the complexity of the technologies used, and their efficiency, in order to introduce our vertical handover decision framework. We find that vertical handovers in heterogeneous wireless networks suffer from the lack of a standard, efficient method to satisfy both user and network quality of service requirements at different levels, including architecture, decision-making, and protocols. Also, the consolidation of the network terminal, cross-layer information, multi-packet casting, and an intelligent network selection algorithm appears to be an optimal solution for achieving seamless service continuity.

Keywords: heterogeneous wireless networks, vertical handovers, vertical handover metric, decision-making algorithms

Procedia PDF Downloads 366
144 Non-Interferometric Quantitative Phase Imaging of Yeast Cells

Authors: P. Praveen Kumar, P. Vimal Prabhu, Renu John

Abstract:

In biology, most microscopy specimens, in particular living cells, are transparent. In cell imaging, it is hard to create an image of a cell that is transparent and has only a very small refractive index change with respect to the surrounding medium. Various techniques, like the addition of stains, contrast agents, and markers, have been applied in the past to create contrast. Many of these staining agents or markers are not applicable to live cell imaging as they are toxic. In this paper, we report theoretical and experimental results from quantitative phase imaging of yeast cells with a commercial bright field microscope. We reconstruct the phase of the cells non-interferometrically based on the transport of intensity equation (TIE). This technique estimates the axial derivative from through-focus intensity measurements and allows phase imaging using a regular microscope with white light illumination. We demonstrate nanometric depth sensitivity in imaging live yeast cells using this technique. Experimental results are shown demonstrating the capability of the technique for 3-D volume estimation of living cells. This real-time imaging technique would be highly promising in real-time digital pathology applications, screening of pathogens, and staging of diseases like malaria, as it does not need any pre-processing of samples.
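
A hedged sketch of one common uniform-intensity TIE inversion (the paper's exact reconstruction pipeline may differ). With I ≈ I₀, the TIE, k ∂I/∂z = −∇·(I∇φ), reduces to the Poisson equation ∇²φ = −(k/I₀) ∂I/∂z, solved here spectrally with FFTs from two defocused images; unit pixel spacing is assumed.

```python
import numpy as np

def tie_phase(i_plus, i_minus, dz, wavelength, i0, eps=1e-6):
    k = 2 * np.pi / wavelength
    didz = (i_plus - i_minus) / (2 * dz)     # central through-focus derivative
    ny, nx = didz.shape
    fx2, fy2 = np.meshgrid(np.fft.fftfreq(nx) ** 2,
                           np.fft.fftfreq(ny) ** 2)
    lap = -4 * np.pi ** 2 * (fx2 + fy2)      # Fourier symbol of the Laplacian
    rhs = -(k / i0) * didz
    phi_hat = np.fft.fft2(rhs) / (lap + eps) # eps regularizes near-zero modes
    phi_hat[0, 0] = 0.0                      # phase is defined up to a constant
    return np.real(np.fft.ifft2(phi_hat))
```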

Keywords: axial derivative, non-interferometric imaging, quantitative phase imaging, transport of intensity equation

Procedia PDF Downloads 358
143 Identifying Knowledge Gaps in Incorporating Toxicity of Particulate Matter Constituents for Developing Regulatory Limits on Particulate Matter

Authors: Ananya Das, Arun Kumar, Gazala Habib, Vivekanandan Perumal

Abstract:

Regulatory bodies have proposed limits on Particulate Matter (PM) concentration in air; however, they do not explicitly indicate the incorporation of the toxic effects of PM constituents in developing those limits. This study aimed to provide a structured approach to incorporate the toxic effects of components in developing regulatory limits on PM. A four-step human health risk assessment framework consists of: (1) hazard identification (parameters: PM and its constituents and their associated toxic effects on health); (2) exposure assessment (parameters: concentrations of PM and constituents, information on the size and shape of PM, and the fate and transport of PM and constituents in the respiratory system); (3) dose-response assessment (parameters: reference dose or target toxicity dose of PM and its constituents); and (4) risk estimation (metric: hazard quotient and/or lifetime incremental risk of cancer, as applicable). The parameters required at every step were obtained from the literature. Using this information, an attempt was made to determine limits on PM using component-specific information. An example calculation was conducted for exposures to PM2.5 and its metal constituents in the Indian ambient environment to determine limits on PM values. The identified data gaps were: (1) concentrations of PM and its constituents and their relationship with sampling regions, and (2) the relationship of the toxicity of PM with its components.
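
A sketch of the risk-estimation step (step 4): a hazard quotient per constituent, HQ = dose / reference dose, summed into a hazard index. The dose and reference-dose values below are placeholders, not measured Indian ambient data or regulatory values.

```python
inhalation_dose = {"Pb": 2.1e-4, "Ni": 5.0e-5, "Cd": 1.2e-5}  # mg/kg-day
reference_dose = {"Pb": 3.5e-3, "Ni": 2.0e-2, "Cd": 1.0e-3}   # mg/kg-day

hq = {m: inhalation_dose[m] / reference_dose[m] for m in inhalation_dose}
hazard_index = sum(hq.values())
for m, v in hq.items():
    print(f"HQ({m}) = {v:.3f}")
print("Hazard index:", round(hazard_index, 3), "(potential concern if > 1)")
```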

Keywords: air, component-specific toxicity, human health risks, particulate matter

Procedia PDF Downloads 282
142 Enhancing Flood Modeling: Unveiling the Role of Hazard Parameters in Building Vulnerability

Authors: Mohammad Shoraka, Raulina Wojtkiewicz, Karthik Ramanathan

Abstract:

Following the devastating summer 2021 floods in Germany, catastrophe modelers realized that hazard parameters, such as flow velocity, flood duration, and debris flow, play a significant role in capturing the overall damage potential of such events. Accounting for the location-specific static depth as the only hazard intensity metric may lead to a substantial underestimation of the vulnerability of building stock and, eventually, the loss potential of such catastrophic events. As the flow velocity increases, the hydrodynamic forces acting on various building components are amplified. Longer flood duration leads to water permeating porous components, incurring additional cleanup costs that contribute to an overall increase in damage. Debris flow possesses the power to erode extensive sections of buildings, thus substantially augmenting the extent of losses. This paper introduces four flow velocity classes, ranging from no flow velocity to major velocity, along with two flood duration classes: short and long, in estimating the vulnerability of the building stock. Additionally, the study examines the impact of the presence of debris flow and its role in exacerbating flood damage. The paper delves into the effects of each of these parameters on building component damageability and their collective impact on the overall building vulnerability.
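
A hedged sketch of folding the extra hazard parameters into a depth-based damage function; the class boundaries and multipliers below are illustrative assumptions, not the calibrated vulnerability model.

```python
def flood_damage_ratio(depth_m, velocity_class, long_duration, debris):
    base = min(1.0, 0.25 * depth_m)           # depth-only damage ratio
    velocity_factor = {"none": 1.0, "minor": 1.1,
                       "moderate": 1.3, "major": 1.6}[velocity_class]
    duration_factor = 1.2 if long_duration else 1.0
    debris_factor = 1.4 if debris else 1.0
    return min(1.0, base * velocity_factor * duration_factor * debris_factor)

# Same 1 m of standing water, very different outcomes:
print(flood_damage_ratio(1.0, "none", False, False))  # static depth only
print(flood_damage_ratio(1.0, "major", True, True))   # compound hazard
```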

Keywords: catastrophe modeling, building vulnerability, hazard parameters, component damage function

Procedia PDF Downloads 36
141 Pollution Associated with Combustion in Stoves Burning Firewood (Eucalyptus) and Pellets (Radiata Pine): Effect of UVA Irradiation

Authors: Y. Vásquez, F. Reyes, P. Oyola, M. Rubio, J. Muñoz, E. Lissi

Abstract:

In several cities in Chile there is significant urban pollution, particularly in Santiago and in cities in the south, where biomass is used as fuel for heating and cooking in a large proportion of homes. This has generated interest in knowing which factors can be modulated to control the level of pollution. In this project, a photochemical chamber (14 m³) was conditioned and set up, equipped with gas monitors (e.g., CO, NOx, O3) and PM monitors (e.g., DustTrak, DMPS, Harvard impactors). This volume could be exposed to UVA lamps, producing a spectrum similar to that generated by the sun. In this chamber, PM and gas emissions associated with biomass burning were studied in the presence and absence of radiation. From the comparative analysis of a wood stove (Eucalyptus globulus) and a pellet stove (radiata pine), it can be concluded, to a first approximation, that 9-nitroanthracene, 4-nitropyrene, levoglucosan, water-soluble potassium, and CO present the characteristics of tracers. However, some of them show properties that interfere with this possibility. For example, levoglucosan is decomposed by radiation, while 9-nitroanthracene and 4-nitropyrene are both emitted and formed under radiation. Moreover, 9-nitroanthracene has a vapor pressure that leads to partitioning between the gas phase and particulate matter. From this analysis, it can be concluded that K⁺ is the compound that best meets the properties required of a tracer. The PM2.5 emission measured from the automatic pellet stove used in this thesis project was two orders of magnitude smaller than that registered for the manual wood stove. This has encouraged the use of pellet stoves for indoor heating, particularly in south-central Chile. However, the use of pellets is not without problems, since pellet stoves generate high concentrations of nitro-PAHs (secondary organic contaminants). In particular, 4-nitropyrene is a compound of high toxicity; furthermore, the primary and secondary particulate matter associated with pellet burning shows a decreased size distribution, which leads to deeper penetration of the particles and their toxic components into the respiratory system.

Keywords: biomass burning, photochemical chamber, particulate matter, tracers

Procedia PDF Downloads 152
140 Enhancement of Underwater Haze Image with Edge Reveal Using Pixel Normalization

Authors: M. Dhana Lakshmi, S. Sakthivel Murugan

Abstract:

As light passes from source to observer in the water medium, it is scattered by suspended particulate matter. This scattering effect plagues the captured images with non-uniform illumination, blurred details, halo artefacts, weak edges, etc. To overcome this, pixel normalization with an Amended Unsharp Mask (AUM) filter is proposed to enhance the degraded image. To validate the robustness of the proposed technique irrespective of atmospheric light, the considered datasets were collected at two locations. For those images, the maximum and minimum pixel intensity values are computed and used for normalization; then the AUM filter is applied to strengthen the blurred edges. Finally, the enhanced image is obtained with good illumination and contrast. Thus, the proposed technique removes the effect of scattering (de-hazing) and restores the perceptual information with enhanced edge detail. Both qualitative and quantitative analyses are performed using the standard no-reference metrics, the underwater image sharpness measure (UISM) and the underwater image quality measure (UIQM), which assess color, sharpness, and contrast for the images from both locations. It is observed that the proposed technique shows overwhelming performance compared to other deep-learning-based enhancement networks and traditional techniques, in an adaptive manner.
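
An illustrative sketch of the two stages described: min-max pixel normalization followed by an unsharp-mask edge boost. The exact "amendment" in the AUM filter is not specified in the abstract, so a plain Gaussian unsharp mask stands in for it, with assumed strength and blur parameters.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def enhance(img, amount=1.5, sigma=2.0):
    img = img.astype(float)
    norm = (img - img.min()) / (img.max() - img.min() + 1e-9)  # normalization
    blurred = gaussian_filter(norm, sigma=sigma)
    sharp = norm + amount * (norm - blurred)                   # unsharp mask
    return np.clip(sharp, 0.0, 1.0)

# usage: enhanced = enhance(underwater_frame) for each captured frame
```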

Keywords: underwater drone imagery, pixel normalization, thresholding, masking, unsharp mask filter

Procedia PDF Downloads 161
139 Optimization of Topology-Aware Job Allocation on a High-Performance Computing Cluster by Neural Simulated Annealing

Authors: Zekang Lan, Yan Xu, Yingkun Huang, Dian Huang, Shengzhong Feng

Abstract:

Jobs on high-performance computing (HPC) clusters can suffer significant performance degradation due to inter-job network interference. The topology-aware job allocation problem (TJAP) decides how to dedicate nodes to specific applications so as to mitigate inter-job network interference. In this paper, we study the window-based TJAP on a fat-tree network, aiming at minimizing the communication hop cost, a defined inter-job interference metric. The window-based approach to scheduling repeats periodically, taking the jobs in the queue and solving an assignment problem that maps jobs to the available nodes. Two special allocation strategies are considered, i.e., the static continuity assignment strategy (SCAS) and the dynamic continuity assignment strategy (DCAS). For the SCAS, a 0-1 integer program is developed. For the DCAS, an approach called neural simulated annealing (NSA) is proposed, which is an extension of simulated annealing (SA) that learns a repair operator and employs it in a guided heuristic search. The efficacy of NSA is demonstrated with a computational study against SA and SCIP. The results of numerical experiments indicate that both the model and the algorithm proposed in this paper are effective.
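
A toy sketch of the simulated-annealing core that NSA extends: a search over job-to-node assignments against a user-supplied hop-cost objective (the learned neural repair operator is beyond this sketch; cooling schedule and move rule are illustrative).

```python
import math
import random

def anneal(jobs, nodes, hop_cost, t0=10.0, cooling=0.995, steps=20_000):
    assign = {j: random.choice(nodes) for j in jobs}
    cost = hop_cost(assign)
    t = t0
    for _ in range(steps):
        j = random.choice(jobs)
        old = assign[j]
        assign[j] = random.choice(nodes)    # propose reassigning one job
        new_cost = hop_cost(assign)
        if new_cost > cost and random.random() >= math.exp((cost - new_cost) / t):
            assign[j] = old                 # reject the worsening move
        else:
            cost = new_cost                 # accept (always, if improving)
        t *= cooling
    return assign, cost
```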

Keywords: high-performance computing, job allocation, neural simulated annealing, topology-aware

Procedia PDF Downloads 69