Search results for: Object-Based

745 A Consumption-Based Hybrid Life Cycle Assessment of Carbon Footprints in California: High Footprints in Small Urban Households

Authors: Jukka Heinonen

Abstract:

Higher density reduces distances and private car dependency and thus reduces greenhouse gas emissions (GHGs). As a result, increased density has been given a central role among urban development targets. However, it is not just travel behavior that changes along with density. Rather, consumption patterns, or overall lifestyles, change along with changing urban structure, particularly with changing housing types and consumption opportunities. Furthermore, elevated consumption of services, more frequent flying and less intra-household sharing have been shown to potentially outweigh the gains from reduced driving in more dense urban settlements. In this study, the geography of carbon footprints (CFs) in California is analyzed, paying close attention to household size differences and the resulting economies-of-scale advantages and disadvantages. A hybrid life cycle assessment (LCA) framework is employed together with consumer expenditure data to assess the CFs. According to the study, small urban households have the highest CFs in California. Their transport-related emissions are significantly lower than those of the residents of less urbanized areas, but higher emissions from other consumption categories, together with the low degree of sharing of goods, outweigh the gains. Two functional units, per capita and per household, are used to analyze the CFs and to demonstrate the importance of household size. The lifestyle impacts visible through the consumption data are also discussed. The study suggests that there are still significant gaps in our understanding of the premises of low-carbon human settlements.
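The role of the two functional units can be made concrete with a toy calculation; the figures below are invented for illustration and are not the study's data:

```python
# Toy illustration of the two functional units: the same household
# footprint looks very different per household and per capita once
# household size (intra-household sharing) is accounted for.
footprints = {                        # tCO2e per household per year (invented)
    "small urban household":   (14.0, 1),
    "suburban family of four": (28.0, 4),
}
for name, (hh_cf, size) in footprints.items():
    print(f"{name}: {hh_cf} per household, {hh_cf / size:.1f} per capita")
```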

Keywords: Carbon footprint, life cycle assessment, consumption, lifestyle, household size, economies-of-scale.

744 Computational Identification of Bacterial Communities

Authors: Eleftheria Tzamali, Panayiota Poirazi, Ioannis G. Tollis, Martin Reczko

Abstract:

Stable bacterial polymorphism on a single limiting resource may appear if metabolic interactions that allow the exchange of essential nutrients take place between the evolved strains [8]. Towards an attempt to predict the possible outcome of long-running evolution experiments, a network based on the metabolic capabilities of homogeneous populations of every single-gene knockout strain (nodes) of the bacterium E. coli is reconstructed. Potential metabolic interactions (edges) are allowed only between strains of different metabolic capabilities. Bacterial communities are determined by finding cliques in this network. Growth of the emerged hypothetical bacterial communities is simulated by extending the metabolic flux balance analysis model of Varma et al. [2] to embody heterogeneous cell population growth in a mutual environment. Results from aerobic growth on 10 different carbon sources are presented. The upper bounds of the diversity that can emerge from single-cloned populations of E. coli, such as the number of strains that appear to metabolically differ from most strains (highly connected nodes), the maximum clique size, as well as the number of all the possible communities, are determined. Certain single-gene deletions are identified to consistently participate in our hypothetical bacterial communities under most environmental conditions, implying a pattern of growth-condition-invariant strains with similar metabolic effects. Moreover, evaluation of all the hypothetical bacterial communities under growth on pyruvate reveals heterogeneous populations that can exhibit superior growth performance when compared to the performance of the homogeneous wild-type population.
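A minimal sketch of the community-identification step, assuming a toy complementarity test in place of the genome-scale metabolic analysis (strain names and metabolite sets are invented):

```python
# Sketch: hypothetical bacterial communities as cliques in a
# metabolic-interaction network. Nodes are knockout strains; an edge
# is drawn only when two strains can cover each other's needs.
import networkx as nx

strains = {
    "dA": {"secretes": {"acetate"},   "needs": {"succinate"}},
    "dB": {"secretes": {"succinate"}, "needs": {"acetate"}},
    "dC": {"secretes": {"formate"},   "needs": {"acetate"}},
}

G = nx.Graph()
G.add_nodes_from(strains)
for u in strains:
    for v in strains:
        if u < v:
            # Potential interaction: one strain's secretion meets the
            # other's nutritional need (toy criterion).
            if (strains[u]["secretes"] & strains[v]["needs"]) or \
               (strains[v]["secretes"] & strains[u]["needs"]):
                G.add_edge(u, v)

# Candidate communities = cliques of mutually interacting strains.
print(list(nx.find_cliques(G)))
```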

Keywords: Bacterial polymorphism, clique identification, dynamic FBA, evolution, metabolic interactions.

743 Performance of the Aptima® HIV-1 Quant Dx Assay on the Panther System

Authors: Siobhan O’Shea, Sangeetha Vijaysri Nair, Hee Cheol Kim, Charles Thomas Nugent, Cheuk Yan William Tong, Sam Douthwaite, Andrew Worlock

Abstract:

The Aptima® HIV-1 Quant Dx Assay is a fully automated assay on the Panther system. It is based on Transcription-Mediated Amplification and real-time detection technologies. This assay is intended for monitoring HIV-1 viral load in plasma specimens and for the detection of HIV-1 in plasma and serum specimens. Nine hundred and seventy-nine specimens selected at random from routine testing at St Thomas’ Hospital, London were anonymised and used to compare the performance of the Aptima HIV-1 Quant Dx assay and the Roche COBAS® AmpliPrep/COBAS® TaqMan® HIV-1 Test, v2.0. Two hundred and thirty-four specimens gave quantitative HIV-1 viral load results in both assays. The quantitative results reported by the Aptima assay were comparable to those reported by the Roche COBAS AmpliPrep/COBAS TaqMan HIV-1 Test, v2.0, with a linear regression slope of 1.04 and an intercept of -0.097. The Aptima assay detected HIV-1 in more samples than the COBAS assay. This was not due to a lack of specificity of the Aptima assay, because this assay gave 99.83% specificity on testing plasma specimens from 600 HIV-1 negative individuals. To understand the reason for this higher detection rate, low-level panels made from the HIV-1 3rd International Standard (NIBSC 10/152) and clinical samples of various subtypes were tested side by side in both assays. The Aptima assay was more sensitive than the COBAS assay. The good sensitivity, specificity and agreement with other commercial assays make the HIV-1 Quant Dx Assay appropriate for both viral load monitoring and detection of HIV-1 infections.
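The slope/intercept comparison is ordinary linear regression on paired log10 viral loads; a minimal sketch with invented values (not the study data):

```python
# Sketch: method-comparison regression between two assays on
# log10 copies/mL, analogous to the reported slope 1.04 / intercept -0.097.
import numpy as np

aptima = np.array([2.1, 3.0, 3.9, 4.8, 5.7])   # log10 cp/mL, assay A (toy)
cobas  = np.array([2.0, 2.9, 3.8, 4.7, 5.6])   # log10 cp/mL, assay B (toy)

slope, intercept = np.polyfit(cobas, aptima, 1)
print(f"slope={slope:.2f}, intercept={intercept:.3f}")
```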

Keywords: HIV viral load, Aptima, Roche, Panther system.

742 Machine Learning Facing the Behavioral Noise Problem in Imbalanced Data Using One Side Behavioral Noise Reduction: Application to Fraud Detection

Authors: Salma El Hajjami, Jamal Malki, Alain Bouju, Mohammed Berrada

Abstract:

With the expansion of machine learning and data mining in the context of Big Data analytics, a common problem that affects data is class imbalance: an imbalanced distribution of instances belonging to each class. This problem is present in many real-world applications such as fraud detection, network intrusion detection, medical diagnostics, etc. In these cases, data instances labeled negatively are significantly more numerous than the instances labeled positively. When this difference is too large, the learning system may face difficulty when tackling this problem, since it is initially designed to work in relatively balanced class distribution scenarios. Another important problem, which usually accompanies these imbalanced data, is the overlapping of instances between the two classes, commonly referred to as noise or overlapping data. In this article, we propose an approach called One Side Behavioral Noise Reduction (OSBNR). This approach presents a way to deal with the problem of class imbalance in the presence of a high noise level. OSBNR is based on two steps. Firstly, a cluster analysis is applied to group similar instances from the minority class into several behavior clusters. Secondly, we select and eliminate the instances of the majority class, considered as behavioral noise, which overlap with behavior clusters of the minority class. The results of experiments carried out on a representative public dataset confirm that the proposed approach is efficient for the treatment of class imbalance in the presence of noise.
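A minimal sketch of the two OSBNR steps, assuming k-means for the (unspecified) cluster analysis and a nearest-centroid radius test for the overlap criterion:

```python
# Sketch of OSBNR's two steps: (1) cluster the minority class into
# behavior clusters; (2) drop majority instances that fall inside them.
# k-means and the radius test are stand-in assumptions.
import numpy as np
from sklearn.cluster import KMeans

def osbnr_sketch(X_maj, X_min, n_clusters=3):
    km = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit(X_min)
    keep = np.ones(len(X_maj), dtype=bool)
    for c in range(n_clusters):
        members = X_min[km.labels_ == c]
        radius = np.max(np.linalg.norm(members - km.cluster_centers_[c], axis=1))
        dist = np.linalg.norm(X_maj - km.cluster_centers_[c], axis=1)
        keep &= dist > radius        # eliminate overlapping majority points
    return X_maj[keep]

rng = np.random.default_rng(0)
X_maj = rng.normal(0, 1.5, size=(500, 2))   # majority (negative) class
X_min = rng.normal(0, 0.5, size=(40, 2))    # minority (positive) class
print(len(osbnr_sketch(X_maj, X_min)))      # majority instances kept
```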

Keywords: Machine learning, Imbalanced data, Data mining, Big data.

741 Rear Seat Belt Use in Developing Countries: A Case Study from the United Arab Emirates

Authors: Salaheddine Bendak, Sara S. Alnaqbi

Abstract:

The seat belt is a vital tool in improving traffic safety conditions and minimising injuries due to traffic accidents. Most developing countries face big problems associated with the human and financial losses due to traffic accidents. One way to minimise these losses is the use of seat belts by passengers, both in the front and rear seats of a vehicle; however, at the same time, close to nothing is known about the rates of seat belt utilisation among rear-seat passengers in many developing countries. Therefore, there is a need to estimate these rates in order to know the extent of this problem and how people interact with traffic safety measures like seat belts, and to find demographic characteristics that contribute to the wearing or non-wearing of seat belts, with the aim of finding solutions to improve wearing rates. In this paper, an observational study was done to gather data on restraint use in motor vehicle rear seats at eight observational stations in a rapidly developing country, the United Arab Emirates (UAE), and to estimate a use rate for the whole country. Also, a questionnaire was used in order to study demographic characteristics affecting the wearing of seat belts in rear seats. Results of the observational study showed that the overall wearing/usage rate was 12.3%, which is considered very low when compared to other countries. Survey results show that single, male, less educated passengers from Arab and South Asian backgrounds report using seat belts less than others. Finally, solutions are put forward to improve this wearing rate based on the results of this study.

Keywords: Seat belts, traffic crashes, United Arab Emirates, rear seats.

740 Evaluation of Produced Water Treatment Using Advanced Oxidation Processes and Sodium Ferrate(VI)

Authors: Erica T. R. Mendonça, Caroline M. B. de Araujo, Osvaldo Chiavone Filho, Maurício A. da Motta Sobrinho

Abstract:

Oil and gas exploration is an essential activity for modern society, although the supply of its global demand has caused considerable damage to the environment, mainly due to the generation of produced water, an effluent associated with the oil and gas produced during oil extraction. The aim of this study is to evaluate the treatment of produced water in order to reduce its oils and greases (O&G) content, by using flotation as a pre-treatment combined with oxidation for the degradation of the remaining organic load. Thus, Advanced Oxidation Processes (AOPs) using both Fenton and photo-Fenton reactions were tested, as well as a chemical oxidation treatment using sodium ferrate(VI), Na2[FeO4], as a strong oxidant. All the studies were carried out using real samples of produced water from the petroleum industry. The oxidation process using the ferrate(VI) ion was studied based on factorial experimental designs. The factorial design was used in order to study how the variables pH, temperature and concentration of Na2[FeO4] influence the O&G levels. For the treatment using the ferrate(VI) ion, the results showed that the best operating point is obtained when the temperature is 28 °C, the pH is 3, and a 2000 mg.L-1 solution of Na2[FeO4] is used. This experiment achieved a final O&G level of 4.7 mg.L-1, which means a 94% removal efficiency of oils and greases. Comparing the Fenton and photo-Fenton processes, it was observed that the Fenton reaction did not provide a good reduction of O&G (around 20% only). On the other hand, a degradation of approximately 80.5% of oils and greases was obtained after a period of seven hours of treatment using the photo-Fenton process, which indicates that the best process combination was flotation followed by the photo-Fenton reaction using solar radiation, with an overall O&G removal efficiency of approximately 89%.
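The removal efficiency quoted here is the usual normalized concentration drop; back-computing from the reported 4.7 mg.L-1 final level and 94% removal implies a feed of roughly 78 mg.L-1, assuming both figures refer to the same stream:

\[
\eta = \frac{C_0 - C_f}{C_0}\times 100\%, \qquad
C_0 \approx \frac{4.7\ \mathrm{mg\,L^{-1}}}{1 - 0.94} \approx 78\ \mathrm{mg\,L^{-1}} .
\]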

Keywords: Advanced oxidation process, ferrate(VI) ion, oils and greases removal, produced water treatment.

739 Comparative Study of Pasting Properties of High Fibre Plantain Based Flour Intended for Diabetic Food (Fufu)

Authors: C. C. Okafor, E. E. Ugwu

Abstract:

A comparative study on the feasibility of producing instant high-fibre plantain flour for diabetic fufu by blending soy residue with different plantain (Musa spp) varieties (Horn, False Horn and French), all sieved at 60 mesh and mixed in a ratio of 60:40, was carried out by analyzing the pasting properties using standard analytical methods. Results show that VIIIS60 had the highest peak viscosity (303.75 RVU), trough value (182.08 RVU) and final viscosity (284.50 RVU), and the lowest breakdown viscosity (79.58 RVU), setback value (88.17 RVU), peak time (4.36 min) and pasting temperature (81.18 °C), and differed significantly (p < 0.05) from the other samples. VIS60 had the lowest peak viscosity (192.25 RVU), trough value (112.67 RVU) and final viscosity (211.92 RVU), but the highest breakdown viscosity (121.61 RVU), peak time (4.66 min) and pasting temperature (82.35 °C), and differed significantly (p < 0.05) from the other samples. VIIS60 had intermediate values: peak viscosity (236.67 RVU), trough value (116.58 RVU), breakdown viscosity (120.08 RVU), setback viscosity (167.92 RVU), peak time (4.39 min) and pasting temperature (81.44 °C), and differed significantly (p < 0.05) from the other samples. The high final viscosity and low setback values of the French variety with soy residue blended at 60 mesh particle size recommend this variety and fibre composition as optimum for the production of an instant plantain-soy residue flour blend for diabetic fufu.

Keywords: Plantain, soy residue, pasting properties, particle size.

738 Creep Behaviour of Heterogeneous Timber-UHPFRC Beams Assembled by Bonding: Experimental and Analytical Investigation

Authors: K. Kong, E. Ferrier, L. Michel

Abstract:

The purpose of this research was to investigate the creep behaviour of heterogeneous timber-UHPFRC beams. New developments have been made to further improve the structural performance, such as strengthening of the timber (glulam) beam by bonding composite material combined with an ultra-high performance fibre reinforced concrete (UHPFRC) internally reinforced with or without carbon fibre reinforced polymer (CFRP) bars. However, in the design of wooden structures, in addition to the criteria of strength and stiffness, deformability due to the creep of wood, especially in horizontal elements, is also a design criterion. Glulam, UHPFRC and CFRP may be an interesting composite mix to respond to the issue of the creep behaviour of composite structures made of different materials with different rheological properties. In this paper, we describe an experimental and analytical investigation of the creep performance of glulam-UHPFRC-CFRP beams assembled by bonding. The experimental investigation of creep behaviour was conducted in different environments, indoors and outdoors, under constant loading for approximately a year. The measured results are compared with numerical ones obtained by an analytical model. This model was developed to predict the creep response of the glulam-UHPFRC-CFRP beams based on the creep characteristics of the individual components. The results show that heterogeneous glulam-UHPFRC beams provide an improvement in both strength and stiffness, and can also effectively reduce the creep deflection of wooden beams.

Keywords: Carbon fibre-reinforced polymer (CFRP) bars, creep behaviour, glulam, ultra-high performance fibre reinforced concrete (UHPFRC).

737 An Intelligent Water Drop Algorithm for Solving Economic Load Dispatch Problem

Authors: S. Rao Rayapudi

Abstract:

Economic Load Dispatch (ELD) is a method of determining the most efficient, low-cost and reliable operation of a power system by dispatching the available electricity generation resources to supply the load on the system. The primary objective of economic dispatch is to minimize the total cost of generation while honoring the operational constraints of the available generation resources. In this paper, an intelligent water drop (IWD) algorithm is proposed to solve the ELD problem with the objective of minimizing the total cost of generation. The intelligent water drop algorithm is a swarm-based, nature-inspired optimization algorithm inspired by natural rivers. A natural river often finds good paths among the many possible paths on its way from source to destination, and finally finds an almost optimal path to its destination. These ideas are embedded into the proposed algorithm for solving the economic load dispatch problem. The main advantages of the proposed technique are that it is easy to implement and capable of finding a feasible, near-global-optimal solution with less computational effort. In order to illustrate the effectiveness of the proposed method, it has been tested on 6-unit and 20-unit test systems with incremental fuel cost functions taking into account the valve-point loading effects. Numerical results show that the proposed method has good convergence properties and better solution quality than other algorithms reported in the recent literature.
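The phrase "valve-point loading effects" conventionally refers to the rectified-sine ripple added to the quadratic fuel cost; a standard formulation of the resulting ELD objective (the unit-specific coefficients a_i, b_i, c_i, e_i, f_i are data, not values from this paper) is:

\[
F_i(P_i) = a_i + b_i P_i + c_i P_i^{2} + \left| e_i \sin\!\big(f_i\,(P_i^{\min} - P_i)\big) \right|,
\qquad
\min \sum_{i=1}^{N} F_i(P_i) \ \ \text{s.t.} \ \sum_{i=1}^{N} P_i = P_D + P_L ,
\]

where \(P_D\) is the system demand and \(P_L\) the transmission loss.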

Keywords: Economic load dispatch, Transmission loss, Optimization, Valve point loading, Intelligent Water Drop Algorithm.

736 Interactive Effects in Blended Learning Mode: Exploring Hybrid Data Sources and Iterative Linkages

Authors: Hock Chuan Lim

Abstract:

This paper presents an approach for identifying interactive effects using Network Science (NS) supported by Social Network Analysis (SNA) techniques. Based on the general observation that learning processes and behaviors are shaped by social relationships and influenced by the learning environment, the central idea was to understand both the human and non-human interactive effects for a blended learning mode of delivery of computer science modules. Important findings include (a) the importance of non-human nodes in influencing centrality and transfer; (b) the degree to which non-human and human connectivity impacts learning. This project reveals that the NS pattern and connectivity, as measured by node relationships, offer an alternative approach for hypothesis generation and the design of qualitative data collection. An iterative process further reinforces the analysis. Whereas experimental simulation is itself an interesting alternative option, a hybrid combination of experimental simulation and qualitative data collection presents itself as a promising and viable means to study complex scenarios such as a blended learning delivery mode. The primary value of this paper lies in the design of the approach for studying the interactive effects of human (social nodes) and non-human (learning/study environment, Information and Communication Technologies (ICT) infrastructure nodes) components. In conclusion, this project adds to the understanding and the use of SNA to model and study interactive effects in blended social learning.
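A minimal sketch of treating human and non-human actors as nodes of one graph and reading off centrality, with invented nodes and edges:

```python
# Sketch: mixing human (student) and non-human (LMS, forum) nodes in one
# graph, as the approach describes; centrality then exposes how much
# non-human nodes mediate transfer. All names and edges are hypothetical.
import networkx as nx

G = nx.Graph()
G.add_edges_from([
    ("studentA", "LMS"), ("studentB", "LMS"), ("studentC", "forum"),
    ("studentA", "forum"), ("studentA", "studentB"),
])
centrality = nx.degree_centrality(G)
# Non-human nodes can rank highest, signalling their influence on learning.
print(sorted(centrality.items(), key=lambda kv: -kv[1]))
```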

Keywords: Blended learning, network science, social learning, social network analysis, study environment.

735 An Efficient Tool for Mitigating Voltage Unbalance with Reactive Power Control of Distributed Grid-Connected Photovoltaic Systems

Authors: Malinwo Estone Ayikpa

Abstract:

With the rapid increase of grid-connected PV systems over the last decades, genuine challenges have arisen for engineers and professionals of the energy field in the planning and operation of existing distribution networks with the integration of new generation sources. The conventional distribution network was not designed to receive generation other than the main power supply. The tools generally used to analyze such networks become inefficient and cannot take into account all the constraints related to the operation of grid-connected PV systems. Some of these constraints are voltage control difficulty, reverse power flow, and especially voltage unbalance, which can be due to the poor distribution of single-phase PV systems in the network. In order to analyze the impact of the connection of small and large numbers of PV systems to distribution networks, this paper presents an efficient optimization tool that minimizes voltage unbalance in three-phase distribution networks with active and reactive power injections from the allocation of single-phase and three-phase PV plants. Reactive power can be generated or absorbed using the available capacity and the adjustable power factor of the inverter. A good reduction of voltage unbalance can be achieved by reactive power control of the PV systems. The presented tool is based on the three-phase current injection method, and the PV systems are modeled via an equivalent circuit. The primal-dual interior point method is used to obtain the optimal operating points for the systems.
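The abstract does not spell out its unbalance metric; a common choice, and a reasonable reading here, is the voltage unbalance factor built from symmetrical components:

\[
\mathrm{VUF}\,(\%) = \frac{|V_2|}{|V_1|}\times 100,
\qquad
\begin{pmatrix} V_0\\ V_1\\ V_2 \end{pmatrix}
= \frac{1}{3}
\begin{pmatrix} 1 & 1 & 1\\ 1 & a & a^2\\ 1 & a^2 & a \end{pmatrix}
\begin{pmatrix} V_a\\ V_b\\ V_c \end{pmatrix},
\quad a = e^{j2\pi/3},
\]

where \(V_1\) and \(V_2\) are the positive- and negative-sequence voltages.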

Keywords: Photovoltaic generation, primal-dual interior point method, three-phase optimal power flow, unbalanced system.

734 Multi-Agent Systems Applied in the Modeling and Simulation of Biological Problems: A Case Study in Protein Folding

Authors: Pedro Pablo González Pérez, Hiram I. Beltrán, Arturo Rojo-Domínguez, Máximo Eduardo Sánchez Gutiérrez

Abstract:

The multi-agent system approach has proven to be an effective and appropriate abstraction level to construct whole models of a diversity of biological problems, integrating aspects which can be found both in "micro" and "macro" approaches when modeling this type of phenomena. Taking into account these considerations, this paper presents the important computational characteristics to be gathered into a novel bioinformatics framework built upon a multi-agent architecture. The version of the tool presented herein allows studying and exploring complex problems belonging principally to structural biology, such as protein folding. The bioinformatics framework is used as a virtual laboratory to explore a minimalist model of protein folding as a test case. In order to show the laboratory concept of the platform as well as its flexibility and adaptability, we studied the folding of two particular sequences, one of 45-mer and another of 64-mer, both described by an HP model (only hydrophobic and polar residues) on a coarse-grained 2D square lattice. As discussed in this work, these two sequences were chosen as breaking points for the platform, in order to determine the tools to be created or improved so as to meet the needs of the computation and analysis of a given tough sequence. The underlying philosophy is that the continuous study of sequences itself provides important points to be added to the platform, improving its efficiency over time, as is demonstrated herein.
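For reference, the HP lattice model scores a conformation by its non-bonded hydrophobic contacts; a minimal sketch on a 2D square lattice, with an invented 6-mer rather than the 45-mer/64-mer studied here:

```python
# Sketch: energy of an HP chain on a 2D square lattice, E = -1 per
# non-bonded H-H lattice contact. Sequence and coordinates are toys.
def hp_energy(seq, coords):
    """seq: 'H'/'P' string; coords: list of (x, y) lattice positions."""
    pos = {c: i for i, c in enumerate(coords)}
    energy = 0
    for i, (x, y) in enumerate(coords):
        for nb in ((x + 1, y), (x, y + 1)):        # each contact counted once
            j = pos.get(nb)
            if j is not None and abs(i - j) > 1:   # skip chain-bonded pairs
                if seq[i] == 'H' and seq[j] == 'H':
                    energy -= 1
    return energy

seq = "HPHPPH"
coords = [(0, 0), (1, 0), (1, 1), (0, 1), (-1, 1), (-1, 0)]  # self-avoiding walk
print(hp_energy(seq, coords))   # -1: one H-H contact between residues 0 and 5
```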

Keywords: multi-agent systems, blackboard-based agent architecture, bioinformatics framework, virtual laboratory, protein folding.

733 Discovery of Quantified Hierarchical Production Rules from Large Set of Discovered Rules

Authors: Tamanna Siddiqui, M. Afshar Alam

Abstract:

Automated rule discovery is, due to its applicability, one of the most fundamental and important methods in KDD. It has been an active research area in the recent past. Hierarchical representation allows us to easily manage the complexity of knowledge, to view the knowledge at different levels of detail, and to focus our attention on the interesting aspects only. One such efficient and easy-to-understand system is the Hierarchical Production Rule (HPR) system. A HPR, a standard production rule augmented with generality and specificity information, is of the following form: Decision If <condition> Generality <generality information> Specificity <specificity information>. HPR systems are capable of handling the taxonomical structures inherent in knowledge about the real world. This paper focuses on the issue of mining quantified rules with a crisp hierarchical structure using a Genetic Programming (GP) approach to knowledge discovery. The post-processing scheme presented in this work uses quantified production rules as the initial individuals of GP and discovers the hierarchical structure. In the proposed approach, rules are quantified using Dempster-Shafer theory. Suitable genetic operators are proposed for the suggested encoding. Based on the Subsumption Matrix (SM), an appropriate fitness function is suggested. Finally, Quantified Hierarchical Production Rules (HPRs) are generated from the discovered hierarchy using Dempster-Shafer theory. Experimental results are presented to demonstrate the performance of the proposed algorithm.
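The quantification step leans on Dempster-Shafer theory; for reference, Dempster's rule for combining two basic probability assignments m1 and m2 over subsets of the frame of discernment is:

\[
(m_1 \oplus m_2)(A) = \frac{\displaystyle\sum_{B \cap C = A} m_1(B)\, m_2(C)}{1 - \displaystyle\sum_{B \cap C = \emptyset} m_1(B)\, m_2(C)}, \qquad A \neq \emptyset ,
\]

where the denominator renormalizes away the conflicting mass.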

Keywords: Knowledge discovery in databases, quantification, Dempster-Shafer theory, genetic programming, hierarchy, subsumption matrix.

732 Emotional Intelligence as Predictor of Academic Success among Third Year College Students of PIT

Authors: Sonia Arradaza-Pajaron

Abstract:

College students are expected to engage in on-the-job training or an internship for the completion of a course requirement prior to graduation. In this scenario, they are exposed to the real world of work outside their training institution. This study was conducted to find out their readiness, both emotionally and academically. A descriptive-correlational research design was employed and a random sampling technique was utilized among 265 randomly selected third year college students of PIT, SY 2014-15. A questionnaire on Emotional Intelligence (bearing the four components, namely emotional literacy, emotional quotient competence, values and beliefs, and emotional quotient outcomes) was fielded to the respondents, and GWA was extracted from the school's automated records. Data collected were statistically treated using percentage, weighted mean and Pearson r for correlation.

Results revealed that the respondents’ emotional intelligence level is moderately high, while their academic performance is good. A highly significant relationship was found between the EI component Emotional Literacy and academic performance, while only a significant relationship was found between Emotional Quotient Outcomes and academic performance. Therefore, if EI significantly influences academic performance, it is possible that OJT performance can also be affected, either positively or negatively. Thus, EI can be considered a predictor of academic and academic-related performance. Based on the results, it is recommended that the institution look into embedding emotional intelligence (especially the emotional literacy and emotional quotient outcomes of the students) as part of the college curriculum. This can be done if the school has an effective emotional intelligence framework or program manned by qualified and competent teachers and guidance counselors in the different colleges in its implementation.

Keywords: Academic performance, emotional intelligence, emotional literacy, emotional quotient competence, emotional quotient outcomes, values and beliefs.

731 Dynamic Analysis of Porous Media Using Finite Element Method

Authors: M. Pasbani Khiavi, A. R. M. Gharabaghi, K. Abedi

Abstract:

The mechanical behavior of porous media is governed by the interaction between its solid skeleton and the fluid existing inside its pores. The interaction occurs through the interface of grains and fluid. The traditional analysis methods for porous media, based on effective stress and Darcy's law, are unable to account for these interactions. For an accurate analysis, the porous medium is represented as a fluid-filled porous solid on the basis of the Biot theory of wave propagation in poroelastic media. In the Biot formulation, the equations of motion of the soil mixture are coupled with the global mass balance equations to describe the realistic behavior of porous media. Because of irregular geometry, the domain is generally treated as an assemblage of finite elements. In this investigation, the numerical formulation of the field equations governing the dynamic response of fluid-saturated porous media is analyzed and employed for the study of transient wave motion. A finite element model is developed and implemented into a computer code called DYNAPM for the dynamic analysis of porous media. The weighted residual method with 8-node elements is used for developing the finite element model, and the analysis is carried out in the time domain considering dynamic excitation and gravity loading. A Newmark time integration scheme, an unconditionally stable implicit method, is developed to solve the time-discretized equations. Finally, some numerical examples are presented to show the accuracy and capability of the developed model for a wide variety of behaviors of porous media.
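A minimal sketch of one step of the unconditionally stable average-acceleration Newmark scheme (beta = 1/4, gamma = 1/2) for M*a + C*v + K*u = f; the scalar matrices are toys, not the coupled Biot system assembled by DYNAPM:

```python
# Sketch: one Newmark-beta time step in effective-stiffness form.
import numpy as np

def newmark_step(M, C, K, f_next, u, v, a, dt, beta=0.25, gamma=0.5):
    # Effective stiffness and effective load at t_{n+1}
    Keff = K + gamma / (beta * dt) * C + M / (beta * dt**2)
    rhs = (f_next
           + M @ (u / (beta * dt**2) + v / (beta * dt) + (1/(2*beta) - 1) * a)
           + C @ (gamma / (beta * dt) * u + (gamma/beta - 1) * v
                  + dt * (gamma/(2*beta) - 1) * a))
    u_next = np.linalg.solve(Keff, rhs)
    a_next = (u_next - u) / (beta * dt**2) - v / (beta * dt) - (1/(2*beta) - 1) * a
    v_next = v + dt * ((1 - gamma) * a + gamma * a_next)
    return u_next, v_next, a_next

M = np.eye(1); C = 0.1 * np.eye(1); K = 4.0 * np.eye(1)   # toy 1-DOF system
u, v, a = np.zeros(1), np.zeros(1), np.zeros(1)
u, v, a = newmark_step(M, C, K, np.array([1.0]), u, v, a, dt=0.01)
print(u, v, a)
```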

Keywords: Dynamic analysis, interaction, porous media, time domain.

730 An Improved Total Variation Regularization Method for Denoising Magnetocardiography

Authors: Yanping Liao, Congcong He, Ruigang Zhao

Abstract:

The application of magnetocardiography signals to detect cardiac electrical function is a technology developed in recent years. The magnetocardiography signal is detected with Superconducting Quantum Interference Devices (SQUIDs) and has considerable advantages over electrocardiography (ECG). Extracting the Magnetocardiography (MCG) signal, which is buried in noise, is a critical issue to be resolved in cardiac monitoring systems and MCG applications. In order to remove the severe background noise, the Total Variation (TV) regularization method is proposed to denoise the MCG signal. The approach transforms the denoising problem into a minimization optimization problem, and the majorization-minimization algorithm is applied to iteratively solve it. However, the traditional TV regularization method tends to cause a step effect and lacks constraint adaptability. In this paper, an improved TV regularization method for denoising the MCG signal is proposed to improve the denoising precision. The improvement of this method is divided into three parts. First, high-order TV is applied to reduce the step effect, and the corresponding second-derivative matrix is used to substitute for the first-order one. Then, the positions of the non-zero elements in the second-order derivative matrix are determined based on the peak positions detected by a detection window. Finally, adaptive constraint parameters are defined to eliminate noise and preserve signal peak characteristics. Theoretical analysis and experimental results show that this algorithm can effectively improve the output signal-to-noise ratio and has superior performance.
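A minimal sketch of the classical first-order TV baseline solved by majorization-minimization, which is the starting point this paper improves on (its refinement with a second-order derivative matrix and adaptive, peak-aware constraint parameters is not reproduced here):

```python
# Sketch: 1-D TV denoising, min 0.5*||y - x||^2 + lam*||D x||_1,
# solved by MM with the standard quadratic majorizer of |t|.
import numpy as np

def tv_denoise_mm(y, lam=2.0, n_iter=50):
    n = len(y)
    D = np.diff(np.eye(n), axis=0)                 # first-order difference matrix
    x = y.copy()
    for _ in range(n_iter):
        Lam = np.diag(np.abs(D @ x) + 1e-12)       # majorizer weights at x_k
        z = np.linalg.solve(Lam / lam + D @ D.T, D @ y)
        x = y - D.T @ z                            # MM update
    return x

rng = np.random.default_rng(1)
clean = np.concatenate([np.zeros(40), np.ones(40), np.zeros(40)])
noisy = clean + 0.25 * rng.standard_normal(clean.size)
print(np.round(tv_denoise_mm(noisy)[35:45], 2))    # edge recovered, noise flattened
```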

Keywords: Constraint parameters, derivative matrix, magnetocardiography, regular term, total variation.

729 Neural Network Evaluation of FRP Strengthened RC Buildings Subjected to Near-Fault Ground Motions having Fling Step

Authors: Alireza Mortezaei, Kimia Mortezaei

Abstract:

Recordings from recent earthquakes have provided evidence that ground motions in the near field of a rupturing fault differ from ordinary ground motions, as they can contain a large energy, or "directivity", pulse. This pulse can cause considerable damage during an earthquake, especially to structures with natural periods close to those of the pulse. Failures of modern engineered structures observed within the near-fault region in recent earthquakes have revealed the vulnerability of existing RC buildings against pulse-type ground motions. This may be due to the fact that these modern structures had been designed primarily using the design spectra of available standards, which have been developed using stochastic processes with the relatively long duration that characterizes more distant ground motions. Many recently designed and constructed buildings may therefore require strengthening in order to perform well when subjected to near-fault ground motions. Fiber Reinforced Polymers are considered a viable alternative, due to their relatively easy and quick installation, low life cycle costs and zero maintenance requirements. The objective of this paper is to investigate the adequacy of Artificial Neural Networks (ANN) to determine the three-dimensional dynamic response of FRP-strengthened RC buildings under near-fault ground motions. For this purpose, an ANN model is proposed to estimate the base shear force, base bending moments and roof displacement of buildings in two directions. A training set of 168 buildings and a validation set of 21 buildings are produced from finite element analysis results for the dynamic response of RC buildings under near-fault earthquakes. It is demonstrated that the neural network based approach is highly successful in determining the response.
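A minimal sketch of such a multi-output ANN regression setup, assuming scikit-learn's MLPRegressor and synthetic stand-in features (the actual inputs and network architecture are not specified in the abstract):

```python
# Sketch: multi-output MLP mapping building/record features to response
# quantities (base shear, base moments, roof displacement), mirroring
# the 168-training / 21-validation split described. Data are synthetic.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(2)
X_train = rng.random((168, 6))   # e.g. period, height, FRP ratio, PGA, PGV, pulse period
Y_train = rng.random((168, 4))   # base shear / moments (two directions), roof displacement
X_val   = rng.random((21, 6))

ann = MLPRegressor(hidden_layer_sizes=(20,), max_iter=2000, random_state=0)
ann.fit(X_train, Y_train)
print(ann.predict(X_val).shape)  # (21, 4): one response vector per validation building
```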

Keywords: Seismic evaluation, FRP, neural network, near-fault ground motion

728 Characterization of Printed Reflectarray Elements on Variable Substrate Thicknesses

Authors: M. Y. Ismail, Arslan Kiyani

Abstract:

Narrow bandwidth and high loss performance limit the use of reflectarray antennas in some applications. This article reports on the feasibility of employing strategic reflectarray resonant elements to characterize the reflectivity performance of reflectarrays in the X-band frequency range. Strategic reflectarray resonant elements incorporating variable substrate thicknesses ranging from 0.016λ to 0.052λ have been analyzed in terms of reflection loss and reflection phase performance. The effect of substrate thickness has been validated by using the waveguide scattering parameter technique. It has been demonstrated that as the substrate thickness is increased from 0.508 mm to 1.57 mm, the measured reflection loss of the dipole element decreases from 5.66 dB to 3.70 dB, with an increment in 10% bandwidth from 39 MHz to 64 MHz. Similarly, the measured reflection loss of the triangular loop element decreases from 20.25 dB to 7.02 dB, with an increment in 10% bandwidth from 12 MHz to 23 MHz. The results also show a significant decrease in the slope of the reflection phase curve. A Figure of Merit (FoM) has been defined for the comparison of the static phase range of the resonant elements under consideration. Moreover, a novel numerical model based on analytical equations has been established, incorporating the material properties of the dielectric substrate and the electrical properties of different reflectarray resonant elements, to obtain the progressive phase distribution for each individual reflectarray resonant element.

Keywords: Numerical model, Reflectarray resonant elements, Scattering parameter measurements, Variable substrate thickness.

727 Exploring Influence Range of Tainan City Using Electronic Toll Collection Big Data

Authors: Chen Chou, Feng-Tyan Lin

Abstract:

Big Data has attracted a lot of attention in many fields for analyzing research issues based on large volumes of data. Electronic Toll Collection (ETC) is one of the Intelligent Transportation System (ITS) applications in Taiwan, used to record the starting point, end point, distance and travel time of vehicles on the national freeway. This study, taking advantage of ETC big data combined with urban planning theory, attempts to explore various phenomena of inter-city transportation activities. ETC data, a government open data source, are numerous, complete and quickly updated. One may recall that living areas have been delimited by location, population, area and subjective consciousness. However, these factors cannot appropriately reflect people’s movement paths in daily life. In this study, the concept of "Living Area" is replaced by "Influence Range" to show the dynamics and variation with time and the purposes of activities. This study uses data mining with Python and Excel, and visualizes the number of trips with GIS, to explore the influence range of Tainan city and the purposes of trips, and to discuss how living areas are currently delimited. It creates a dialogue between the concepts of "Central Place Theory" and "Living Area", presents a new point of view, and integrates the application of big data, urban planning and transportation. The findings will be valuable for resource allocation and land apportionment in spatial planning.
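A minimal sketch of the trip-filtering step, assuming hypothetical column names and a "TN" gantry prefix for Tainan (the real ETC schema differs):

```python
# Sketch: counting freeway trips that start or end around Tainan from
# ETC records; the resulting O-D counts are what gets mapped in GIS.
import pandas as pd

trips = pd.DataFrame({
    "entry_station": ["TN1", "KH2", "TN3", "TP5"],   # invented gantry codes
    "exit_station":  ["TP1", "TN2", "KH1", "TN1"],
    "travel_time_min": [115, 48, 52, 120],
})
tainan = trips[trips["entry_station"].str.startswith("TN")
               | trips["exit_station"].str.startswith("TN")]
print(tainan.groupby(["entry_station", "exit_station"]).size())
```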

Keywords: Big Data, ITS, influence range, living area, central place theory, visualization.

726 Numerical Investigation of Developing Mixed Convection in Isothermal Circular and Annular Sector Ducts

Authors: Ayad A. Abdalla, Elhadi I. Elhadi, Hisham A. Elfergani

Abstract:

Developing mixed convection in circular and annular sector ducts is investigated numerically for steady laminar flow of an incompressible Newtonian fluid with Pr = 0.7 and a wide range of Grashof numbers (0 ≤ Gr ≤ 10^7). The investigation is limited to the case of heating in circular and annular sector ducts with an apex angle of 2ϕ = π/4 for the thermal boundary condition of uniform wall temperature, axially and peripherally. A numerical, finite control volume approach based on the SIMPLER algorithm is employed to solve the 3D governing equations. The numerical analysis is conducted using a marching technique in the axial direction, with axial conduction, axial mass diffusion and viscous dissipation within the fluid assumed negligible. The results include developing secondary flow patterns, developing temperature and axial velocity fields, local Nusselt number, local friction factor, and local apparent friction factor. Comparisons are made with the literature and satisfactory agreement is obtained. It is found that free convection enhances the local heat transfer in some cases by up to 2.5 times relative to predictions that account for forced convection only, and the enhancement increases as the Grashof number increases. Duct geometry and Grashof number strongly influence the heat transfer and pressure drop characteristics.
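For reference, the Grashof number quoted here is conventionally defined with a characteristic length; for ducts the hydraulic diameter D_h is the usual choice (an assumption, since the abstract does not state it):

\[
\mathrm{Gr} = \frac{g\,\beta\,(T_w - T_b)\,D_h^{3}}{\nu^{2}},
\]

with g the gravitational acceleration, β the thermal expansion coefficient, T_w and T_b the wall and bulk temperatures, and ν the kinematic viscosity.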

Keywords: Mixed convection, annular and circular sector ducts, heat transfer enhancement, pressure drop.

725 The Use of the Limit Cycles of Dynamic Systems for Formation of Program Trajectories of Points Feet of the Anthropomorphous Robot

Authors: A. S. Gorobtsov, A. S. Polyanina, A. E. Andreev

Abstract:

The movement of the foot points of an anthropomorphous robot in space occurs along some stable trajectory of a known form. The large number of modifications to the methods of control of biped robots indicates the fundamental complexity of the problem of the stability of the program trajectory and, consequently, of the stability of the control of deviations from this trajectory. Existing gait generators use piecewise interpolation of program trajectories. This leads to jumps in the acceleration at the boundaries of the segments. Another interpolation can be realized using differential equations with fractional derivatives. In this work, an approach to the synthesis of generators of program trajectories is considered. The resulting system of nonlinear differential equations describes a smooth trajectory of movement having rectilinear segments. The method is based on the theory of the asymptotic stability of invariant sets. The stability of such systems in the area of localization of oscillatory processes is investigated. The boundary of the area is a bounded closed surface. In the corresponding subspaces of the oscillatory circuits, the resulting stable limit cycles are curves having rectilinear segments. The solution of the problem is carried out by means of the synthesis of a set of continuous smooth controls with feedback. The necessary geometry of the closed trajectories of movement is obtained due to the introduction of high-order nonlinearities in the control of the stabilization systems. The offered method was used for the generation of the trajectories of movement of the foot points of an anthropomorphous robot. The synthesis of the robot's program movement was carried out by means of the inverse method.
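The generic idea of shaping a stable closed orbit with feedback can be illustrated with the simplest limit-cycle generator, a Hopf-normal-form oscillator; the paper's generators produce cycles with rectilinear segments via higher-order nonlinearities, which this toy omits:

```python
# Sketch: Hopf oscillator as a limit-cycle trajectory generator.
# Radial feedback (mu - r^2) attracts every orbit to the circle r = sqrt(mu).
import numpy as np

def hopf_step(x, y, dt, mu=1.0, omega=2.0):
    r2 = x * x + y * y
    dx = (mu - r2) * x - omega * y
    dy = (mu - r2) * y + omega * x
    return x + dt * dx, y + dt * dy

x, y = 0.1, 0.0
for _ in range(5000):
    x, y = hopf_step(x, y, dt=0.005)
print(round(np.hypot(x, y), 3))   # ~1.0: the state has converged to the limit cycle
```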

Keywords: Control, limit cycles, robot, stability.

724 Detecting Fake News: A Natural Language Processing, Reinforcement Learning, and Blockchain Approach

Authors: Ashly Joseph, Jithu Paulose

Abstract:

In an era where misleading information can quickly circulate on digital news channels, it is crucial to have efficient and trustworthy methods to detect and reduce the impact of misinformation. This research proposes an innovative framework that combines Natural Language Processing (NLP), Reinforcement Learning (RL), and blockchain technologies to precisely detect and minimize the spread of false information in news articles on social media. The framework starts by gathering a variety of news items from different social media sites and preprocessing the data to ensure its quality and uniformity. NLP methods are utilized to extract comprehensive linguistic and semantic characteristics, effectively capturing the subtleties and contextual aspects of the language used. These features are utilized as input for an RL model. This model acquires the most effective tactics for detecting and mitigating the impact of false material by modeling the intricate dynamics of user engagements and incentives on social media platforms. The integration of blockchain technology establishes a decentralized and transparent method for storing and verifying the accuracy of information. The blockchain component guarantees the immutability and safety of verified news records, while encouraging user engagement in detecting and fighting false information through a token-based incentive system. The suggested framework seeks to provide a thorough and resilient solution to the problems presented by misinformation in social media articles.
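A minimal sketch of the NLP front end only, assuming TF-IDF features and a baseline classifier; the RL reward loop and blockchain ledger described in the framework are beyond a few lines and are omitted, and the labels are invented:

```python
# Sketch: text -> TF-IDF features -> baseline fake/credible classifier.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

docs = ["official report confirms figures", "SHOCKING cure doctors hide"]
labels = [0, 1]   # 0 = credible, 1 = fake (toy labels)

model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(docs, labels)
print(model.predict(["doctors hide this shocking trick"]))
```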

Keywords: Natural Language Processing, Reinforcement Learning, Blockchain, fake news mitigation, misinformation detection.

723 Minimizing Grid Reliance: A Power Model Approach for Peak Hour Demand Based on Hybrid Solar Systems

Authors: Almutasim Billa A. Alanazi, Hal S. Tharp

Abstract:

Electrical energy demand has increased due to population growth and the variety of new electrical load technologies. This demand has nearly doubled during peak hours. Consequently, this necessitates the construction of new power plant infrastructure, which is a costly approach due to the expense of construction, future upkeep such as maintenance, and environmental impact. As an alternative approach, most electrical utilities increase the price of electricity during peak hours, encouraging consumers to use less electricity during peak periods under Time-Of-Use programs, which may not be universally suitable for all consumers. Furthermore, in some areas, excessive demand and the lack of supply cause electrical outages, posing considerable stress and challenges to electrical utilities and consumers. However, control systems, artificial intelligence (AI), and renewable energy (RE), when effectively integrated, provide new solutions to mitigate excessive demand during peak hours. This paper presents a power model that reduces reliance on the power grid during peak hours by utilizing a hybrid solar system connected to a residential house, with a power management controller that prioritizes power flow among photovoltaic (PV) production, battery backup, and the utility grid. As a result, dependence on the utility grid during peak hours ranged from 3% to 18%, improving energy stability safely and efficiently for electrical utilities, consumers, and communities, and providing a viable alternative to conventional approaches such as Time-Of-Use programs.
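A minimal sketch of the priority logic described (serve load from PV first, then battery, then grid); the threshold and limit values are illustrative assumptions, not the paper's controller parameters:

```python
# Sketch: PV-first, battery-second, grid-last dispatch for one time step.
def dispatch(load_kw, pv_kw, soc, soc_min=0.2, batt_max_kw=3.0):
    from_pv = min(load_kw, pv_kw)                 # PV has top priority
    residual = load_kw - from_pv
    from_batt = min(residual, batt_max_kw) if soc > soc_min else 0.0
    from_grid = residual - from_batt              # grid only covers the rest
    return {"pv": from_pv, "battery": from_batt, "grid": from_grid}

# Peak-hour example: 4 kW load, 2 kW PV, healthy battery state of charge.
print(dispatch(4.0, 2.0, soc=0.8))   # PV 2 kW, battery 2 kW, grid 0 kW
```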

Keywords: Artificial intelligence, AI, control system, photovoltaic, PV, renewable energy.

722 Fractal Dimension of Breast Cancer Cell Migration in a Wound Healing Assay

Authors: R. Sullivan, T. Holden, G. Tremberger, Jr, E. Cheung, C. Branch, J. Burrero, G. Surpris, S. Quintana, A. Rameau, N. Gadura, H. Yao, R. Subramaniam, P. Schneider, S. A. Rotenberg, P. Marchese, A. Flamholz, D. Lieberman, T. Cheung

Abstract:

Migration in a breast cancer cell wound healing assay was studied using image fractal dimension analysis. The migration of MDA-MB-231 cells (highly motile) in a wound healing assay was captured using time-lapse phase contrast video microscopy and compared to MDA-MB-468 cell migration (moderately motile). The Higuchi fractal method was used to compute the fractal dimension of the image intensity fluctuation along a single-pixel-width region parallel to the wound. The near-wound region fractal dimension was found to decrease three times faster in the MDA-MB-231 cells initially as compared to the less cancerous MDA-MB-468 cells. The inner region fractal dimension was found to be fairly constant in time for both cell types, and suggests a wound influence range of about 15 cell layers. The box-counting fractal dimension method was also used to study regions of interest (ROIs). The MDA-MB-468 ROI area fractal dimension was found to decrease continuously up to 7 hours. The MDA-MB-231 ROI area fractal dimension was found to increase, and this is consistent with the behavior of an HGF-treated MDA-MB-231 wound healing assay posted in the public domain. A fractal dimension based capacity index has been formulated to quantify the invasiveness of the MDA-MB-231 cells in the perpendicular-to-wound direction. Our results suggest that image intensity fluctuation fractal dimension analysis can be used as a tool to quantify cell migration in terms of cancer severity and treatment responses.
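A minimal sketch of Higuchi's fractal dimension for a 1-D signal (here a pixel-row intensity profile would be passed in); this is the standard textbook algorithm, and the kmax value is an illustrative choice:

```python
# Sketch: Higuchi fractal dimension of a 1-D signal.
import numpy as np

def higuchi_fd(x, kmax=8):
    x = np.asarray(x, dtype=float)
    N = len(x)
    L = []
    for k in range(1, kmax + 1):
        Lk = []
        for m in range(k):
            idx = np.arange(m, N, k)               # subsampled series x[m::k]
            lm = np.sum(np.abs(np.diff(x[idx]))) * (N - 1) / ((len(idx) - 1) * k)
            Lk.append(lm / k)                      # normalized curve length
        L.append(np.mean(Lk))
    # FD = -slope of log L(k) versus log k
    slope, _ = np.polyfit(np.log(np.arange(1, kmax + 1)), np.log(L), 1)
    return -slope

rng = np.random.default_rng(3)
print(round(higuchi_fd(rng.standard_normal(1000)), 2))   # ~2 for white noise
```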

Keywords: Higuchi fractal dimension, box-counting fractal dimension, cancer cell migration, wound healing.

721 Simplified Stress Gradient Method for Stress-Intensity Factor Determination

Authors: Jeries J. Abou-Hanna

Abstract:

Several techniques exist for determining stress-intensity factors in linear elastic fracture mechanics analysis. These techniques are based on analytical, numerical, and empirical approaches that have been well documented in the literature and in engineering handbooks. However, not all techniques share the same merit. Overly conservative results, numerical methods that require extensive computational effort, and methods requiring copious user parameters all hinder practicing engineers from efficiently evaluating stress-intensity factors. This paper investigates the prospects of reducing the complexity and the number of required variables in determining stress-intensity factors through the utilization of the stress gradient and a weighting function. The heart of this work resides in the understanding that fracture emanating from stress concentration locations cannot be explained by a single maximum stress value approach, but requires the use of a critical volume in which the crack exists. In order to understand the effectiveness of this technique, this study investigated components of different notch geometries and varying levels of stress gradient. Two forms of weighting functions were employed to determine stress-intensity factors, and the results were compared to exact analytical methods. The results indicated that the "exponential" weighting function was superior to the "absolute" weighting function. An error band of ±10% was met for cases ranging from the steep stress gradient of a sharp v-notch to the less severe stress transitions of a large circular notch. The incorporation of the proposed method has been shown to be a worthwhile consideration.
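The weighting-function idea parallels the classical weight function method, in which the stress-intensity factor follows from integrating the uncracked-body stress gradient σ(x) against a weight m(x, a) over the crack length a; the paper's specific "exponential" and "absolute" weightings are not reproduced here:

\[
K = \int_{0}^{a} \sigma(x)\, m(x, a)\, \mathrm{d}x .
\]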

Keywords: Fracture mechanics, finite element method, stress intensity factor, stress gradient.

720 Finite Element Evaluation of the Effect of Regular Cavities on the Sheet Metal Element of the Steel Plate Shear Wall

Authors: Seyyed Abbas Mojtabavi, Mojtaba Fatzaneh Moghadam, Masoud Mahdavi

Abstract:

The steel plate shear wall is one of the most common and widely used energy dissipation systems in structures, employed today as a damping system due to the increase in the construction of metal structures. In the present study, a steel plate shear wall with dimensions of 5×3 m and a thickness of 0.024 m was modeled, with 2 floors of total height from the base level, using the finite element method in Abaqus software. The loading is applied as a concentrated load at the upper point of the shear wall on the second floor, using a buckling (Buckle) step. The mesh in the model is applied in the two directions of the length and width of the shear wall, with sizes equal to 0.02 and 0.033, respectively, and the mesh in the models is of sweep type. It was found that, for the steel plate shear wall with cavities (CSPSW) compared to the SPSW model, S (Mises), Smax (In-Plane Principal), Smax (In-Plane Principal-ABS) and Smax (Min Principal) increased by 53%, 70%, 68% and 43%, respectively. The presence of cavities has led to an increase in the estimated stresses, but it has also caused the critical stresses and critical deformations to be moved away from the inner surface of the shear wall and transferred to the desired sections (the regular cavities). This can be suggested as a solution in seismic design and structural improvement, transferring possible damage during an earthquake or storm to a desired, pre-designed location in the structure.

Keywords: Steel plate shear wall, Abaqus software, finite element method, boundary element, seismic structural improvement, von Mises stress.

719 Issues in Spectral Source Separation Techniques for Plant-wide Oscillation Detection and Diagnosis

Authors: A.K. Tangirala, S. Babji

Abstract:

In the last few years, three multivariate spectral analysis techniques, namely Principal Component Analysis (PCA), Independent Component Analysis (ICA) and Non-negative Matrix Factorization (NMF), have emerged as effective tools for oscillation detection and isolation. While the first method is used to determine the number of oscillatory sources, the latter two methods are used to identify source signatures by formulating the detection problem as a source identification problem in the spectral domain. In this paper, we present a critical drawback of the underlying linear (mixing) model which strongly limits the ability of the associated source separation methods to determine the number of sources and/or identify the physical source signatures. It is shown that the assumed mixing model is only valid if each unit of the process gives equal weighting (all-pass filter) to all oscillatory components in its inputs. This is in contrast to the fact that each unit, in general, acts as a filter with a non-uniform frequency response. Thus, the model can only facilitate correct identification of a source with a single frequency component, which is again unrealistic. To overcome this deficiency, an iterative post-processing algorithm that correctly identifies the physical source(s) is developed. An additional issue with the existing methods is that they lack a procedure to pre-screen non-oscillatory/noisy measurements which obscure the identification of oscillatory sources. In this regard, a pre-screening procedure is prescribed, based on the notion of a sparseness index, to eliminate the noisy and non-oscillatory measurements from the data set used for analysis.
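The abstract does not define its sparseness index; one common choice that could drive such a pre-screening step is Hoyer's sparseness measure, so treat the following as an assumption rather than the paper's exact index:

```python
# Sketch: Hoyer (2004) sparseness of a spectrum, in [0, 1];
# a sharply peaked (oscillatory) spectrum scores near 1.
import numpy as np

def hoyer_sparseness(v):
    v = np.abs(np.asarray(v, dtype=float))
    n = v.size
    return (np.sqrt(n) - v.sum() / np.linalg.norm(v)) / (np.sqrt(n) - 1)

spectrum_oscillatory = np.array([0, 0, 9.5, 0, 0, 0.2, 0, 0])   # sharp peak
spectrum_noisy       = np.random.default_rng(4).random(8)       # flat-ish
print(hoyer_sparseness(spectrum_oscillatory))   # close to 1 -> keep
print(hoyer_sparseness(spectrum_noisy))         # lower -> screen out
```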

Keywords: non-negative matrix factorization, PCA, source separation, plant-wide diagnosis

718 Development of Electrospun Membranes with Defined Collagen and Polyethylene Oxide Architectures Reinforced with Medium- and High-Intensity Statins

Authors: S. Jaramillo, Y. Montoya, W. Agudelo, J. Bustamante

Abstract:

Cardiovascular diseases (CVD) are disorders of the heart and blood vessels; among these are pathologies such as coronary or peripheral heart disease, caused by the narrowing of the vessel wall (atherosclerosis), which is related to the accumulation of Low-Density Lipoproteins (LDL) in the arterial walls, leading to a progressive reduction of the lumen of the vessel and alterations in blood perfusion. Currently, the main therapeutic strategy for this type of alteration is drug treatment with statins, which inhibit the enzyme 3-hydroxy-3-methyl-glutaryl-CoA reductase (HMG-CoA reductase), responsible for modulating the rate of production of cholesterol and other isoprenoids in the mevalonate pathway. This inhibition induces the expression of LDL receptors in the liver, increasing their number on the surface of liver cells and reducing the plasma concentration of cholesterol. On the other hand, when a blood vessel presents stenosis, a surgical procedure with vascular implants is indicated; these are used to restore circulation in the arterial or venous bed. Among the materials used for the development of vascular implants are Dacron® and Teflon®, which re-waterproof the circulatory circuit but, due to their low biocompatibility, do not have the ability to promote remodeling and tissue regeneration processes. Based on this, the present research proposes the development of an electrospun membrane of hydrolyzed collagen and polyethylene oxide reinforced with medium- and high-intensity statins, so that, in future research, its microarchitecture can favor tissue remodeling processes.

Keywords: atherosclerosis, medium and high-intensity statins, microarchitecture, electrospun membrane

717 Windphil Poetic in Architecture: Energy Efficient Strategies in Modern Buildings of Iran

Authors: Sepideh Samadzadehyazdi, Mohammad Javad Khalili, Sarvenaz Samadzadehyazdi, Mohammad Javad Mahdavinejad

Abstract:

The term ‘Windphil Architecture’ refers to buildings that facilitate natural ventilation through architectural elements. Natural ventilation uses the natural forces of wind pressure and the stack effect to direct the movement of air through buildings. It is increasingly being used in contemporary buildings to minimize the consumption of non-renewable energy, and it is an effective way to improve indoor air quality. The main objective of this paper is to identify the strategies of using natural ventilation in Iranian modern buildings. In this regard, the research method is ‘descriptive-analytical’, based on comparative techniques. To simulate wind flow in the interior spaces of the case studies, FLUENT software has been used. The research findings show that it is possible to use natural ventilation to create a thermally comfortable indoor environment. The natural ventilation strategies can be classified into two groups: environmental characteristics, such as public space structure, and architectural characteristics, including building form and orientation, openings, central courtyards, wind catchers, roofs, wall wings, semi-open spaces and the heat capacity of materials. In the modern buildings of Iran investigated, innovative elements like wind catchers and wall wings are used less than in traditional architecture. Instead, passive ventilation strategies have been considered more in the building design, as with the roof structure and openings.

Keywords: Natural ventilation strategies, wind catchers, wind flow, Iranian modern buildings.

716 Efficient Design Optimization of Multi-State Flow Network for Multiple Commodities

Authors: Yu-Cheng Chou, Po Ting Lin

Abstract:

The network for delivering commodities has been an important design problem in our daily lives and in many transportation applications. The delivery performance is evaluated based on the system reliability of delivering commodities from a source node to a sink node in the network. The system reliability is thus maximized to find the optimal routing. However, the design problem is not simple because (1) each path segment has randomly distributed attributes; (2) there are multiple commodities that consume various path capacities; (3) the optimal routing must successfully complete the delivery process within the allowable time constraints. In this paper, we focus on the design optimization of the Multi-State Flow Network (MSFN) for multiple commodities. We propose an efficient approach to evaluate the system reliability in the MSFN with respect to randomly distributed path attributes and to find the optimal routing subject to the allowable time constraints. The delivery rates, also known as delivery currents, of the path segments are evaluated, and the minimal-current arcs are eliminated to reduce the complexity of the MSFN. Accordingly, the correct optimal routing is found and the worst-case reliability is evaluated. It has been shown that the reliability of the optimal routing is at least as high as the worst-case measure. Two benchmark examples are utilized to demonstrate the proposed method. The comparisons between the original and the reduced networks show that the proposed method is very efficient.
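The flavor of the reliability evaluation can be sketched with a Monte Carlo estimate of one route delivering a demand within a time limit when each arc's capacity is multi-state; the capacity distributions and the simple time model below are illustrative assumptions, not the paper's formulation:

```python
# Sketch: Monte Carlo route reliability for a multi-state flow route.
import numpy as np

def route_reliability(capacities, demand, t_max, lead_time, n=20_000, seed=0):
    """capacities: list of (levels, probs) per arc on the route."""
    rng = np.random.default_rng(seed)
    ok = 0
    for _ in range(n):
        cap = min(rng.choice(lv, p=pr) for lv, pr in capacities)  # bottleneck
        # Assumed time model: fixed lead time plus demand at the bottleneck rate.
        if cap > 0 and lead_time + demand / cap <= t_max:
            ok += 1
    return ok / n

arcs = [([0, 1, 2], [0.10, 0.30, 0.60]),    # (capacity levels, probabilities)
        ([0, 2, 3], [0.05, 0.25, 0.70])]
print(route_reliability(arcs, demand=2, t_max=4, lead_time=3))
```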

Keywords: Multiple Commodities, Multi-State Flow Network (MSFN), Time Constraints, Worst-Case Reliability (WCR)
