Search results for: capacity spectrum method
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 9333

423 A Preliminary Analysis of Sustainable Development in the Belgrade Metropolitan Area

Authors: S. Zeković, M. Vujošević, T. Maričić

Abstract:

The paper provides a comprehensive analysis of sustainable development in the Belgrade Metropolitan Region - BMA (NUTS 2 level), preliminarily evaluating three chosen components: 1) economic growth and developmental changes; 2) competitiveness; and 3) territorial concentration and industrial specialization. First, we identified the main results of developmental changes and economic growth by applying shift-share analysis at the metropolitan level. Second, the empirical evaluation of competitiveness in the BMA is based on the analysis of absolute and relative values of eight indicators by the spider method. The paper shows that consideration of the national share, industrial mix and metropolitan/regional share in the total shift-share of the BMA, as well as the economic/functional specialization of the BMA, indicates a very strong process of deindustrialization. The allocative component of the BMA's economic growth has a positive value, reflecting above-average sector productivity compared to the national average. Third, the important positive role of the metropolitan/regional component in the decomposition of the BMA's economic growth is highlighted as one of the key results. Finally, the comparative analysis of industrial territorial concentration in the BMA in relation to Serbia is based on the location quotient (LQ), or Balassa index, as a valid measure. The results indicate absolute and relative differences in the decrease of industrial territorial concentration as well as inefficient utilization of territorial capital in the BMA. The results are important for increasing regional competitiveness and improving territorial distribution in this area, as well as for the improvement of sustainable metropolitan and sector policies, planning and governance at this level.
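
As an illustration of the location quotient (Balassa index) used above, the following minimal Python sketch computes LQ values for a few industries; the industry names and employment figures are hypothetical and serve only to show the calculation.

    # Minimal sketch of a location quotient (Balassa index) calculation.
    # Employment figures and industry names are hypothetical.
    regional = {"manufacturing": 120_000, "services": 480_000, "construction": 60_000}
    national = {"manufacturing": 1_500_000, "services": 3_600_000, "construction": 500_000}

    regional_total = sum(regional.values())
    national_total = sum(national.values())

    for industry in regional:
        # LQ = (regional share of industry) / (national share of industry)
        lq = (regional[industry] / regional_total) / (national[industry] / national_total)
        concentration = "concentrated" if lq > 1 else "under-represented"
        print(f"{industry}: LQ = {lq:.2f} ({concentration} relative to the national economy)")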

Keywords: Belgrade Metropolitan Area (BMA), Comprehensive analysis/evaluation, economic growth and competitiveness, sustainable development.

422 Pre and Post IFRS Loss Avoidance in France and the United Kingdom

Authors: T. Miková

Abstract:

This paper analyzes the effect of a single uniform accounting rule on reporting quality by investigating the influence of IFRS on earnings management. This paper examines whether earnings management is reduced after IFRS adoption through the use of “loss avoidance thresholds”, a method that has been verified in earlier studies. This paper concentrates on two European countries: one that represents the continental code law tradition with weak protection of investors (France) and one that represents the Anglo-American common law tradition, which typically implies a strong enforcement system (the United Kingdom).

The research investigates a sample of 526 companies (6,822 firm-year observations) during the years 2000-2013. The results are different for the two jurisdictions. This study demonstrates that a single set of accounting standards contributes to better reporting quality and reduces the pervasiveness of earnings management in France. In contrast, there is no evidence that a reduction in earnings management followed the implementation of IFRS in the United Kingdom. Given that IFRS benefit France but not the United Kingdom, other political and economic factors, such as the legal system or capital market strength, must play a significant role in influencing the comparability and transparency of cross-border companies' financial statements. Overall, the results suggest that IFRS moderately contribute to the accounting quality of reported financial statements and bring benefits for stakeholders, though the role played by other economic factors cannot be discounted.
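
A common way to operationalize the loss avoidance threshold idea described above is to compare the frequency of small reported profits with the frequency of small reported losses around zero; the sketch below, using simulated scaled earnings and an assumed interval width, illustrates that idea rather than the paper's exact specification.

    # Hypothetical illustration of a loss-avoidance (small profit vs. small loss) ratio.
    import random

    random.seed(1)
    # Scaled earnings (e.g., net income / total assets); values are simulated, not real data.
    earnings = [random.gauss(0.01, 0.05) for _ in range(6822)]

    width = 0.005  # assumed width of the interval just above/below zero
    small_profits = sum(1 for e in earnings if 0 <= e < width)
    small_losses = sum(1 for e in earnings if -width <= e < 0)

    # A ratio well above 1 is often read as evidence of loss avoidance.
    print("small profits:", small_profits, "small losses:", small_losses,
          "ratio:", round(small_profits / max(small_losses, 1), 2))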

Keywords: Accounting Standards, Earnings Management, International Financial Reporting Standards, Loss Avoidance, Reporting Quality.

421 Numerical Simulation of Free Surface Water Wave for the Flow around NACA 0012 Hydrofoil and Wigley Hull Using VOF Method

Authors: Saadia Adjali, Omar Imine, Mohammed Aounallah, Mustapha Belkadi

Abstract:

Steady three-dimensional and two-dimensional free surface waves generated by moving bodies are presented. The flow problem to be simulated is rich in complexity and poses many modeling challenges because of the existence of breaking waves around the ship hull and because of the interaction of the two-phase flow with the turbulent boundary layer. The results of several simulations are reported. The first study was performed for a NACA 0012 hydrofoil with different meshes; this section is analyzed at h/c = 1.0345 in 2D. In the second simulation, a mathematically defined Wigley hull form is used to investigate the application of a commercial CFD code in predicting the total resistance and its components from tangential and normal forces on the wetted hull surface. The computed resistance and wave profiles are used to estimate the coefficient of total resistance for the Wigley hull advancing in calm water under steady conditions. The commercial CFD software FLUENT version 12 is used for the computations in the present study. The computational grid is generated using the code GAMBIT 2.3.26. The k-ω SST shear stress transport model is used for turbulence modeling, and the volume of fluid technique is employed to simulate the free-surface motion. The second-order upwind scheme is used for discretizing the convection terms in the momentum transport equations, and the modified HRIC scheme is used for the VOF discretization. The results obtained compare well with the experimental data.

Keywords: Free surface flows, Breaking waves, Boundary layer, Wigley hull, Volume of fluid.

419 Software Product Quality Evaluation Model with Multiple Criteria Decision Making Analysis

Authors: C. Ardil

Abstract:

This paper presents a software product quality evaluation model based on the ISO/IEC 25010 quality model. The evaluation characteristics and sub characteristics were identified from the ISO/IEC 25010 quality model. The multidimensional structure of the quality model is based on characteristics such as functional suitability, performance efficiency, compatibility, usability, reliability, security, maintainability, and portability, and their associated sub characteristics. Random numbers are generated to establish the decision maker's importance weights for each sub characteristic. Random numbers are also generated to establish the decision matrix of the decision maker's final scores for each software product against each sub characteristic. Thus, objective criteria importance weights and index scores for the datasets were obtained from the random numbers. In the proposed model, five different software product quality evaluation datasets under three different weight vectors were applied to the multiple criteria decision analysis method preference analysis for reference ideal solution (PARIS), together with a comparison and a sensitivity analysis procedure. This study contributes to a better understanding of the application of MCDMA methods and the ISO/IEC 25010 quality model guidelines in the software product quality evaluation process.
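
The exact PARIS formulation is not given in the abstract, so the sketch below only illustrates the data setup it describes: randomly generated importance weights and a random decision matrix over the ISO/IEC 25010 sub characteristics, ranked here with a plain weighted-sum score as a stand-in aggregation.

    # Sketch of the randomly generated weights/decision matrix described above.
    # The weighted-sum ranking is a stand-in; it is not the PARIS method itself.
    import numpy as np

    rng = np.random.default_rng(25010)
    n_subcharacteristics, n_products = 8, 5

    weights = rng.random(n_subcharacteristics)
    weights /= weights.sum()                                 # normalize importance weights
    scores = rng.random((n_products, n_subcharacteristics))  # decision matrix

    aggregate = scores @ weights                 # simple weighted-sum aggregation
    ranking = np.argsort(-aggregate)             # best product first
    print("aggregate scores:", np.round(aggregate, 3))
    print("ranking (product indices):", ranking)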

Keywords: ISO/IEC 25010 quality model, multiple criteria decision making, multiple criteria decision making analysis, MCDMA, PARIS, software product quality evaluation model, software product quality evaluation, software evaluation, software selection, software.

418 Discovery of Quantified Hierarchical Production Rules from Large Set of Discovered Rules

Authors: Tamanna Siddiqui, M. Afshar Alam

Abstract:

Automated discovery of rules is, due to its applicability, one of the most fundamental and important methods in KDD, and it has been an active research area in the recent past. Hierarchical representation allows us to easily manage the complexity of knowledge, to view the knowledge at different levels of detail, and to focus our attention on the interesting aspects only. One such efficient and easy to understand system is the Hierarchical Production Rule (HPR) system. An HPR, a standard production rule augmented with generality and specificity information, is of the following form: Decision If <condition> Generality <generality information> Specificity <specificity information>. HPR systems are capable of handling taxonomical structures inherent in knowledge about the real world. This paper focuses on the issue of mining quantified rules with a crisp hierarchical structure using a Genetic Programming (GP) approach to knowledge discovery. The post-processing scheme presented in this work uses quantified production rules as the initial individuals of GP and discovers the hierarchical structure. In the proposed approach, rules are quantified using Dempster-Shafer theory. Suitable genetic operators are proposed for the suggested encoding. Based on the Subsumption Matrix (SM), an appropriate fitness function is suggested. Finally, Quantified Hierarchical Production Rules (HPRs) are generated from the discovered hierarchy using Dempster-Shafer theory. Experimental results are presented to demonstrate the performance of the proposed algorithm.
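
Since the quantification step relies on Dempster-Shafer theory, the following small sketch shows Dempster's rule of combination for two mass functions over a toy frame of discernment; the hypotheses and mass values are hypothetical and unrelated to the paper's rule set.

    # Dempster's rule of combination for two mass functions (toy example).
    from itertools import product

    def combine(m1, m2):
        # Focal elements are frozensets of hypotheses; masses sum to 1 in each input.
        combined, conflict = {}, 0.0
        for (a, w1), (b, w2) in product(m1.items(), m2.items()):
            inter = a & b
            if inter:
                combined[inter] = combined.get(inter, 0.0) + w1 * w2
            else:
                conflict += w1 * w2
        # Normalize by the non-conflicting mass.
        return {k: v / (1.0 - conflict) for k, v in combined.items()}

    # Hypothetical masses assigned to rule hypotheses h1, h2.
    m1 = {frozenset({"h1"}): 0.6, frozenset({"h1", "h2"}): 0.4}
    m2 = {frozenset({"h1"}): 0.5, frozenset({"h2"}): 0.3, frozenset({"h1", "h2"}): 0.2}
    print(combine(m1, m2))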

Keywords: Knowledge discovery in databases, quantification, Dempster-Shafer theory, genetic programming, hierarchy, subsumption matrix.

417 Emotional Intelligence as Predictor of Academic Success among Third Year College Students of PIT

Authors: Sonia Arradaza-Pajaron

Abstract:

College students are expected to engage in on-the-job training (OJT) or an internship to complete a course requirement prior to graduation. In this scenario, they are exposed to the real world of work outside their training institution. This study was conducted to find out their readiness, both emotionally and academically. A descriptive-correlational research design was employed, and a random sampling technique was utilized among 265 randomly selected third year college students of PIT, SY 2014-15. A questionnaire on emotional intelligence (bearing the four components, namely emotional literacy, emotional quotient competence, values and beliefs, and emotional quotient outcomes) was fielded to the respondents, and the general weighted average (GWA) was extracted from the school's automated records. Data collected were statistically treated using percentage, weighted mean and Pearson r for correlation.

Results revealed that the respondents' emotional intelligence level is moderately high while their academic performance is good. A highly significant relationship was found between the EI component emotional literacy and their academic performance, while a significant relationship was found between emotional quotient outcomes and their academic performance. Since EI significantly influences academic performance, there is a possibility that their OJT performance can also be affected, either positively or negatively. Thus, EI can be considered a predictor of their academic and academic-related performance. Based on the results, it is recommended that the institution look into embedding emotional intelligence (especially the emotional literacy and emotional quotient outcomes of the students) as part of the college curriculum. This can be done if the school has an effective emotional intelligence framework or program implemented by qualified and competent teachers and guidance counselors in the different colleges.

Keywords: Academic performance, emotional intelligence, emotional literacy, emotional quotient competence, emotional quotient outcomes, values and beliefs.

416 Occurrence of Foreign Matter in Food: Applied Identification Method - Association of Official Agricultural Chemists (AOAC) and Food and Drug Administration (FDA)

Authors: E. C. Mattos, V. S. M. G. Daros, R. Dal Col, A. L. Nascimento

Abstract:

The aim of this study is to present the results of a retrospective survey on the foreign matter found in foods analyzed at the Adolfo Lutz Institute from July 2001 to July 2015. All the analyses were conducted according to the official methods described by the Association of Official Agricultural Chemists (AOAC) for the micro analytical procedures and by the Food and Drug Administration (FDA) for the macro analytical procedures. The results showed that flours, cereals and derivatives such as baking and pasta products were the types of food in which foreign matter was found most frequently, followed by condiments and teas. Fragments of stored-grain insects, their larvae, nets, excrement, dead mites and rodent excrement were the foreign matter most frequently found in food. In addition, foreign matter that can pose a physical risk to the consumer's health, such as metal, stones, glass and wood, was found, but rarely. Miscellaneous materials (shell, sand, dirt and seeds) were also reported. Many extraneous materials are considered unavoidable since they are inherent to the product itself, such as insect fragments in grains. In contrast, avoidable extraneous materials are less tolerated because they are preventable with Good Manufacturing Practices. The conclusion of this work is that, although most extraneous materials found in food are considered unavoidable, it is necessary to maintain Good Manufacturing Practices throughout food processing, as well as constant surveillance of the production process, in order to avoid accidents that may lead to the occurrence of these extraneous materials in food.

Keywords: Food contamination, extraneous materials, foreign matter, surveillance.

415 Flow Acoustics in Solid-Fluid Structures

Authors: Morten Willatzen, Mikhail Vladimirovich Deryabin

Abstract:

The governing two-dimensional equations of a heterogeneous material composed of a fluid (allowed to flow in the absence of acoustic excitations) and a crystalline piezoelectric cubic solid stacked one-dimensionally (along the z direction) are derived, and special emphasis is given to the discussion of acoustic group velocity for the structure as a function of the wavenumber component perpendicular to the stacking direction (the x axis). Variations in physical parameters with y are neglected, assuming infinite material homogeneity along the y direction, and the flow velocity is assumed to be directed along the x direction. In the first part of the paper, the governing set of differential equations is derived, as well as the imposed boundary conditions. Solutions are provided using Hamilton's equations for the wavenumber vs. frequency as a function of the number and thickness of solid and fluid layers in cases with and without flow (the case of a position-dependent flow in the fluid layer is also considered). In this first part, emphasis is given to the small-frequency case. Boundary conditions at the bottom and top of the full structure are left unspecified in the general solution, but examples are provided for the case where these are subject to rigid-wall conditions (Neumann boundary conditions on the acoustic pressure). In the second part of the paper, emphasis is given to the general case of larger frequencies and wavenumber-frequency bandstructure formation. A wavenumber condition for an arbitrary set of consecutive solid and fluid layers, involving four propagating waves in each solid region, is obtained using the monodromy matrix method. Case examples are finally discussed.
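
As a rough illustration of the monodromy matrix idea mentioned above, the sketch below multiplies standard one-dimensional acoustic transfer matrices for a fluid/solid layer pair and checks the Bloch propagation condition |trace(M)| <= 2; the layer properties are assumed values, and the simple longitudinal-wave matrices stand in for the paper's full piezoelectric solid-fluid formulation.

    # Sketch: monodromy (unit-cell transfer) matrix for a 1D layered acoustic stack.
    # Layer data are hypothetical; only longitudinal plane waves are considered.
    import numpy as np

    def layer_matrix(rho, c, d, omega):
        # Standard pressure/velocity transfer matrix of a homogeneous layer.
        k, Z = omega / c, rho * c
        return np.array([[np.cos(k * d),           1j * Z * np.sin(k * d)],
                         [1j * np.sin(k * d) / Z,  np.cos(k * d)]])

    omega = 2 * np.pi * 1.0e6                    # angular frequency (rad/s), assumed
    layers = [(1000.0, 1480.0, 0.5e-3),          # fluid layer: rho, c, thickness
              (7600.0, 4000.0, 0.5e-3)]          # solid layer (longitudinal wave only)

    M = np.eye(2, dtype=complex)
    for rho, c, d in layers:
        M = layer_matrix(rho, c, d, omega) @ M   # monodromy matrix of one unit cell

    # Bloch condition for a propagating band in the periodic structure.
    print("propagating band" if abs(M.trace().real) <= 2 else "band gap")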

Keywords: Flow, acoustics, solid-fluid structures, periodicity.

414 The Effect of CPU Location in Total Immersion of Microelectronics

Authors: A. Almaneea, N. Kapur, J. L. Summers, H. M. Thompson

Abstract:

Meeting the growth in demand for digital services such as social media, telecommunications, and business and cloud services requires large scale data centres, which has led to an increase in their end-use energy demand. Generally, over 30% of data centre power is consumed by the necessary cooling overhead. Thus energy use can be reduced by improving the cooling efficiency. Air and liquid can both be used as cooling media for the data centre. Traditional data centre cooling systems use air; however, liquid is recognised as a promising method that can handle the more densely packed data centres. Liquid cooling can be classified into three methods: rack heat exchanger, on-chip heat exchanger and full immersion of the microelectronics. This study quantifies the improvements in heat transfer specifically for the case of immersed microelectronics by varying the CPU and heat sink location. Immersion of the server is achieved by filling the gap between the microelectronics and a water jacket with a dielectric liquid, which convects the heat from the CPU to the water jacket on the opposite side. Heat transfer is governed by two physical mechanisms: natural convection for the fixed enclosure filled with dielectric liquid and forced convection for the water that is pumped through the water jacket. The model in this study is validated with published numerical and experimental work and shows good agreement with previous work. The results show that the heat transfer performance and Nusselt number (Nu) are improved by 89% by placing the CPU and heat sink on the bottom of the microelectronics enclosure.

Keywords: CPU location, data centre cooling, heat sink in enclosures, Immersed microelectronics, turbulent natural convection in enclosures.

413 Participation in Co-Curricular Activities of Undergraduate Nursing Students Attending the Leadership Promoting Program Based on Self-Directed Learning Approach

Authors: Porntipa Taksin, Jutamas Wongchan, Amornrat Karamee

Abstract:

From the researchers' experience in student affairs in 2011-2013, we found that few undergraduate nursing students became student association members who participated in co-curricular activities, and that they had limited self-directed learning and leadership skills. We therefore developed "A Leadership Promoting Program" using the self-directed learning concept. The program included six activities: Breaking the ice, Decoding time, Creative SMO, Know me-Understand you, Positive thinking, and Creative dialogue. Four aspects of these activities were examined: decision-making, implementation, benefits, and evaluation. A one-group, pretest-posttest quasi-experimental study was designed to examine the effects of the program on participation in co-curricular activities. Thirty-five students participated in the program. All were members of the board of the undergraduate nursing student association of Boromarajonani College of Nursing, Chonburi. All subjects completed the questionnaire about participation in the activities at the beginning and at the end of the program. Data were analyzed using descriptive statistics and the dependent t-test. The results showed that the posttest mean scores of all four aspects were significantly higher than the pretest scores (t = 3.30, p < .01). Three aspects had high mean scores: benefits (mean = 3.24, S.D. = 0.83), decision-making (mean = 3.21, S.D. = 0.59), and implementation (mean = 3.06, S.D. = 0.52). However, the score on evaluation falls in the moderate range (mean = 2.68, S.D. = 1.13). Therefore, the leadership promoting program based on the self-directed learning approach could be a method to improve students' participation in co-curricular activities and leadership.
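
The dependent t-test reported above can be reproduced in outline as follows; the pre- and post-program scores here are hypothetical placeholders, not the study's data.

    # Sketch of the dependent (paired) t-test used to compare pre- and post-program scores.
    # Scores below are hypothetical placeholders.
    from scipy import stats

    pretest  = [2.4, 2.8, 2.6, 3.0, 2.5, 2.9, 2.7, 2.6]
    posttest = [3.1, 3.3, 3.0, 3.4, 2.9, 3.5, 3.2, 3.1]

    t_value, p_value = stats.ttest_rel(posttest, pretest)
    print(f"t = {t_value:.2f}, p = {p_value:.4f}")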

Keywords: Participation in co-curricular activities, undergraduate nursing students, leadership promoting program, self-directed learning.

412 Methodology for Quantifying the Meaning of Information in Biological Systems

Authors: Richard L. Summers

Abstract:

The advanced computational analysis of biological systems is becoming increasingly dependent upon an understanding of the information-theoretic structure of the materials, energy and interactive processes that comprise those systems. The stability and survival of these living systems are fundamentally contingent upon their ability to acquire and process the meaning of information concerning the physical state of their biological continuum (biocontinuum). The drive for adaptive system reconciliation of a divergence from steady state within this biocontinuum can be described by an information metric-based formulation of the process for actionable knowledge acquisition that incorporates the axiomatic inference of Kullback-Leibler information minimization driven by survival replicator dynamics. If the mathematical expression of this process is the Lagrangian integrand for any change within the biocontinuum, then it can also be considered an action functional for the living system. In the direct method of Lyapunov, such a summarizing mathematical formulation of global system behavior, based on the driving forces of energy currents and constraints within the system, can serve as a platform for the analysis of stability. As the system evolves in time in response to biocontinuum perturbations, the summarizing function then conveys information about its overall stability. This stability information portends survival and therefore has absolute existential meaning for the living system. The first derivative of the Lyapunov energy information function will have a negative trajectory toward a system steady state if the driving force is dissipating. By contrast, system instability leading to system dissolution will have a positive trajectory. The direction and magnitude of the trajectory vector then serve as a quantifiable signature of the meaning associated with the living system's stability information, homeostasis and survival potential.
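
In symbols, the argument above rests on two familiar relations: the Kullback-Leibler divergence that the adaptive process is said to minimize, and the sign of the time derivative of a Lyapunov function, which separates stability from dissolution. These are standard textbook forms, not equations quoted from the paper.

    D_{\mathrm{KL}}(p \,\|\, q) = \int p(x)\,\ln\frac{p(x)}{q(x)}\,dx \;\ge\; 0
    \frac{dV}{dt} < 0 \;\Rightarrow\; \text{trajectory toward steady state (stability)}, \qquad \frac{dV}{dt} > 0 \;\Rightarrow\; \text{instability and dissolution}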

Keywords: Semiotic meaning, Shannon information, Lyapunov, living systems.

411 Detecting Fake News: A Natural Language Processing, Reinforcement Learning, and Blockchain Approach

Authors: Ashly Joseph, Jithu Paulose

Abstract:

In an era where misleading information can quickly circulate on digital news channels, it is crucial to have efficient and trustworthy methods to detect and reduce the impact of misinformation. This research proposes an innovative framework that combines Natural Language Processing (NLP), Reinforcement Learning (RL), and blockchain technologies to precisely detect and minimize the spread of false information in news articles on social media. The framework starts by gathering a variety of news items from different social media sites and preprocessing the data to ensure its quality and uniformity. NLP methods are utilized to extract comprehensive linguistic and semantic characteristics, effectively capturing the subtleties and contextual aspects of the language used. These features are utilized as input for an RL model. This model acquires the most effective tactics for detecting and mitigating the impact of false material by modeling the intricate dynamics of user engagements and incentives on social media platforms. The integration of blockchain technology establishes a decentralized and transparent method for storing and verifying the accuracy of information. The blockchain component guarantees the immutability and safety of verified news records, while encouraging user engagement in detecting and fighting false information through a token-based incentive system. The suggested framework seeks to provide a thorough and resilient solution to the problems presented by misinformation in social media articles.

Keywords: Natural Language Processing, Reinforcement Learning, Blockchain, fake news mitigation, misinformation detection.

410 Issues in Spectral Source Separation Techniques for Plant-wide Oscillation Detection and Diagnosis

Authors: A.K. Tangirala, S. Babji

Abstract:

In the last few years, three multivariate spectral analysis techniques, namely Principal Component Analysis (PCA), Independent Component Analysis (ICA) and Non-negative Matrix Factorization (NMF), have emerged as effective tools for oscillation detection and isolation. While the first method is used in determining the number of oscillatory sources, the latter two methods are used to identify source signatures by formulating the detection problem as a source identification problem in the spectral domain. In this paper, we present a critical drawback of the underlying linear (mixing) model which strongly limits the ability of the associated source separation methods to determine the number of sources and/or identify the physical source signatures. It is shown that the assumed mixing model is only valid if each unit of the process gives equal weighting (all-pass filter) to all oscillatory components in its inputs. This is in contrast to the fact that each unit, in general, acts as a filter with a non-uniform frequency response. Thus, the model can only facilitate correct identification of a source with a single frequency component, which is again unrealistic. To overcome this deficiency, an iterative post-processing algorithm that correctly identifies the physical source(s) is developed. An additional issue with the existing methods is that they lack a procedure to pre-screen non-oscillatory/noisy measurements which obscure the identification of oscillatory sources. In this regard, a pre-screening procedure based on the notion of a sparseness index is prescribed to eliminate the noisy and non-oscillatory measurements from the data set used for analysis.
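
The paper's specific sparseness index is not defined in the abstract; the sketch below uses the widely cited Hoyer sparseness measure applied to a signal's amplitude spectrum as one plausible way to pre-screen non-oscillatory measurements.

    # Sketch: pre-screening a measurement by the sparseness of its amplitude spectrum.
    # The Hoyer measure is used here as an assumed stand-in for the paper's index.
    import numpy as np

    def hoyer_sparseness(x):
        x = np.abs(np.asarray(x, dtype=float))
        n = x.size
        return (np.sqrt(n) - x.sum() / np.linalg.norm(x)) / (np.sqrt(n) - 1)

    t = np.arange(1024) / 100.0
    oscillatory = np.sin(2 * np.pi * 0.8 * t)            # single dominant spectral peak
    noisy = np.random.default_rng(0).standard_normal(t.size)

    for name, sig in [("oscillatory", oscillatory), ("noisy", noisy)]:
        spectrum = np.abs(np.fft.rfft(sig))
        print(name, "spectral sparseness:", round(hoyer_sparseness(spectrum), 3))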

Keywords: non-negative matrix factorization, PCA, source separation, plant-wide diagnosis

409 Analyzing the Shearing-Layer Concept Applied to Urban Green System

Authors: S. Pushkar, O. Verbitsky

Abstract:

Currently, green rating systems are mainly utilized for correctly sizing mechanical and electrical systems, which have short lifetime expectancies. In these systems, passive solar and bio-climatic architecture, which have long lifetime expectancies, are neglected. Urban rating systems consider buildings and services in addition to neighborhoods and public transportation as integral parts of the built environment. The main goal of this study was to develop a more consistent point allocation system for urban building standards by using six different lifetime shearing layers: Site, Structure, Skin, Services, Space, and Stuff, each reflecting distinct environmental damages. This shearing-layer concept was applied to internationally well-known rating systems: Leadership in Energy and Environmental Design (LEED) for Neighborhood Development, BRE Environmental Assessment Method (BREEAM) for Communities and Comprehensive Assessment System for Building Environmental Efficiency (CASBEE) for Urban Development. The results showed that LEED for Neighborhood Development and BREEAM for Communities focused on long-lifetime-expectancy building designs, whereas CASBEE for Urban Development gave equal importance to the Building and Service Layers. Moreover, although this rating system was applied using a building-scale assessment, “Urban Area + Buildings” focuses on a short-lifetime-expectancy system design, neglecting to improve the architectural design by considering bioclimatic and passive solar aspects.

Keywords: Green rating system, passive solar architecture, shearing-layer concept, urban community.

408 The Effects of North Sea Caspian Pattern Index on the Temperature and Precipitation Regime in the Aegean Region of Turkey

Authors: Cenk Sezen, Turgay Partal

Abstract:

The North Sea Caspian Pattern Index (NCP) refers to an atmospheric teleconnection between the North Sea and the North Caspian at the 500 hPa geopotential height level. The aim of this study is to investigate the effects of the NCP on annual and seasonal mean temperatures, as well as annual and seasonal precipitation totals, in the Aegean region of Turkey. The study uses 46 years of data obtained from nine meteorological stations. To determine the relationship between the NCP and the climatic parameters, the Pearson correlation coefficient method was first utilized. According to the results of the analysis, most of the stations in the region have a high negative correlation with the NCPI in all seasons, especially in the winter season, in terms of annual and seasonal mean temperature (statistically significant at the 90% level). Besides, high negative correlation values between the NCPI and precipitation totals are observed during the winter season at most of the stations. Furthermore, the NCPI values were divided into two groups, NCPI(-) and NCPI(+), and the mean temperature and precipitation total values grouped according to the NCPI(-) and NCPI(+) phases were determined annually and seasonally. During NCPI(-), higher mean temperature values are observed in all seasons, particularly in the winter season, compared to the mean temperature values under the effect of NCPI(+). Similarly, during NCPI(-) the winter precipitation totals are higher than those under the effect of NCPI(+); however, in the other seasons no substantial differences were observed between the precipitation totals. As a result of this study, significant evidence is obtained with regard to the influence of the NCP on the temperature and precipitation regime in the Aegean region of Turkey.
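
A minimal sketch of the two analysis steps described above, the Pearson correlation with the NCPI and the comparison of NCPI(-) versus NCPI(+) phases, is shown below with simulated station data.

    # Sketch of the NCPI analysis steps; the index and temperature series are simulated.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(46)
    ncpi = rng.standard_normal(46)                            # 46 winters of NCP index values
    winter_temp = 8.0 - 1.2 * ncpi + rng.normal(0, 0.5, 46)   # simulated mean temperature

    r, p = stats.pearsonr(ncpi, winter_temp)
    print(f"Pearson r = {r:.2f}, p = {p:.4f}")

    # Compare the two phases of the index.
    neg_phase = winter_temp[ncpi < 0]
    pos_phase = winter_temp[ncpi >= 0]
    print("mean T, NCPI(-):", round(neg_phase.mean(), 2),
          " mean T, NCPI(+):", round(pos_phase.mean(), 2))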

Keywords: Aegean Region, North Sea Caspian Pattern, precipitation, temperature.

407 Automatic Removal of Ocular Artifacts using JADE Algorithm and Neural Network

Authors: V Krishnaveni, S Jayaraman, A Gunasekaran, K Ramadoss

Abstract:

The ElectroEncephaloGram (EEG) is useful for clinical diagnosis and biomedical research. EEG signals often contain strong ElectroOculoGram (EOG) artifacts produced by eye movements and eye blinks, especially in EEG recorded from frontal channels. These artifacts obscure the underlying brain activity, making its visual or automated inspection difficult. The goal of ocular artifact removal is to remove ocular artifacts from the recorded EEG, leaving the underlying background signals due to brain activity. In recent times, Independent Component Analysis (ICA) algorithms have demonstrated superior potential in obtaining the least dependent source components. In this paper, the independent components are obtained by using the JADE algorithm (the best separating algorithm) and are classified as either artifact components or neural components. A neural network is used for the classification of the obtained independent components. A neural network requires input features that accurately represent the true character of the input signals so that it can classify the signals based on the key characters that differentiate between various signals. In this work, Auto Regressive (AR) coefficients are used as the input features for classification. Two neural network approaches are used to learn classification rules from EEG data. First, a Polynomial Neural Network (PNN) trained by the GMDH (Group Method of Data Handling) algorithm is used, and second, a feed-forward neural network classifier trained by a standard back-propagation algorithm is used. The results show that JADE-FNN performs better than JADE-PNN.
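
The pipeline described above (source separation, AR feature extraction, neural classification) can be outlined roughly as follows; FastICA and a scikit-learn MLP are used here as stand-ins for the JADE algorithm and the FNN/PNN classifiers, and the EEG data and labels are simulated placeholders.

    # Rough sketch of the EEG pipeline: ICA -> AR features -> neural classifier.
    # FastICA and MLPClassifier are stand-ins for JADE and the FNN/PNN; data are simulated.
    import numpy as np
    from sklearn.decomposition import FastICA
    from sklearn.neural_network import MLPClassifier
    from statsmodels.tsa.ar_model import AutoReg

    rng = np.random.default_rng(0)
    eeg = rng.standard_normal((8, 2000))                  # 8 channels, simulated recording

    components = FastICA(n_components=8, random_state=0).fit_transform(eeg.T).T

    def ar_features(signal, order=6):
        # AR coefficients as the feature vector of one independent component.
        return AutoReg(signal, lags=order).fit().params[1:]

    features = np.array([ar_features(c) for c in components])
    labels = np.array([0, 0, 0, 0, 1, 1, 1, 1])           # dummy labels: 0 = neural, 1 = ocular

    clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
    clf.fit(features, labels)
    print("training accuracy on dummy labels:", clf.score(features, labels))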

Keywords: Auto Regressive (AR) Coefficients, Feed Forward Neural Network (FNN), Joint Approximation Diagonalisation of Eigen matrices (JADE) Algorithm, Polynomial Neural Network (PNN).

406 Assessment of Breeding Soundness by Comparative Radiography and Ultrasonography of Rabbit Testes

Authors: Adenike O. Olatunji-Akioye, Emmanual B Farayola

Abstract:

In order to improve the animal protein recommended daily intake of Nigerians, there is an upsurge in the breeding of hitherto shunned food animals, one of which is the rabbit. Radiography and ultrasonography are tools for diagnosing disease and evaluating the anatomical architecture of parts of the body non-invasively. As the rabbit is becoming a more important food animal, improved breeding requires that the best of the species form a breeding stock, which will usually depend on breeding soundness; this may be evaluated by assessment of the male reproductive organs with these tools. Four intact male rabbits weighing between 1.2 and 1.5 kg were acquired and acclimatized for 2 weeks. Dorsoventral views of the testes were acquired using a digital radiographic machine, and a 5 MHz portable ultrasound scanner was used to acquire images of the testes in longitudinal, sagittal and transverse planes. The radiographic images acquired revealed soft tissue images of the testes in all rabbits. The testes lie in individual scrotal sacs on both sides of the midline at the level of the caudal vertebrae and are thus superimposed by the caudal vertebrae and the caudal limits of the pelvic girdle. The ultrasonographic images revealed mostly homogeneously hypoechogenic testes and a hyperechogenic mediastinum testis. The dorsal and ventral poles of the testes were heterogeneously hypoechogenic and correspond to the epididymis and spermatic cord. The rabbit is unique in its ability to retract the testes, particularly when stressed, so careful and stress-free handling during the procedures is of paramount importance. Imaging of rabbit testes can be safely done using both imaging methods, but ultrasonography is the better method of assessment and evaluation of soundness for breeding.

Keywords: Breeding soundness, rabbits, radiography, ultrasonography.

405 Bio-Surfactant Production and Its Application in Microbial EOR

Authors: A. Rajesh Kanna, G. Suresh Kumar, Sathyanaryana N. Gummadi

Abstract:

There are various sources of energy available worldwide and, among them, crude oil plays a vital role. Oil recovery is achieved using conventional primary and secondary recovery methods. In order to recover the remaining residual oil, technologies like Enhanced Oil Recovery (EOR), also known as tertiary recovery, are utilized. Among EOR methods, microbial enhanced oil recovery (MEOR) is a technique which enables the improvement of oil recovery by injection of bio-surfactant produced by microorganisms. Bio-surfactant can retrieve unrecoverable oil from the cap rock which is held by high capillary force. Bio-surfactant is a surface active agent which can reduce the interfacial tension and reduce the viscosity of oil, and thereby oil can be recovered to the surface as the mobility of the oil is increased. Research in this area has shown promising results; besides, the method is eco-friendly and cost effective compared with other EOR techniques. In our research, on a laboratory scale, we produced bio-surfactant using the strain Pseudomonas putida (MTCC 2467) and injected it into a simple designed sand packed column which resembles an actual petroleum reservoir. The experiment was conducted in order to determine the efficiency of the produced bio-surfactant in oil recovery. The column was made of plastic material, 10 cm in length and 2.5 cm in diameter, and was packed with fine sand material. The sand was saturated with brine initially, followed by oil saturation. Water flooding followed by bio-surfactant injection was done to determine the amount of oil recovered. Further, the injected bio-surfactant volume was varied to check how effectively oil recovery can be achieved. A comparative study was also done by injecting Triton X-100, which is one of the chemical surfactants. Since the bio-surfactant reduced surface and interfacial tension, oil could be easily recovered from the porous sand packed column.

Keywords: Bio-surfactant, Bacteria, Interfacial tension, Sand column.

404 Difference in Psychological Well-Being Based On Comparison of Religions: A Case Study in Pekan District, Pahang, Malaysia

Authors: Amran Hassan, Fatimah Yusooff, Khadijah Alavi

Abstract:

The psychological well-being of a family is a subjective matter for evaluation, all the more when it involves the element of religion, whether Islam, Christianity, Buddhism or Hinduism. Each of these religions emphasises similar values and morals regarding family psychological well-being. This comparative study specifically determines the role of religion in family psychological well-being in Pekan district, Pahang, Malaysia. The study adopts a quantitative and qualitative mixed method design and considers a total of 412 samples of parents and children for the quantitative study, and 21 samples for the qualitative study. The quantitative study uses simple random sampling, whereas the qualitative sampling is purposive. The instrument for the quantitative study is Ryff's Psychological Well-being Scale, and the qualitative study involves the construction of a guidelines protocol for in-depth interviews of respondents. The quantitative data were analyzed with SPSS version 19 using one-way ANOVA, and the qualitative analysis is manual, based on transcripts with specific codes and themes. The results show non-significance, that is, no significant difference among religions in all family psychological well-being constructs in the comparison of Islam, Christianity, Buddhism and Hinduism, thereby accepting the null hypothesis and rejecting the alternative hypothesis. The qualitative study supports the quantitative study, in that all 21 respondents explain that no difference exists in psychological well-being when comparing the teachings of all the religions mentioned. These findings may be used as guidelines for government and non-government bodies in considering religion as an important element in family psychological well-being in the long run.

Keywords: Psychological well-being, comparison of religions, family, Malaysia.

403 Incorporating Lexical-Semantic Knowledge into Convolutional Neural Network Framework for Pediatric Disease Diagnosis

Authors: Xiaocong Liu, Huazhen Wang, Ting He, Xiaozheng Li, Weihan Zhang, Jian Chen

Abstract:

The utilization of electronic medical record (EMR) data to establish disease diagnosis models has become an important research topic in biomedical informatics. Deep learning can automatically extract features from massive data, which has brought about breakthroughs in the study of EMR data. The challenge is that deep learning lacks semantic knowledge, which limits its practicality in medical science. This research proposes a method of incorporating lexical-semantic knowledge from abundant entities into a convolutional neural network (CNN) framework for pediatric disease diagnosis. Firstly, medical terms are vectorized into Lexical Semantic Vectors (LSV), which are concatenated with the embedded word vectors of word2vec to enrich the feature representation. Secondly, the semantic distribution of medical terms serves as a Semantic Decision Guide (SDG) for the optimization of deep learning models. The study evaluates the performance of the LSV-SDG-CNN model on four kinds of Chinese EMR datasets. Additionally, CNN, LSV-CNN, and SDG-CNN are designed as baseline models for comparison. The experimental results show that the LSV-SDG-CNN model outperforms the baseline models on all four kinds of Chinese EMR datasets. The best configuration of the model yielded an F1 score of 86.20%. The results clearly demonstrate that the CNN has been effectively guided and optimized by lexical-semantic knowledge, and that the LSV-SDG-CNN model improves the disease classification accuracy by a clear margin.
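
The feature-construction step described above, concatenating a lexical semantic vector with a word2vec embedding before a CNN, might be sketched as follows; the dimensions, the random stand-in embeddings and the single convolution filter are assumptions for illustration, not the paper's architecture.

    # Sketch of the feature representation: word2vec embedding (+) lexical semantic vector.
    # Dimensions and the random stand-in "embeddings" are assumptions for illustration only.
    import numpy as np

    rng = np.random.default_rng(7)
    emb_dim, lsv_dim, sent_len = 100, 20, 12

    word_vectors = rng.standard_normal((sent_len, emb_dim))      # word2vec-style embeddings
    lexical_semantic = rng.standard_normal((sent_len, lsv_dim))  # LSV per medical term

    # Concatenate along the feature axis to enrich each token's representation.
    tokens = np.concatenate([word_vectors, lexical_semantic], axis=1)
    print("CNN input shape (tokens x features):", tokens.shape)  # (12, 120)

    # One convolutional filter of width 3 over the token axis (minimal CNN building block).
    kernel = rng.standard_normal((3, tokens.shape[1]))
    feature_map = np.array([np.sum(tokens[i:i + 3] * kernel) for i in range(sent_len - 2)])
    print("feature map length:", feature_map.shape[0])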

Keywords: lexical semantics, feature representation, semantic decision, convolutional neural network, electronic medical record

402 Constraint Based Frequent Pattern Mining Technique for Solving GCS Problem

Authors: G. M. Karthik, Ramachandra V. Pujeri

Abstract:

The Generalized Center String (GCS) problem is a generalization of the Common Approximate Substring and Common Substring problems. GCS is known to be NP-hard, and the difficulty of the problem lies in the explosion of potential candidates, since the longest center string, and whether the sequences contain any motifs at all, is not known in advance in any particular biological gene process. GCS can be solved by frequent pattern-mining techniques and is known to be fixed-parameter tractable with respect to the input sequence length and symbol set size. Efficient methods known as Bpriori algorithms can solve GCS with reasonable time/space complexities. The Bpriori 2 and Bpriori 3-2 algorithms have been proposed to find center strings of any length together with the positions of all their instances in the input sequences. In this paper, we reduce the time/space complexity of the Bpriori algorithm by a Constraint Based Frequent Pattern mining (CBFP) technique which integrates the ideas of constraint based mining and FP-tree mining. The CBFP mining technique solves the GCS problem not only for center strings of any length, but also for the positions of all their mutated copies in the input sequences. The CBFP mining technique constructs a TRIE-like FP-tree to represent the mutated copies of center strings of any length, along with constraints to restrain the growth of the consensus tree. The complexity analysis for the CBFP mining technique and the Bpriori algorithm is done based on worst case and average case approaches. The algorithm's correctness is shown by comparison with the Bpriori algorithm using artificial data.

Keywords: Constraint Based Mining, FP tree, Data mining, GCS problem, CBFP mining technique.

401 Role of Fish Hepatic Aldehyde Oxidase in Oxidative in vitro Metabolism of Phenanthridine Heterocyclic Aromatic Compound

Authors: Khaled S. Al Salhen

Abstract:

Aldehyde oxidase is a molybdo-flavoenzyme involved in the oxidation of hundreds of endogenous and exogenous N-heterocyclic compounds and environmental pollutants. Uncharged N-heterocyclic aromatic compounds such as phenanthridine are commonly distributed pollutants in soil, air, sediments, surface water and groundwater, and in animal and plant tissues. Phenanthridine, an uncharged N-heterocyclic aromatic compound, was incubated with partially purified aldehyde oxidase from rainbow trout fish liver. A reversed-phase HPLC method was used to separate the oxidation products from phenanthridine, and the metabolite was identified. 6(5H)-Phenanthridinone was identified as the major metabolite produced by partially purified aldehyde oxidase from fish liver. Kinetic constants for the oxidation reactions were determined spectrophotometrically and showed that this substrate has a good affinity for hepatic aldehyde oxidase (Km = 78 ± 7.6 µM), coupled with a relatively high oxidation rate (0.77 ± 0.03 nmol/min/mg protein). In addition, the kinetic parameters of hepatic fish aldehyde oxidase towards the phenanthridine substrate indicate that in vitro biotransformation by hepatic fish aldehyde oxidase will be a significant pathway. This study confirms that partially purified aldehyde oxidase from fish liver is indeed the enzyme responsible for the in vitro production of the 6(5H)-phenanthridinone metabolite, which is also a major metabolite of mammalian aldehyde oxidase.

Keywords: Aldehyde oxidase, Fish, Phenanthridine, Specificity.

400 Opponent Color and Curvelet Transform Based Image Retrieval System Using Genetic Algorithm

Authors: Yesubai Rubavathi Charles, Ravi Ramraj

Abstract:

In order to retrieve images efficiently from a large database, a unique method integrating color and texture features using genetic programming has been proposed. An opponent color histogram, which provides shadow, shade, and light intensity invariance, is employed in the proposed framework for extracting color features. For texture feature extraction, the fast discrete curvelet transform, which captures more orientation information at different scales, is incorporated to represent curve-like edges. A recent concern in image retrieval is to reduce the semantic gap between the user's preference and low level features. To address this concern, a genetic algorithm combined with relevance feedback is embedded to reduce the semantic gap and retrieve the user's preferred images. Extensive and comparative experiments have been conducted to evaluate the proposed framework for content based image retrieval on two databases, i.e., COIL-100 and Corel-1000. Experimental results clearly show that the proposed system surpasses other existing systems in terms of precision and recall. The proposed work achieves its highest performance with an average precision of 88.2% on COIL-100 and 76.3% on Corel-1000, and an average recall of 69.9% on COIL-100 and 76.3% on Corel-1000. Thus, the experimental results confirm that the proposed content based image retrieval system architecture attains a better solution for image retrieval.
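
The opponent color representation mentioned above has a standard linear form; the sketch below converts RGB pixels to the three opponent channels and builds a simple histogram, with the image and bin count chosen arbitrarily (the curvelet and genetic-algorithm stages are not reproduced here).

    # Sketch: opponent color channels and a per-channel histogram for an RGB image.
    # The image here is random; the bin count is an arbitrary choice.
    import numpy as np

    rng = np.random.default_rng(3)
    rgb = rng.random((64, 64, 3))                 # stand-in image with values in [0, 1]
    R, G, B = rgb[..., 0], rgb[..., 1], rgb[..., 2]

    # Standard opponent color transform.
    O1 = (R - G) / np.sqrt(2)                     # red-green opponency
    O2 = (R + G - 2 * B) / np.sqrt(6)             # yellow-blue opponency
    O3 = (R + G + B) / np.sqrt(3)                 # intensity

    histogram = np.concatenate(
        [np.histogram(ch, bins=16, range=(-1.5, 1.75))[0] for ch in (O1, O2, O3)])
    print("opponent color histogram length:", histogram.size)   # 48-dimensional descriptor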

Keywords: Content based image retrieval, Curvelet transform, Genetic algorithm, Opponent color histogram, Relevance feedback.

399 Fake Account Detection in Twitter Based on Minimum Weighted Feature Set

Authors: Ahmed El Azab, Amira M. Idrees, Mahmoud A. Mahmoud, Hesham Hefny

Abstract:

Social networking sites such as Twitter and Facebook attract over 500 million users across the world; for those users, their social life, and even their practical life, has become interrelated with these sites, and their interaction with social networking has affected their lives forever. Accordingly, social networking sites have become among the main channels responsible for the vast dissemination of different kinds of information during real time events. This popularity of social networking has led to different problems, including the possibility of exposing incorrect information to users through fake accounts, which results in the spread of malicious content during real-life events. This situation can result in huge damage in the real world to society in general, including citizens, business entities, and others. In this paper, we present a classification method for detecting fake accounts on Twitter. The study determines the minimized set of the main factors that influence the detection of fake accounts on Twitter, and the determined factors are then applied using different classification techniques. A comparison of the results of these techniques has been performed, and the most accurate algorithm is selected according to the accuracy of the results. The study has been compared with different recent research in the same area; this comparison has proved the accuracy of the proposed study. We claim that this study can be continuously applied on the Twitter social network to automatically detect fake accounts; moreover, it can be applied on different social network sites, such as Facebook, with minor changes according to the nature of the social network, as discussed in this paper.
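
The comparison of classification techniques over a minimized feature set, as described above, could be set up roughly as follows; the features and generated data are hypothetical, and the three scikit-learn classifiers merely stand in for whichever algorithms the study actually compared.

    # Sketch: comparing classifiers on a small account-feature set (hypothetical data).
    import numpy as np
    from sklearn.model_selection import cross_val_score
    from sklearn.tree import DecisionTreeClassifier
    from sklearn.naive_bayes import GaussianNB
    from sklearn.ensemble import RandomForestClassifier

    rng = np.random.default_rng(42)
    # Assumed minimal features, e.g. followers, friends, tweets, account age (columns).
    X = rng.random((1000, 4))
    y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(0, 0.1, 1000) < 0.7).astype(int)  # fake = 1

    for name, clf in [("decision tree", DecisionTreeClassifier(random_state=0)),
                      ("naive Bayes", GaussianNB()),
                      ("random forest", RandomForestClassifier(random_state=0))]:
        acc = cross_val_score(clf, X, y, cv=5).mean()
        print(f"{name}: mean CV accuracy = {acc:.3f}")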

Keywords: Fake accounts detection, classification algorithms, twitter accounts analysis, features based techniques.

398 Assessing the Effect of the Position of the Cavities on the Inner Plate of the Steel Shear Wall under Time History Dynamic Analysis

Authors: Masoud Mahdavi, Mojtaba Farzaneh Moghadam

Abstract:

The seismic forces caused by waves created in the depths of the earth during an earthquake hit the structure and cause the building to vibrate. Large seismic forces will cause low-strength sections in the structure to suffer extensive surface damage. The use of new steel shear walls in steel structures has increased the strength of the building and its main members (columns) due to the reduction and dissipation of seismic forces during earthquakes. In the present study, an attempt was made to evaluate a type of steel shear wall that has regular holes in the inner plate by building a finite element model with Abaqus software. The steel plate shear wall, measuring 6000 × 3000 mm (one storey) with 3 mm thickness, was modeled with four different hole positions of equal cross-sectional area. The shear wall was dynamically subjected to a 5-second time history by three accelerograms: El Centro, Imperial Valley and Kobe. The results showed that increasing the distance between the geometric center of the hole and the geometric center of the inner plate in the steel shear wall (increasing the RCS index) caused the total maximum acceleration to be transferred from the perimeter of the hole to the horizontal and vertical beams. The results also show that there is no direct relationship between the RCS index and total acceleration in the steel shear wall, and that the RCS index is separate from the peak ground acceleration value of the earthquake.

Keywords: Hollow Steel plate shear wall, time history analysis, finite element method, Abaqus Software.

397 Analysis and Control of Camera Type Weft Straightener

Authors: Jae-Yong Lee, Gyu-Hyun Bae, Yun-Soo Chung, Dae-Sub Kim, Jae-Sung Bae

Abstract:

In general, fabric is heat-treated using a stenter machine in order to dry it and fix its shape. It is important to shape the fabric before the heat treatment because it is difficult to revert once the fabric is formed. To produce a product of the right shape, a camera type weft straightener has recently been applied to capture and process fabric images quickly. It is more powerful than a photo-sensor in determining the final textile quality. Positioned in front of a stenter machine, the weft straightener helps to spread the fabric evenly and keep the angle between warp and weft constantly at a right angle by handling skew and bow rollers. To manage this tricky procedure, a structural analysis should be carried out in advance, based on which its control technology can be drawn up. The structural analysis figures out the specific contact/slippage characteristics between fabric and roller. We have already examined the applicability of the camera type weft straightener to plain weave fabric and found its possibility and the specific working conditions of the machine and rollers. In this research, we aimed to explore another applicability of the camera type weft straightener, namely whether it can be used for special fabrics. To find the optimum condition, we increased the number of rollers. The analysis is done with ANSYS software using the Finite Element Analysis method, and the control function is demonstrated by experiment. In conclusion, the structural analysis of the weft straightener is done to identify the specific characteristics between roller and fabrics. The control of the skew and bow rollers is done to decrease the error in the angle between warp and weft. Finally, it is proved that the camera type straightener can also be used for special fabrics.

Keywords: Camera type weft straightener, structure analysis, control, skew and bow roller.

396 Development and Optimization of Colon Targeted Drug Delivery System of Ayurvedic Churna Formulation Using Eudragit L100 and Ethyl Cellulose as Coating Material

Authors: Anil Bhandari, Imran Khan Pathan, Peeyush K. Sharma, Rakesh K. Patel, Suresh Purohit

Abstract:

The purpose of this study was to prepare time and pH dependent release tablets of an Ayurvedic Churna formulation and evaluate their advantages as a colon targeted drug delivery system. The Vidangadi Churna, which contains embelin and gallic acid, was selected for this study. Embelin is used as a therapeutic agent in helminthiasis. Embelin is insoluble in water and unstable in the gastric environment, so it was formulated in time and pH dependent tablets coated with a combination of two polymers, Eudragit L100 and ethyl cellulose. Core tablets of 150 mg containing dried extract and lactose were prepared by the wet granulation method. Compression coating with a polymer amount of 150 mg for both the upper and lower coating layers was investigated. The results showed that no release was found in 0.1 N HCl or pH 6.8 phosphate buffer for the initial 5 hours, and about 98.97% of the drug was released in pH 7.4 phosphate buffer within a total of 17 hours. The in vitro release profile of the drug from the formulation could best be expressed by first order kinetics, which showed the highest linearity (r² = 0.9943). The results of the present study have demonstrated that the time and pH dependent tablet system is a promising vehicle for preventing rapid hydrolysis in the gastric environment and improving the oral bioavailability of embelin and gallic acid for the treatment of helminthiasis.
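
The first-order fit reported above corresponds to the standard release model Q(t) = Q_inf (1 - e^(-kt)); the sketch below fits that form to hypothetical cumulative-release data (measured from the end of the 5-hour lag phase) with SciPy's curve_fit.

    # Sketch: fitting a first-order release model to hypothetical dissolution data.
    import numpy as np
    from scipy.optimize import curve_fit

    def first_order(t, q_inf, k):
        # Cumulative % released at time t (hours after the lag phase) under first-order kinetics.
        return q_inf * (1.0 - np.exp(-k * t))

    time_h = np.array([2.0, 4.0, 6.0, 8.0, 10.0, 12.0])        # hours after the 5 h lag
    released = np.array([50.0, 75.0, 88.0, 94.0, 97.0, 98.97])  # hypothetical cumulative %

    params, _ = curve_fit(first_order, time_h, released, p0=(100.0, 0.2))
    q_inf, k = params
    print(f"Q_inf = {q_inf:.1f} %, k = {k:.3f} 1/h")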

Keywords: Embelin, Gallic acid, Vidangadi Churna, Colon targeted drug delivery.

395 A Stochastic Diffusion Process Based on the Two-Parameters Weibull Density Function

Authors: Meriem Bahij, Ahmed Nafidi, Boujemâa Achchab, Sílvio M. A. Gama, José A. O. Matos

Abstract:

Stochastic modeling concerns the use of probability to model real-world situations in which uncertainty is present. Therefore, the purpose of stochastic modeling is to estimate the probability of outcomes within a forecast, i.e. to be able to predict what conditions or decisions might occur under different situations. In the present study, we present a model of a stochastic diffusion process based on the bi-Weibull distribution function (its trend is proportional to the bi-Weibull probability density function). In general, the Weibull distribution has the ability to assume the characteristics of many different types of distributions. This has made it very popular among engineers and quality practitioners, who have considered it the most commonly used distribution for studying problems such as modeling reliability data, accelerated life testing, and maintainability modeling and analysis. In this work, we start by obtaining the probabilistic characteristics of this model, such as the explicit expression of the process, its trends, and its distribution, by transforming the diffusion process into a Wiener process as shown in the Ricciardi theorem. Then, we develop the statistical inference of this model using the maximum likelihood methodology. Finally, we analyse with simulated data the computational problems associated with the parameters, an issue of great importance for the application to real data, with the use of convergence analysis methods. Overall, the use of a stochastic model reflects only a pragmatic decision on the part of the modeler. According to the data that are available and the universe of models known to the modeler, this model represents the best currently available description of the phenomenon under consideration.
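
As a rough numerical illustration of a diffusion process whose trend is tied to a two-parameter Weibull density, the sketch below integrates a stochastic differential equation with an Euler-Maruyama scheme; the particular drift and diffusion coefficients are assumed forms, not the exact model of the paper.

    # Euler-Maruyama sketch of a diffusion whose drift follows a 2-parameter Weibull pdf.
    # The specific drift/diffusion forms are assumptions, not the paper's exact model.
    import numpy as np

    def weibull_pdf(t, shape, scale):
        return (shape / scale) * (t / scale) ** (shape - 1) * np.exp(-(t / scale) ** shape)

    shape, scale, sigma = 1.8, 5.0, 0.2
    dt, n_steps = 0.01, 2000
    rng = np.random.default_rng(1)

    t, x = 0.0, 1.0
    path = [x]
    for _ in range(n_steps):
        t += dt
        drift = x * weibull_pdf(t, shape, scale)          # trend proportional to the pdf
        x += drift * dt + sigma * x * np.sqrt(dt) * rng.standard_normal()
        path.append(x)

    print("simulated end value:", round(path[-1], 3))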

Keywords: Diffusion process, discrete sampling, likelihood estimation method, simulation, stochastic diffusion equation, trends functions, bi-parameters Weibull density function.

394 Analysis of Cost Estimation and Payment Systems for Consultant Contracts in the US, Japan, China and the UK

Authors: Shih-Hsu Wang, Yuan-Yuan Cheng, Ming-Tsung Lee, Wei-Chih Wang

Abstract:

Determining reasonable fees is the main objective of designing the cost estimation and payment systems for consultant contracts. However, project clients utilize different cost estimation and payment systems because of their varying views on the reasonableness of consultant fees. This study reviews the cost estimation and payment systems of consultant contracts for four countries: the US (Washington State Department of Transportation), Japan (Ministry of Land, Infrastructure, Transport and Tourism), China (Engineering Design Charging Standard) and the UK (Her Majesty's Treasury). Specifically, this work investigates the budgeting process, contractor selection method, contractual price negotiation process, cost review, and cost-control concept of the systems used in these countries. The main finding indicates that the project client's view on whether the fee is high will affect the way the client controls it. In the US, the fee is commonly considered to be high. As a result, a stringent auditing system (low flexibility given to the consultant) is applied. In the UK, the fee is viewed as low when compared to the total life-cycle project cost. Thus, a system that gives the consultant high flexibility in budgeting and cost reviewing is applied. In terms of the flexibility allowed for the consultant, the systems applied in Japan and China fall between those of the US and UK. Both the US and UK systems are helpful in determining a reasonable fee. However, in the US system, rigid auditing standards must be established and additional cost-audit manpower is required. In the UK system, sufficient historical cost data are needed to evaluate the reasonableness of the consultant's proposed fee.

Keywords: Consultant Services, Cost Estimation and Payment System, Payment Flexibility, Cost-control Concept
