Search results for: evolutionary computation
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 860

320 Seismic Hazard Analysis for a Multi Layer Fault System: Antalya (SW Turkey) Example

Authors: Nihat Dipova, Bulent Cangir

Abstract:

This article presents the results of a probabilistic seismic hazard analysis (PSHA) for Antalya (SW Turkey). Southwest Turkey is characterized by large earthquakes resulting from the continental collision between the African, Arabian and Eurasian plates and from crustal faults. Earthquakes around the study area are grouped into two classes: crustal earthquakes (D = 0-50 km) and subduction-zone earthquakes (D = 50-140 km). The maximum observed magnitude of subduction earthquakes is Mw = 6.0, while the maximum magnitude of crustal earthquakes is Mw = 6.6. The sources of crustal earthquakes are faults related to the Isparta Angle and Cyprus Arc tectonic structures. A new earthquake catalogue for Antalya, with a unified moment magnitude scale, has been prepared, and the seismicity of the area around Antalya city has been evaluated by defining the ‘a’ and ‘b’ parameters of the Gutenberg-Richter recurrence relationship. The standard Cornell-McGuire method has been used for hazard computation, utilizing the CRISIS2007 software. The attenuation relationship proposed by Chiou and Youngs (2008) has been used for the 0-50 km earthquakes and that of Youngs et al. (1997) for the deep subduction earthquakes. Finally, a seismic hazard map for peak horizontal acceleration on a uniform site condition of firm rock (average shear-wave velocity of about 1130 m/s) at a hazard level of 10% probability of exceedance in 50 years has been prepared.
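As a rough illustration of how the Gutenberg-Richter ‘a’ and ‘b’ parameters feed a hazard computation, the sketch below evaluates the recurrence relation log₁₀N(m) = a − bm and the Poisson probability of exceedance over a 50-year window; the parameter values are illustrative assumptions, not those of the Antalya catalogue.

```python
import math

def gr_annual_rate(m, a, b):
    """Annual number of earthquakes with magnitude >= m, from the
    Gutenberg-Richter relation log10 N(m) = a - b*m."""
    return 10 ** (a - b * m)

# Illustrative (not the paper's) parameters: a = 4.0, b = 1.0
rate_m5 = gr_annual_rate(5.0, a=4.0, b=1.0)   # 0.1 events/yr above Mw 5
# Poisson probability of at least one M >= 5 event in 50 years:
p_exceed = 1.0 - math.exp(-rate_m5 * 50)
```

The 10%-in-50-years hazard level quoted in the abstract corresponds to the same Poisson exceedance formula solved for the ground-motion level rather than the magnitude.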

Keywords: Antalya, peak ground acceleration, seismic hazard assessment, subduction

Procedia PDF Downloads 358
319 Experiences of Timing Analysis of Parallel Embedded Software

Authors: Muhammad Waqar Aziz, Syed Abdul Baqi Shah

Abstract:

The execution time analysis is fundamental to the successful design and execution of real-time embedded software. In such analysis, the Worst-Case Execution Time (WCET) of a program is a key measure, on the basis of which system tasks are scheduled. The WCET analysis of embedded software is also needed for system understanding and to guarantee its behavior. WCET analysis can be performed statically (without executing the program) or dynamically (through measurement). Traditionally, research on WCET analysis assumes sequential code running on single-core platforms. However, as computation is steadily moving towards a combination of parallel programs and multi-core hardware, new challenges in WCET analysis need to be addressed. In this article, we report our experiences of performing the WCET analysis of Parallel Embedded Software (PES) running on a multi-core platform. The primary purpose was to investigate how WCET estimates of PES can be computed statically, and how they can be derived dynamically. Our experiences, as reported in this article, include the challenges we faced, possible solutions to these challenges and the workarounds that were developed. This article also provides observations on the benefits and drawbacks of deriving the WCET estimates using the said methods and provides useful recommendations for further research in this area.
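The measurement-based side of such an analysis can be caricatured as a high-water-mark estimate over repeated runs; the sketch below is only an illustration of that idea (the `task` function and the safety margin are invented), not the instrumentation the authors used.

```python
import time

def measured_wcet(func, runs=1000, margin=1.2):
    """Measurement-based high-water-mark WCET estimate (illustrative):
    the largest observed execution time, scaled by a safety margin.
    Unlike static analysis, this can under-estimate the true WCET,
    since the worst-case path may never be exercised."""
    worst = 0.0
    for _ in range(runs):
        start = time.perf_counter()
        func()
        worst = max(worst, time.perf_counter() - start)
    return worst * margin

def task():
    sum(i * i for i in range(1000))  # stand-in for an embedded task

estimate = measured_wcet(task, runs=200)
```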

Keywords: embedded software, worst-case execution-time analysis, static flow analysis, measurement-based analysis, parallel computing

Procedia PDF Downloads 307
318 Functional Role of Tyr12 in the Catalytic Activity of Zeta-Like Glutathione S-Transferase from Acidovorax sp. KKS102

Authors: D. Shehu, Z. Alias

Abstract:

Glutathione S-transferases (GSTs) are a family of enzymes that function in the detoxification of a variety of electrophilic substrates. In the present work, we report a novel zeta-like GST (designated KKSG9) from the biphenyl/polychlorobiphenyl-degrading organism Acidovorax sp. KKS102. KKSG9 possesses low sequence similarity but similar biochemical properties to zeta class GSTs. The gene for KKSG9 was cloned, and the enzyme was purified and biochemically characterized. Functional analysis showed that the enzyme exhibits wider substrate specificity than most zeta class GSTs, reacting with 1-chloro-2,4-dinitrobenzene (CDNB), p-nitrobenzyl chloride (NBC), ethacrynic acid (EA), hydrogen peroxide, and cumene hydroperoxide (CuOOH). The enzyme also displayed dehalogenation activity against dichloroacetate (a common substrate for zeta class GSTs) in addition to permethrin and dieldrin. The functional role of Tyr12 was also investigated by site-directed mutagenesis. The mutant (Y12C) displayed lower catalytic activity and dehalogenation function against all the substrates when compared with the wild type. Kinetic analysis using NBC and GSH as substrates showed that the mutant (Y12C) displayed a higher affinity for NBC than the wild type; however, no significant change in GSH affinity was observed. These findings suggest that the presence of a tyrosine residue in the motif might represent an evolutionary trend toward improving the catalytic activity of the enzyme. The enzyme could also be useful in the bioremediation of various types of organochlorine pollutants.

Keywords: Acidovorax sp. KKS102, bioremediation, glutathione s-transferase, site-directed mutagenesis, zeta

Procedia PDF Downloads 138
317 A Decision Support System to Detect the Lumbar Disc Disease on the Basis of Clinical MRI

Authors: Yavuz Unal, Kemal Polat, H. Erdinc Kocer

Abstract:

In this study, a decision support system comprising three stages is proposed to detect disc abnormalities of the lumbar region. In the first stage, feature extraction, T2-weighted sagittal and axial Magnetic Resonance Images (MRI) were taken from 55 people, and 27 appearance and shape features were acquired from both the sagittal and transverse images. In the second stage, feature weighting, the k-means clustering based feature weighting (KMCBFW) method proposed by Gunes et al. has been applied. Finally, in the third stage, classification, classifier algorithms including the multi-layer perceptron (MLP) neural network, support vector machine (SVM), Naïve Bayes, and decision tree have been used to classify whether or not the subject has lumbar disc disease. In order to test the performance of the proposed method, the classification accuracy (%), sensitivity, specificity, precision, recall, f-measure, kappa value, and computation times have been used. The best hybrid model for detecting lumbar disc disease based on both sagittal and axial MR images is the combination of k-means clustering based feature weighting and the decision tree.
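One common formulation of k-means clustering based feature weighting scales each feature by the ratio of its overall mean to the mean of its k-means cluster centres; the sketch below assumes that formulation (it may differ in detail from the variant of Gunes et al. used in the paper).

```python
def kmeans_1d(values, k=2, iters=50):
    """Tiny 1-D k-means (Lloyd's algorithm), run per feature column."""
    centres = [min(values), max(values)]
    for _ in range(iters):
        groups = [[] for _ in range(k)]
        for v in values:
            groups[min(range(k), key=lambda j: abs(v - centres[j]))].append(v)
        centres = [sum(g) / len(g) if g else centres[j]
                   for j, g in enumerate(groups)]
    return centres

def kmc_feature_weights(columns):
    """Assumed KMCBFW formulation: weight each feature by
    (feature mean) / (mean of its k-means cluster centres)."""
    weights = []
    for col in columns:
        centres = kmeans_1d(col)
        weights.append((sum(col) / len(col)) / (sum(centres) / len(centres)))
    return weights
```

Weighted features (each column multiplied by its weight) would then be fed to the MLP, SVM, Naïve Bayes, or decision tree classifiers of the third stage.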

Keywords: lumbar disc abnormality, lumbar MRI, lumbar spine, hybrid models, hybrid features, k-means clustering based feature weighting

Procedia PDF Downloads 501
316 Stability Enhancement of a Large-Scale Power System Using Power System Stabilizer Based on Adaptive Neuro Fuzzy Inference System

Authors: Agung Budi Muljono, I Made Ginarsa, I Made Ari Nrartha

Abstract:

A large-scale power system (LSPS) consists of two or more sub-systems connected by interconnecting transmission lines. The loading pattern on an LSPS always changes from time to time and varies depending on consumer needs. Serious instability problems appear in an LSPS due to load fluctuations at all buses. An adaptive neuro-fuzzy inference system (ANFIS)-based power system stabilizer (PSS) is presented to address this stability problem and to enhance the stability of an LSPS. The ANFIS control is presented because it is computationally more efficient than Mamdani fuzzy control. Simulation results show that the presented PSS is able to maintain stability by decreasing the peak overshoot to −2.56 × 10⁻⁵ pu for the rotor speed deviation Δω₂₋₃. The presented PSS also achieves a settling time of 3.78 s for the local mode oscillation. Furthermore, the presented PSS improves the peak overshoot and settling time of Δω₃₋₉ to −0.868 × 10⁻⁵ pu and 3.50 s for the inter-area oscillation.

Keywords: ANFIS, large-scale, power system, PSS, stability enhancement

Procedia PDF Downloads 291
315 The Development Stages of Transformation of Water Policy Management in Victoria

Authors: Ratri Werdiningtyas, Yongping Wei, Andrew Western

Abstract:

The status quo of social-ecological systems is the result of not only natural processes but also the accumulated consequences of policies applied in the past. Often, water management objectives are challenging and are only achieved to a limited degree on the ground. In choosing water management approaches, it is important to account for current conditions and for important differences due to varied histories. Since the mid-nineteenth century, Victorian water management has evolved through a series of policy regime shifts. The main goal of this research is to explore and identify the stages of the evolution of the water policy instruments as practiced in Victoria from 1890 to 2016. This comparative historical analysis has identified four stages in Victorian policy instrument development. In the first stage, the creation of policy instruments aimed to match the demand and supply of the resource (reserve condition). The second stage began after the natural system alone failed to balance supply and demand; the focus of the policy instruments shifted to an authority perspective in this stage. Later, the increasing number of actors interested in water led to another change in policy instruments. The third stage focused on the significant role of information from different relevant actors. The fourth and current stage is the most advanced, in that it involves the creation of policy instruments that synergize the previous three focal factors: reserve, authority, and information. When considering policy in other jurisdictions, these findings suggest that a key priority should be to reflect on the jurisdiction's current position among these four evolutionary stages and to improve progressively, rather than directly adopting approaches from elsewhere without understanding the current position.

Keywords: policy instrument, policy transformation, socio-ecological system, water management

Procedia PDF Downloads 126
314 Triangulations via Iterated Largest Angle Bisection

Authors: Yeonjune Kang

Abstract:

A triangulation of a planar region is a partition of that region into triangles. In the finite element method, triangulations are often used as the grid underlying a computation. In order to be suitable as a finite element mesh, a triangulation must have well-shaped triangles, according to criteria that depend on the details of the particular problem. For instance, most methods require that all triangles be small and as close to the equilateral shape as possible. Stated differently, one wants to avoid having either thin or flat triangles in the triangulation. There are many triangulation procedures, a particular one being the longest edge bisection algorithm described below. Starting with a given triangle, locate the midpoint of the longest edge and join it to the opposite vertex of the triangle. Two smaller triangles are formed; apply the same bisection procedure to each of these triangles. Continuing in this manner, after n steps one obtains a triangulation of the initial triangle into 2ⁿ smaller triangles. The longest edge algorithm was first considered in the late 1970s. It was shown by various authors that this triangulation has the desirable properties for the finite element method: independently of the number of iterations, the angles of these triangles cannot get too small; moreover, the size of the triangles decays exponentially. In the present paper we consider a related triangulation algorithm which we refer to as the largest angle bisection procedure. As the name suggests, rather than bisecting the longest edge, at each step we bisect the largest angle. We study the properties of the resulting triangulation and prove that, while the general behavior resembles the one in the longest edge bisection algorithm, there are several notable differences as well.
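The longest edge bisection step described above can be sketched directly; the code below is a minimal illustration (the vertex ordering and data layout are our own choices), showing that n steps turn one triangle into 2ⁿ smaller ones.

```python
def bisect_longest_edge(tri):
    """One step of longest-edge bisection: split the triangle at the
    midpoint of its longest edge, joined to the opposite vertex."""
    def d2(p, q):
        return (p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2
    a, b, c = tri
    # reorder vertices so (p, q) is the longest edge, r the opposite vertex
    edges = [(d2(a, b), (a, b, c)), (d2(b, c), (b, c, a)), (d2(c, a), (c, a, b))]
    _, (p, q, r) = max(edges, key=lambda e: e[0])
    m = ((p[0] + q[0]) / 2, (p[1] + q[1]) / 2)
    return [(p, m, r), (m, q, r)]

def refine(tris, steps):
    """After n steps, each initial triangle yields 2**n triangles."""
    for _ in range(steps):
        tris = [t for tri in tris for t in bisect_longest_edge(tri)]
    return tris
```

The largest angle bisection procedure studied in the paper would replace the longest-edge selection with locating the largest interior angle and bisecting it.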

Keywords: angle bisectors, geometry, triangulation, applied mathematics

Procedia PDF Downloads 375
313 Simulation of Improving the Efficiency of a Fire-Tube Steam Boiler

Authors: Roudane Mohamed

Abstract:

In this study, we are interested in improving the efficiency of a 4.5 t/h steam boiler and in minimizing the flue gas discharge temperature by adding a counter-flow heat exchanger at the boiler outlet. The mathematical approach to the problem is based on the heat transfer equations for convection and conduction. These equations were chosen because of their extensive use in a wide range of applications. A software tool was developed for solving the equations governing these phenomena and for estimating the thermal characteristics of the boiler, through a study of the thermal characteristics of the heat exchanger by both the LMTD and NTU methods. Subsequently, an analysis of the thermal performance of the steam boiler was carried out by studying the influence of different operating parameters on heat flux densities, temperatures, exchanged power and performance. The study showed that the behavior of the boiler is strongly influenced by these parameters. In the first regime (P = 3.5 bar), the boiler efficiency improved significantly, from 93.03% to 99.43% (rates of 6.47% and 4.5%). For the maximum speed, the change is less important, of the order of 1.06%. The results obtained in this study are of great interest to industrial utilities equipped with smoke-tube boilers, where the preheated air temperature can be used to calculate the actual gas temperature, so that the heat exchanged is increased and the smoke discharge temperature minimized. On the other hand, this work could be used as a computational model in the design process.
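For a counter-flow exchanger, the LMTD method mentioned above reduces to ΔT_lm = (ΔT₁ − ΔT₂)/ln(ΔT₁/ΔT₂) with exchanged power Q = U·A·ΔT_lm; the sketch below uses invented temperatures and U·A values, not the boiler's operating data.

```python
import math

def lmtd_counterflow(t_hot_in, t_hot_out, t_cold_in, t_cold_out):
    """Log-mean temperature difference for a counter-flow exchanger."""
    dt1 = t_hot_in - t_cold_out   # terminal difference, hot inlet end
    dt2 = t_hot_out - t_cold_in   # terminal difference, hot outlet end
    if abs(dt1 - dt2) < 1e-9:     # limit case: equal terminal differences
        return dt1
    return (dt1 - dt2) / math.log(dt1 / dt2)

# Illustrative values (not the paper's): flue gas cooled 250 -> 120 C
# while preheating air 20 -> 80 C, with U = 35 W/m2K over A = 12 m2.
lmtd = lmtd_counterflow(250, 120, 20, 80)
q_watts = 35 * 12 * lmtd
```

The NTU method would instead start from the exchanger effectiveness and the ratio of heat-capacity rates, which is convenient when outlet temperatures are unknown.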

Keywords: numerical simulation, efficiency, fire tube, heat exchanger, convection and conduction

Procedia PDF Downloads 206
312 Exploring the Role of Data Mining in Crime Classification: A Systematic Literature Review

Authors: Faisal Muhibuddin, Ani Dijah Rahajoe

Abstract:

This in-depth exploration, through a systematic literature review, scrutinizes the nuanced role of data mining in the classification of criminal activities. The research focuses on investigating various methodological aspects and recent developments in leveraging data mining techniques to enhance the effectiveness and precision of crime categorization. Commencing with an exposition of the foundational concepts of crime classification and its evolutionary dynamics, this study details the paradigm shift from conventional methods towards approaches supported by data mining, addressing the challenges and complexities inherent in the modern crime landscape. Specifically, the research delves into various data mining techniques, including K-means clustering, Naïve Bayes, K-nearest neighbour, and related clustering methods. A comprehensive review of the strengths and limitations of each technique provides insights into their respective contributions to improving crime classification models. The integration of diverse data sources takes centre stage in this research. A detailed analysis explores how the amalgamation of structured data (such as criminal records) and unstructured data (such as social media) can offer a holistic understanding of crime, enriching classification models with more profound insights. Furthermore, the study explores the temporal implications in crime classification, emphasizing the significance of considering temporal factors to comprehend long-term trends and seasonality. The availability of real-time data is also elucidated as a crucial element in enhancing responsiveness and accuracy in crime classification.
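As a toy illustration of one of the surveyed techniques, the sketch below implements a categorical Naïve Bayes classifier with Laplace smoothing on invented crime records; the features, labels, and values are assumptions for demonstration only.

```python
import math
from collections import Counter

def train_nb(rows, labels):
    """Categorical Naive Bayes: class priors plus per-feature value counts."""
    n = len(labels)
    priors = {c: k / n for c, k in Counter(labels).items()}
    counts = {}   # (feature_index, value, class) -> count
    for row, c in zip(rows, labels):
        for i, v in enumerate(row):
            counts[(i, v, c)] = counts.get((i, v, c), 0) + 1
    return priors, counts, Counter(labels)

def predict_nb(row, model, values_per_feature=2):
    """Pick the class maximizing log prior + smoothed log likelihoods."""
    priors, counts, class_counts = model
    def score(c):
        s = math.log(priors[c])
        for i, v in enumerate(row):
            num = counts.get((i, v, c), 0) + 1            # Laplace smoothing
            s += math.log(num / (class_counts[c] + values_per_feature))
        return s
    return max(priors, key=score)

# Invented records: (time of day, location type) -> crime category
rows = [("night", "street"), ("night", "street"),
        ("day", "mall"), ("day", "street")]
labels = ["theft", "theft", "fraud", "fraud"]
model = train_nb(rows, labels)
pred = predict_nb(("night", "street"), model)
```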

Keywords: data mining, classification algorithm, naïve bayes, k-means clustering, k-nearest neighbor, crime, data analysis, systematic literature review

Procedia PDF Downloads 45
311 Investigation on a Wave-Powered Electrical Generator Consisted of a Geared Motor-Generator Housed by a Double-Cone Rolling on Concentric Circular Rails

Authors: Barenten Suciu

Abstract:

An electrical generator able to harness energy from water waves, designed as a double-cone geared motor-generator (DCGMG), is proposed and theoretically investigated. Similar to the differential gear mechanism used in the transmission system of automotive vehicles, an angular speed differential is created between the cones rolling on two concentric circular rails. Water waves acting on the floating DCGMG produce the speed differential, and a gear-box amplifies it to gain sufficient torque for power generation. A model that allows computation of the speed differential, torque, and power of the DCGMG is suggested. The influence of various parameters, regarding both the construction of the DCGMG and the contact between the double-cone and the rails, on the electro-mechanical output is emphasized. The results obtained indicate that the generated electrical power can be increased by augmenting the mass of the double-cone, the span of the rails, the apex angle of the cones, the friction between cones and rails, the amplification factor of the gear-box, and the efficiency of the motor-generator. Such findings are useful for formulating a design methodology for the proposed wave-powered generator.

Keywords: amplification of angular speed differential, circular concentric rails, double-cone, wave-powered electrical generator

Procedia PDF Downloads 139
310 Testing a Flexible Manufacturing System Facility Production Capacity through Discrete Event Simulation: Automotive Case Study

Authors: Justyna Rybicka, Ashutosh Tiwari, Shane Enticott

Abstract:

In the age of automation and computation-aided manufacturing, it is clear that manufacturing systems have become more complex than ever before. Although technological advances provide the capability to gain more value with fewer resources, full utilisation of the manufacturing capabilities available to organisations is sometimes difficult to achieve. Flexible manufacturing systems (FMS) provide a unique capability to manufacturing organisations where there is a need for product range diversification, by providing line efficiency through production flexibility. This is very valuable in trend-driven production set-ups or niche-volume production requirements. Although an FMS provides flexible and efficient facilities, its optimal set-up is key to achieving production performance. As many variables are interlinked due to the flexibility provided by the FMS, analytical calculations are not always sufficient to predict the FMS's performance. Simulation modelling is capable of capturing the complexity and constraints associated with an FMS. This paper demonstrates how discrete event simulation (DES) can address complexity in an FMS to optimise production line performance. A case study of an automotive FMS is presented. The DES model demonstrates different configuration options depending on the prioritised objective: utilisation or throughput. Additionally, this paper provides insight into the impact of system set-up constraints on FMS performance and demonstrates the exploration of the optimal production set-up.
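A minimal flavour of discrete-event reasoning about utilisation and throughput can be given in a few lines; the sketch below (a toy m-machine queue with a fixed processing time, not the authors' FMS model) trades job arrival times against machine availability.

```python
import heapq

def simulate(jobs, machines, proc_time):
    """Minimal discrete-event sketch: jobs (arrival times) are served by
    identical stations; returns makespan and machine utilisation."""
    free_at = [0.0] * machines          # next instant each machine is free
    heapq.heapify(free_at)
    busy = 0.0
    end = 0.0
    for arrival in sorted(jobs):
        start = max(arrival, heapq.heappop(free_at))  # wait if all busy
        finish = start + proc_time
        heapq.heappush(free_at, finish)
        busy += proc_time
        end = max(end, finish)
    utilisation = busy / (machines * end)
    return end, utilisation

makespan, util = simulate(jobs=[0, 0, 1, 2, 5], machines=2, proc_time=3.0)
```

A real DES model of an FMS would add stochastic processing times, routing flexibility, and tooling constraints, which is exactly where analytical formulas stop being sufficient.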

Keywords: discrete event simulation, flexible manufacturing system, capacity performance, automotive

Procedia PDF Downloads 311
309 Appraising the Evolution of Architecture as the Representation of Material Culture: The Nigerian Digest

Authors: Ikenna Emmanuel Idoko

Abstract:

Evolution and evolutionary processes are phenomena woven into the fabric of universal living, hence expressions such as 'universal evolution'. These evolutions cut across all facets of human accomplishment, of which architecture is a part. There is a notion in political science that politics and the act of politicking are local, meaning that politics and political processes are unique and peculiar to a people, depending on their sociocultural makeup. The notion is also applicable in architecture, because the architecture of a people depends mostly on several factors, such as climatic conditions, material availability, socio-cultural beliefs and religious inclinations. Stemming from the cultural dimension, it is of course common knowledge that every society is driven by its own unique culture. The fusion of architecture and culture creates the actual uniqueness which underlines the 'archi-cultural' representation of a people's material culture. This paper is aimed at appraising architectural evolution as it affects the representation of the material culture of a people. For an effective systemization of this aim, relevant literature was reviewed, coupled with visits to and studies of existing buildings in Nigeria, to properly understand the peculiarities of the architecture of the selected area. Since architecture relies heavily on pictorial evidence, pictures and graphical representations were extensively utilized and channelled to aid a better understanding of the study. Importantly, this paper adds to the body of existing knowledge in the Arts and Humanities by speaking extensively to the tenets of cultural representation in buildings. Similarly, the field of architecture, specifically traditional architecture, gains extra knowledge from the study of some important, almost neglected or forgotten architectural elements of various traditional buildings.

Keywords: evolution, architecture, material, culture

Procedia PDF Downloads 37
308 Molecular Identification and Evolutionary Status of Lucilia bufonivora: An Obligate Parasite of Amphibians in Europe

Authors: Gerardo Arias, Richard Wall, Jamie Stevens

Abstract:

Lucilia bufonivora Moniez is an obligate parasite of toads and frogs widely distributed in Europe. Its sister taxon Lucilia silvarum Meigen behaves mainly as a carrion breeder in Europe; however, it has been reported as a facultative parasite of amphibians. These two closely related species are morphologically almost identical, which has led to misidentification; in fact, it has been suggested that the amphibian myiasis cases reported in Europe as L. silvarum should be attributed to L. bufonivora. Both species remain poorly studied and their taxonomic relationships are still unclear. The identification of the larval specimens involved in amphibian myiasis with molecular tools, together with phylogenetic analysis of these two closely related species, may resolve this problem. In this work, seventeen unidentified larval specimens extracted from toad myiasis cases in the UK, the Netherlands and Switzerland were obtained; their COX1 (mtDNA) and EF1-α (nuclear DNA) gene regions were amplified and then sequenced. The 17 larval samples were identified with both molecular markers as L. bufonivora. Phylogenetic analysis was carried out with 10 other blowfly species, including L. silvarum samples from the UK and USA. Bayesian inference trees of COX1 and of a combined-gene dataset suggested that L. silvarum and L. bufonivora are separate sister species. However, the nuclear gene EF1-α does not appear to resolve their relationships, suggesting that the rate of evolution of the mtDNA is much faster than that of the nuclear DNA. This work provides molecular evidence for the successful identification of L. bufonivora and a molecular analysis of populations of this obligate parasite from different locations across Europe. The relationships with L. silvarum are discussed.

Keywords: calliphoridae, molecular evolution, myiasis, obligate parasitism

Procedia PDF Downloads 215
307 Chaotic Electronic System with Lambda Diode

Authors: George Mahalu

Abstract:

The Chua diode has been configured over time in various ways, using electronic structures such as operational amplifiers (OAs) or devices with gas or semiconductors. Among semiconductor devices, tunnel (Esaki) diodes are most often considered and, more recently, transistorized configurations such as lambda diodes. The work proposed here models a lambda-diode-type configuration consisting of two junction field-effect transistors (JFETs). The original scheme is created in the MULTISIM electronic simulation environment and is analyzed in order to identify the conditions for the appearance of the evolutionary unpredictability specific to nonlinear dynamic systems with chaos-induced behavior. The deterministic chaotic oscillator is of the autonomous type, which places it in the class of Chua-type oscillators, the most significant difference being the presence of the nonlinear device mentioned above. The chaotic behavior is identified both by means of strange-attractor-type trajectories visible during the simulation and by highlighting the hypersensitivity of the system to small variations of one of the input parameters. The results obtained through simulation, and the conclusions drawn, are useful for further research into ways of implementing such electronic solutions in theoretical and practical applications related to modern small-signal amplification structures, to systems for encoding and decoding messages over various modern communication channels, as well as to new structures that can be imagined both in modern neural networks and in the physical implementation of requirements imposed by current research, with the aim of obtaining practically usable solutions in quantum computing and quantum computers.

Keywords: chaos, lambda diode, strange attractor, nonlinear system

Procedia PDF Downloads 64
306 Faster, Lighter, More Accurate: A Deep Learning Ensemble for Content Moderation

Authors: Arian Hosseini, Mahmudul Hasan

Abstract:

To address the increasing need for efficient and accurate content moderation, we propose an efficient and lightweight deep classification ensemble structure. Our approach is based on a combination of simple visual features, designed for high-accuracy classification of violent content with low false positives. Our ensemble architecture utilizes a set of lightweight models with narrowed-down color features, and we apply it to both images and videos. We evaluated our approach using a large dataset of explosion and blast contents and compared its performance to popular deep learning models such as ResNet-50. Our evaluation results demonstrate significant improvements in prediction accuracy, while benefiting from 7.64x faster inference and lower computation cost. While our approach is tailored to explosion detection, it can be applied to other similar content moderation and violence detection use cases as well. Based on our experiments, we propose a "think small, think many" philosophy in classification scenarios. We argue that transforming a single, large, monolithic deep model into a verification-based step model ensemble of multiple small, simple, and lightweight models with narrowed-down visual features can possibly lead to predictions with higher accuracy.
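The "verification-based step model ensemble" idea can be caricatured as a cascade of cheap verifiers, where most negatives exit early; the sketch below is an assumed scheme with invented colour-feature checks, not the authors' actual architecture or features.

```python
def cascade_classify(verifiers, sample):
    """'Think small, think many': a chain of lightweight verifiers; content
    is flagged only if every stage agrees, so cheap stages reject most
    negatives before later stages run (assumed scheme)."""
    for verify in verifiers:
        if not verify(sample):
            return False
    return True

# Toy stand-ins for narrowed-down colour/motion checks (all invented):
stages = [
    lambda s: s["brightness"] > 0.6,    # explosions tend to be bright
    lambda s: s["orange_ratio"] > 0.3,  # flame-like colour content
    lambda s: s["motion"] > 0.5,        # rapid frame-to-frame change
]
flagged = cascade_classify(stages, {"brightness": 0.9,
                                    "orange_ratio": 0.4,
                                    "motion": 0.7})
```

Each stage here is trivially cheap; in the paper's setting, each would be a small model over narrowed-down visual features, which is what yields the reported inference speed-up over a single monolithic network.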

Keywords: deep classification, content moderation, ensemble learning, explosion detection, video processing

Procedia PDF Downloads 28
305 Powerful Media: Reflection of Professional Audience

Authors: Hamide Farshad, Mohammadreza Javidi Abdollah Zadeh Aval

Abstract:

As a result of the growing penetration of the media into human life, a new role, under the title of "audience", has been defined in social life, a role which has changed dramatically since its formation. This article aims to define the audience's position in the new media equations, a shift that amounts to a transformation of the media's role. Using library and attributive methods to study the history, the evolving outlook on the audience and the relation between audience and media in the new media context are examined. It was perceived in the past that public communication ended with the audience receiving the message. But after the emergence of interactive media and the transformation of the audience's social life, a new kind of public communication has formed, and the imaginary picture of the audience has been replaced by the audience's impact on the communication process. Part of this impact can be seen in the form of feedback, which is one of the elements of public communication. In public communication, audience feedback is fully accepted; but in many cases, alongside audience feedback, the media changes its direction, and this direction shift is known as media feedback. In this state, the media and the audience are both doers, consistently changing their positions in an interaction. With the growing number of audiences and media outlets, this process has taken on a new character, and the role of the doer is sometimes taken by an audience influencing another audience, or by a media outlet influencing another media outlet. In this article, this multiple public communication process is shown through a model under the title "The bilateral influence of the audience and the media." Based on this model, the power of the audience and that of the media are not two sides of a coin; as a result, by accepting these two as doers, the bilateral powers of the audience and the media are complementary to each other.
Furthermore, the compatibility between the media and the audience is analyzed under the bilateral and interactional relation hypothesis, and by analyzing the action-law hypothesis, the dos and don'ts of this role are defined, which the media must know and accept in order to survive. These also play a determining role in the strategic studies of a media outlet.

Keywords: audience, effect, media, interaction, action laws

Procedia PDF Downloads 465
304 Three-Dimensional Unsteady Natural Convection and Entropy Generation in an Inclined Cubical Trapezoidal Cavity Subjected to Uniformly Heated Bottom Wall

Authors: Farshid Fathinia

Abstract:

Numerical computation of unsteady laminar three-dimensional natural convection and entropy generation in an inclined cubical trapezoidal air-filled cavity is performed for the first time in this work. The vertical right and left sidewalls of the cavity are maintained at constant cold temperatures. The lower wall is subjected to a constant hot temperature, while the upper one is considered insulated. Computations are performed for Rayleigh numbers varied as 10³ ≤ Ra ≤ 10⁵, while the trapezoidal cavity inclination angle is varied as 0° ≤ ϕ ≤ 180°. The Prandtl number is kept constant at Pr = 0.71. The second law of thermodynamics is applied to obtain the thermodynamic losses inside the cavity due to both heat transfer and fluid friction irreversibilities. The variation of the local and average Nusselt numbers is presented and discussed, while streamlines, isotherms and entropy contours are presented in both two- and three-dimensional form. The results show that when the Rayleigh number increases, the flow patterns change, especially in the three-dimensional results, and the flow circulation increases. Also, the effect of the inclination angle on the total entropy generation becomes insignificant when the Rayleigh number is low. Moreover, when the Rayleigh number increases, the average Nusselt number increases.
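For orientation, the governing dimensionless groups are Ra = gβΔT L³/(να) and Pr = ν/α; the sketch below plugs in assumed air properties (not taken from the paper) and lands inside the studied 10³-10⁵ range.

```python
def rayleigh(g, beta, dT, L, nu, alpha):
    """Rayleigh number Ra = g*beta*dT*L**3 / (nu*alpha); the study varies
    Ra between 1e3 and 1e5 at fixed Pr = nu/alpha = 0.71 (air)."""
    return g * beta * dT * L**3 / (nu * alpha)

# Illustrative air properties near 300 K (assumed, not from the paper):
nu, alpha = 1.6e-5, 2.25e-5         # kinematic viscosity, thermal diffusivity
pr = nu / alpha                     # ~0.71, matching the study
ra = rayleigh(9.81, beta=1 / 300, dT=5.0, L=0.05, nu=nu, alpha=alpha)
```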

Keywords: transient natural convection, trapezoidal cavity, three-dimensional flow, entropy generation, second law

Procedia PDF Downloads 333
303 On the Fourth-Order Hybrid Beta Polynomial Kernels in Kernel Density Estimation

Authors: Benson Ade Eniola Afere

Abstract:

This paper introduces a family of fourth-order hybrid beta polynomial kernels developed for statistical analysis. The assessment of these kernels' performance centers on two critical metrics: asymptotic mean integrated squared error (AMISE) and kernel efficiency. Through the utilization of both simulated and real-world datasets, a comprehensive evaluation was conducted, facilitating a thorough comparison with conventional fourth-order polynomial kernels. The evaluation procedure encompassed the computation of AMISE and efficiency values for both the proposed hybrid kernels and the established classical kernels. The consistently observed trend was the superior performance of the hybrid kernels when compared to their classical counterparts. This trend persisted across diverse datasets, underscoring the resilience and efficacy of the hybrid approach. By leveraging these performance metrics and conducting evaluations on both simulated and real-world data, this study furnishes compelling evidence in favour of the superiority of the proposed hybrid beta polynomial kernels. The discernible enhancement in performance, as indicated by lower AMISE values and higher efficiency scores, strongly suggests that the proposed kernels offer heightened suitability for statistical analysis tasks when compared to traditional kernels.
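For context, a second-order beta-family (Epanechnikov) kernel estimate and the generic expression AMISE(h) = R(K)/(nh) + ¼h⁴μ₂(K)²R(f″) can be sketched as below; the paper's fourth-order hybrid kernels are not reproduced here, and the sample values are invented.

```python
import math

def epanechnikov(u):
    """Second-order beta-family (Epanechnikov) kernel."""
    return 0.75 * (1 - u * u) if abs(u) <= 1 else 0.0

def kde(x, data, h):
    """Kernel density estimate f_hat(x) = (1/nh) * sum K((x - Xi)/h)."""
    return sum(epanechnikov((x - xi) / h) for xi in data) / (len(data) * h)

def amise(h, n, r_k, mu2_k, r_f2):
    """AMISE(h) = R(K)/(n*h) + 0.25 * h**4 * mu2(K)**2 * R(f'')."""
    return r_k / (n * h) + 0.25 * h**4 * mu2_k**2 * r_f2

# Epanechnikov constants: R(K) = 3/5, mu2(K) = 1/5.  The AMISE-optimal
# bandwidth is h* = [R(K) / (n * mu2(K)**2 * R(f''))]**(1/5); we assume
# n = 200 and R(f'') = 1 purely for illustration.
h_opt = (0.6 / (200 * (1 / 5) ** 2 * 1.0)) ** 0.2
```

Higher-order kernels such as the paper's fourth-order hybrids shrink the bias term (the h⁴ contribution) at the cost of the kernel taking negative values, which is what drives the lower AMISE values reported.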

Keywords: AMISE, efficiency, fourth-order kernels, hybrid kernels, kernel density estimation

Procedia PDF Downloads 58
302 Novel Bioinspired Design to Capture Smoky CO2 by Reactive Absorption with Aqueous Scrubber

Authors: J. E. O. Hernandez

Abstract:

In the next 20 years, energy production by burning fuels will increase, and so will the atmospheric concentration of CO2 and its well-known threats to life on Earth. The technologies available for capturing CO2 are still dubious, and this keeps fostering interest in bio-inspired approaches. The leading one is the application of carbonic anhydrase (CA), a superfast biocatalyst able to convert up to one million molecules of CO2 per second into carbonates in water. However, natural CA underperforms when applied to real smoky CO2 in chimneys and, so far, the efforts to create superior CAs in the lab rely on screening methods running under pristine conditions at the micro level, which are far from resembling those in chimneys. For the evolution of man-made enzymes, selection rather than screening would be ideal, but this is challenging because of the need for a suitable artificial environment that is also sustainable for our society. Herein we present the stepwise design and construction of a bioprocess (from bench scale to semi-pilot) for evolutionary selection experiments. In this bioprocess, reaction and absorption took place simultaneously at atmospheric pressure in a spray tower. The scrubbing solution was fed countercurrently by reusing municipal water pressure and was mainly prepared with water, carbonic anhydrase and calcium chloride. This bioprocess allowed for the enzymatic carbonation of smoky CO2, the reuse of process water and the recovery of solid carbonates without cooling of the smoke, pretreatments, amine solvents or compression of CO2. The average yield of solid carbonates was 0.54 g min⁻¹, or 12-fold the amount produced in serum bottles at lab bench scale. This bioprocess could be used as a tailor-made environment for driving the selection of superior CAs. The bioprocess and its matching CA could be sustainably used to reduce global warming caused by CO2 emissions from exhausts.

Keywords: biological carbon capture and sequestration, carbonic anhydrase, directed evolution, global warming

Procedia PDF Downloads 180
301 Mutational and Evolutionary Analysis of Interleukin-2 Gene in Four Pakistani Goat Breeds

Authors: Tanveer Hussain, Misbah Hussain, Masroor Ellahi Babar, Muhammad Traiq Pervez, Fiaz Hussain, Sana Zahoor, Rashid Saif

Abstract:

Interleukin 2 (IL-2) is a cytokine produced by activated T cells that plays an important role in the immune response against antigens. It acts in both an autocrine and a paracrine manner. It can stimulate B cells and various other cells such as monocytes, lymphokine-activated killer cells and natural killer cells. Acting in an autocrine fashion, the IL-2 protein plays a crucial role in the proliferation of T cells. IL-2 triggers the release of pro- and anti-inflammatory cytokines by activating several pathways. In the present study, exon 1 of the IL-2 gene of four local Pakistani breeds (Dera Din Panah, Beetal, Nachi and Kamori) from two provinces was amplified using reported ovine IL-2 primers, yielding a PCR product of 501 bp. All samples were sequenced to identify polymorphisms in the amplified region of the IL-2 gene. Analysis of the sequencing data resulted in the identification of one novel nucleotide substitution (T→A) in the amplified non-coding region of the IL-2 gene. Comparison of the IL-2 gene sequences of all four breeds with those of other goat breeds showed high sequence similarity, while phylogenetic analysis of the local breeds against other mammals showed that IL-2 is a variable gene which has undergone many substitutions. This high substitution rate may be due to decreased or increased selective pressure, and such rapid changes can also lead to changes in the function of the immune system. This pioneering study of Pakistani goat breeds calls for further studies on the immune system of each targeted breed to fully understand the functional role of IL-2 in goat immunity.

Keywords: interleukin 2, mutational analysis, phylogeny, goat breeds, Pakistan

Procedia PDF Downloads 590
300 Tensor Deep Stacking Neural Networks and Bilinear Mapping Based Speech Emotion Classification Using Facial Electromyography

Authors: P. S. Jagadeesh Kumar, Yang Yung, Wenli Hu

Abstract:

Speech emotion classification is an active research field seeking a robust and efficient classifier appropriate for different real-life applications. This work focuses on classifying different emotions from the speech signal, using features related to pitch, formants, energy contours, jitter, shimmer, and spectral, perceptual and temporal characteristics. Tensor deep stacking neural networks were used to examine the factors that influence the classification success rate. Facial electromyography signals were collected under several forms of focus in a controlled environment by means of audio-visual stimuli. The acquired facial electromyography signals were pre-processed using a moving average filter, and a set of statistical features was extracted. The extracted features were mapped to the corresponding emotions using bilinear mapping. With facial electromyography signals, a database comprising diverse emotions can be built with suitable fine-tuning of features and training data. A success rate of 92% can be attained without increasing the system complexity or the computation time for classifying diverse emotional states.
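The pre-processing step named in the abstract, a moving average filter, can be sketched in a few lines. This is a generic smoothing pass on a synthetic stand-in signal, not the paper's EMG data; the window length is an assumption.

```python
import numpy as np

def moving_average(signal, window):
    """Moving-average smoothing: convolve with a flat, normalized window.
    One plausible reading of the paper's EMG pre-processing step."""
    kernel = np.ones(window) / window
    return np.convolve(signal, kernel, mode="same")

# Noisy toy signal standing in for a facial EMG trace
rng = np.random.default_rng(1)
t = np.linspace(0, 1, 200)
raw = np.sin(2 * np.pi * 5 * t) + 0.4 * rng.normal(size=t.size)

smooth = moving_average(raw, window=9)
print(np.var(smooth) < np.var(raw))  # smoothing reduces signal variance
```

Statistical features (mean, variance, zero-crossings, and so on) would then be computed from the smoothed trace before the bilinear mapping stage.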

Keywords: speech emotion classification, tensor deep stacking neural networks, facial electromyography, bilinear mapping, audio-visual stimuli

Procedia PDF Downloads 229
299 A Multilayer Perceptron Neural Network Model Optimized by Genetic Algorithm for Significant Wave Height Prediction

Authors: Luis C. Parra

Abstract:

The prediction of significant wave height is an issue of great interest in the field of coastal activities because of the non-linear behavior of wave height and its complexity of prediction. This study presents a machine learning model to forecast the significant wave height measured by the oceanographic wave buoys anchored at Mooloolaba, from the Queensland Government Data. Modeling was performed by a multilayer perceptron neural network optimized by a genetic algorithm (GA-MLP), with ReLU as the activation function of the MLP. The GA is in charge of optimizing the MLP hyperparameters (learning rate, hidden layers, neurons, and activation functions) and of wrapper feature selection for the window width size. Results are assessed using the Mean Square Error (MSE), Root Mean Square Error (RMSE), Mean Absolute Error (MAE), and Mean Absolute Percentage Error (MAPE). The GA-MLP algorithm was run with a population size of thirty individuals for eight generations for the prediction optimization of 5 steps forward, obtaining a performance of 0.00104 MSE, 0.03222 RMSE, 0.02338 MAE, and 0.71163% MAPE. The results of the analysis suggest that the GA-MLP model is effective in predicting significant wave height in a one-step forecast with distant time windows, presenting 0.00014 MSE, 0.01180 RMSE, 0.00912 MAE, and 0.52500% MAPE with a correlation factor of 0.99940. The GA-MLP algorithm was compared with the ARIMA forecasting model and performed better on all performance criteria, validating the potential of this algorithm.
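The four error measures reported above are standard and easy to state precisely. The sketch below computes them on toy numbers, not on the Mooloolaba buoy series.

```python
import numpy as np

def forecast_metrics(y_true, y_pred):
    """MSE, RMSE, MAE, and MAPE (in %), the measures reported in the abstract."""
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    err = y_true - y_pred
    mse = float(np.mean(err**2))
    rmse = float(np.sqrt(mse))
    mae = float(np.mean(np.abs(err)))
    mape = float(100 * np.mean(np.abs(err / y_true)))  # assumes y_true has no zeros
    return mse, rmse, mae, mape

# Toy wave heights (metres): truth vs. forecast
mse, rmse, mae, mape = forecast_metrics([1.2, 1.5, 1.1], [1.1, 1.6, 1.0])
print(round(rmse, 3), round(mae, 3), round(mape, 2))
```

A GA wrapper would evaluate candidate hyperparameter sets by training the MLP and ranking individuals on one of these scores.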

Keywords: significant wave height, machine learning optimization, multilayer perceptron neural networks, evolutionary algorithms

Procedia PDF Downloads 89
298 The Use of Mobile Phone as Enhancement to Mark Multiple Choice Objectives English Grammar and Literature Examination: An Exploratory Case Study of Preliminary National Diploma Students, Abdu Gusau Polytechnic, Talata Mafara, Zamfara State, Nigeria

Authors: T. Abdulkadir

Abstract:

Most often, the marking and assessment of multiple-choice examinations has been regarded by many as a cumbersome and herculean task to accomplish manually in Nigeria. Usually, this is linked to the fact that massive numbers of candidates are known to take the same examination simultaneously. Marking such a mammoth number of booklets daunts even the fastest paid examiners, who often undertake the job with the resulting consequences of stress and boredom. This paper explores the evolution of the scheme, with the stated aim of transforming the marking of multiple-choice objective examinations into a thing of creative recreation, or perhaps a more relaxing activity, via the use of the mobile phone. A more 'pragmatic' method was employed to achieve this work, rather than a formal 'in-depth research' based approach, owing to the novelty of the mobile-smartphone e-marking scheme. Moreover, being a novel scheme, no recent academic work sharing the same topic concept as the use of a cell phone as an e-marking technique was found online; hence the dearth of even miscellaneous citations in this work. Anticipated future advancements steered the motive of this paper, which lays out the fundamental proposition. The paper introduces for the first time the concept of mobile-smartphone e-marking, the steps to achieve it, and the merits and demerits of the technique, all spelt out in the subsequent pages.

Keywords: cell phone, e-marking scheme (eMS), mobile phone, mobile-smart phone, multiple choice objectives (MCO), smartphone

Procedia PDF Downloads 228
297 Computational Prediction of the Effect of S477N Mutation on the RBD Binding Affinity and Structural Characteristic, A Molecular Dynamics Study

Authors: Mohammad Hossein Modarressi, Mozhgan Mondeali, Khabat Barkhordari, Ali Etemadi

Abstract:

The COVID-19 pandemic, caused by SARS-CoV-2, has led to significant concerns worldwide due to its catastrophic effects on public health. SARS-CoV-2 infection is initiated by the binding of the receptor-binding domain (RBD) of the viral spike protein to the ACE2 receptor in the host cell membrane. Due to the error-prone nature of the viral RNA-dependent polymerase complex, the virus genome, including the coding region for the RBD, acquires new mutations, leading to the appearance of multiple variants. These variants can potentially impact transmission, virulence, antigenicity and immune evasion properties. The S477N mutation, located in the RBD, has been observed in the SARS-CoV-2 omicron (B.1.1.529) variant. In this study, we investigated the consequences of the S477N mutation at the molecular level using computational approaches such as molecular dynamics simulation, protein-protein interaction analysis, immunoinformatics and free energy computation. We showed that substitution of Ser with Asn increases the stability of the spike protein and its affinity for ACE2, and thus increases the transmission potential of the virus. This mutation changes the folding and secondary structure of the spike protein. It also reduces antibody neutralization, raising concerns about re-infection, vaccine breakthrough and therapeutic value.

Keywords: S477N, COVID-19, molecular dynamics, SARS-CoV-2 mutations

Procedia PDF Downloads 152
296 Increasing Photosynthetic H2 Production by in vivo Expression of Re-Engineered Ferredoxin-Hydrogenase Fusion Protein in the Green Alga Chlamydomonas reinhardtii

Authors: Dake Xiong, Ben Hankamer, Ian Ross

Abstract:

The most urgent challenge of our time is to replace the depleting resources of fossil fuels with sustainable, environmentally friendly alternatives. Hydrogen is a promising CO2-neutral fuel for a more sustainable future, especially when produced photo-biologically. Hydrogen can be photosynthetically produced in unicellular green algae like Chlamydomonas reinhardtii, catalysed by the inducible, highly active and bidirectional [FeFe]-hydrogenase enzymes (HydA). However, evolutionary and physiological constraints severely restrict the hydrogen yield of algae for industrial scale-up, mainly due to the competition of other metabolic pathways for photosynthetic electrons. Among these constraints, a major challenge to be resolved is the inferior competitiveness of hydrogen production (catalysed by HydA) relative to NADPH production (catalysed by ferredoxin-NADP+ reductase (FNR)), which is essential for cell growth and takes up ~95% of photosynthetic electrons. In this work, the in vivo hydrogen production efficiency of mutants expressing a ferredoxin-hydrogenase (Fd*-HydA1*) fusion protein construct, in which the electron donor ferredoxin (Fd*) is fused to HydA1*, was investigated in the model organism C. reinhardtii. Once the Fd*-HydA1* fusion gene is expressed in algal cells, the fusion enzyme is able to draw the redistributed photosynthetic electrons and use them for efficient hydrogen production. Preliminary data show that mutants carrying the Fd*-HydA1* transgene achieve a ~2-fold increase in the photosynthetic hydrogen production rate compared with their parental strain, which possesses only the native HydA in vivo. Therefore, more efficient hydrogen production in microalgae can be achieved through the expression of synthetic enzymes.

Keywords: Chlamydomonas reinhardtii, ferredoxin, fusion protein, hydrogen production, hydrogenase

Procedia PDF Downloads 242
295 Numerical Analysis of a Pilot Solar Chimney Power Plant

Authors: Ehsan Gholamalizadeh, Jae Dong Chung

Abstract:

The solar chimney power plant is a feasible solar thermal system that produces electricity from the Sun. The objective of this study is to investigate buoyancy-driven flow and heat transfer through a built pilot solar chimney system called the 'Kerman Project'. The system has a chimney with a height of 60 m and a diameter of 3 m; the average radius of its solar collector is about 20 m, and its average collector height is about 2 m. A three-dimensional simulation was conducted to analyze the system using computational fluid dynamics (CFD). In this model, the radiative transfer equation was solved using the discrete ordinates (DO) radiation model, taking into account non-gray radiation behavior. In order to model solar irradiation from the sun's rays, a solar ray tracing algorithm was coupled to the computation via a source term in the energy equation. The model was validated by comparison with the experimental data of the Manzanares prototype and with the performance of the built pilot system. Then, based on the numerical simulations, velocity and temperature distributions through the system, the temperature profile of the ground surface and the system performance were presented. The analysis accurately captures the flow and heat transfer characteristics through the pilot system and predicts its performance.

Keywords: buoyancy-driven flow, computational fluid dynamics, heat transfer, renewable energy, solar chimney power plant

Procedia PDF Downloads 240
294 Designing a Cricket Team Selection Method Using Super-Efficient DEA and a Semi-Variance Approach

Authors: Arnab Adhikari, Adrija Majumdar, Gaurav Gupta, Arnab Bisi

Abstract:

Team formation plays an instrumental role in sports such as cricket. The existing literature reveals that most work on player selection focuses only on player efficiency and ignores consistency. This motivates us to design an improved player selection method based on both player efficiency and consistency. To measure player efficiency, we employ a modified data envelopment analysis (DEA) technique, namely the 'super-efficient DEA model'. We design a modified consistency index based on a semi-variance approach. Here, we introduce a new parameter called the 'fitness index' into the consistency computation to assess a player's fitness level. Finally, we devise a single performance score from both the efficiency score and the consistency score with the help of a linear programming model. To test the robustness of our method, we perform a rigorous numerical analysis to determine the all-time best One Day International (ODI) Cricket XI. Next, we conduct extensive comparative studies of the efficiency scores, consistency scores and selected teams between the existing methods and the proposed method, and explain the rationale behind the improvement.
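The semi-variance idea behind the consistency index can be illustrated in a few lines: penalize only below-mean performances, so a steady scorer ranks above a streaky one with a similar average. The exact index (including the fitness index parameter) is the paper's own; this is only a downside semi-variance building block on invented scores.

```python
import numpy as np

def downside_semivariance(scores):
    """Semi-variance over below-mean outcomes: mean of squared shortfalls
    relative to the player's own average score."""
    scores = np.asarray(scores, dtype=float)
    shortfall = np.minimum(scores - scores.mean(), 0.0)
    return float(np.mean(shortfall**2))

steady = [50, 52, 48, 51, 49]    # consistent batsman, mean 50
streaky = [120, 0, 5, 110, 15]   # same mean, wild swings
print(downside_semivariance(steady), downside_semivariance(streaky))
```

A consistency score would then reward the smaller semi-variance, to be combined with the super-efficiency DEA score in the linear program.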

Keywords: decision support systems, sports, super-efficient data envelopment analysis, semi-variance approach

Procedia PDF Downloads 379
293 Continuous Measurement of Spatial Exposure Based on Visual Perception in Three-Dimensional Space

Authors: Nanjiang Chen

Abstract:

In the backdrop of expanding urban landscapes, accurately assessing spatial openness is critical. Traditional visibility analysis methods grapple with discretization errors and inefficiencies, creating a gap in truly capturing the human experience of space. Addressing these gaps, this paper introduces a distinct continuous visibility algorithm, a leap in measuring urban spaces from a human-centric perspective. This study presents a methodological breakthrough by applying this algorithm to urban visibility analysis. Unlike conventional approaches, this technique allows for a continuous range of visibility assessment, closely mirroring human visual perception. By eliminating the need for predefined subdivisions in ray casting, it offers a more accurate and efficient tool for urban planners and architects. The proposed algorithm not only reduces computational errors but also demonstrates faster processing capabilities, validated through a case study in Beijing's urban setting. Its key distinction lies in its potential to benefit a broad spectrum of stakeholders, ranging from urban developers to public policymakers, aiding in the creation of urban spaces that prioritize visual openness and quality of life. This advancement in urban analysis methods could lead to more inclusive, comfortable, and well-integrated urban environments, enhancing the spatial experience for communities worldwide.
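The discretization error the abstract criticizes is easy to demonstrate. The sketch below is the conventional baseline the paper improves on, not the paper's continuous algorithm: it samples a fixed number of ray directions around a viewpoint in a toy 2-D angular geometry and compares the sampled visible fraction against the closed-form value.

```python
import math

def visible_fraction(blocked, n_rays):
    """Discrete ray casting: sample n_rays directions and count those not
    falling inside any blocked angular sector [a, b) (angles in radians)."""
    hits = 0
    for i in range(n_rays):
        theta = 2 * math.pi * i / n_rays
        if not any(a <= theta < b for a, b in blocked):
            hits += 1
    return hits / n_rays

blocked = [(0.3, 1.0)]                  # one obstacle spanning 0.7 rad
exact = 1 - 0.7 / (2 * math.pi)         # closed-form visible fraction

coarse = visible_fraction(blocked, 8)
fine = visible_fraction(blocked, 3600)
# Error shrinks as rays are added, but only at linearly growing cost --
# the motivation for a continuous (subdivision-free) formulation.
print(abs(coarse - exact), abs(fine - exact))
```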

Keywords: visual openness, spatial continuity, ray-tracing algorithms, urban computation

Procedia PDF Downloads 21
292 Principal Component Analysis on Colon Cancer Detection

Authors: N. K. Caecar Pratiwi, Yunendah Nur Fuadah, Rita Magdalena, R. D. Atmaja, Sofia Saidah, Ocky Tiaramukti

Abstract:

Colon cancer, or colorectal cancer, is a type of cancer that attacks the last part of the human digestive system. Lymphoma and carcinoma are types of cancer that attack the human colon. Colon cancer causes about half a million deaths every year. In Indonesia, colon cancer is the third most common cancer in women and the second in men. Unhealthy lifestyles, such as minimal consumption of fiber, rarely exercising, and a lack of awareness of early detection, are factors behind the high number of colon cancer cases. The aim of this project is to produce a system that can detect and classify images into the colon cancer types lymphoma and carcinoma, or normal. The designed system used 198 colon cancer tissue pathology images, consisting of 66 images of lymphoma, 66 images of carcinoma and 66 of normal/healthy colon tissue. The system classifies colon cancer starting from image preprocessing, through feature extraction using Principal Component Analysis (PCA), to classification using the K-Nearest Neighbor (K-NN) method. The stages in preprocessing are resizing, conversion of the RGB image to grayscale, edge detection and, finally, histogram equalization. Tests were done by trying several K-NN input parameter settings. The result of this project is an image processing system that can detect and classify the type of colon cancer with high accuracy and low computation time.
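The PCA-then-K-NN core of the pipeline can be sketched compactly. The histology images are not available here, so the code below runs on synthetic two-class feature vectors as a stand-in; the component count and k are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic stand-in: two "tissue classes" in a 64-D feature space
n, d = 100, 64
X = np.vstack([rng.normal(0.0, 1.0, size=(n, d)),
               rng.normal(1.0, 1.0, size=(n, d))])
y = np.array([0] * n + [1] * n)

# PCA via SVD of the centered data, projecting onto the top 5 components
Xc = X - X.mean(axis=0)
_, _, Vt = np.linalg.svd(Xc, full_matrices=False)
Z = Xc @ Vt[:5].T

def knn_predict(Z_train, y_train, z, k=5):
    """K-NN by majority vote; drops the nearest point (the query itself)
    so the evaluation below is leave-one-out."""
    idx = np.argsort(np.linalg.norm(Z_train - z, axis=1))[: k + 1][1:]
    return int(np.bincount(y_train[idx]).argmax())

preds = np.array([knn_predict(Z, y, Z[i]) for i in range(len(y))])
print(f"leave-one-out accuracy: {(preds == y).mean():.2f}")
```

On real data, the preprocessing stages (resize, grayscale, edge detection, histogram equalization) would produce the feature vectors that replace the synthetic X here.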

Keywords: carcinoma, colorectal cancer, k-nearest neighbor, lymphoma, principal component analysis

Procedia PDF Downloads 191
291 Fuzzy Inference-Assisted Saliency-Aware Convolution Neural Networks for Multi-View Summarization

Authors: Tanveer Hussain, Khan Muhammad, Amin Ullah, Mi Young Lee, Sung Wook Baik

Abstract:

The Big Data generated by distributed vision sensors installed on a large scale in smart cities creates hurdles to its efficient and beneficial exploration for browsing, retrieval, and indexing. This paper presents a three-fold framework for effective video summarization of such data, providing a compact and representative format for Big Video Data. In the first fold, the framework acquires input video data from the installed cameras and collects clues such as the type and count of objects and the clarity of the view from a chunk of a pre-defined number of frames from each view. In the second fold, the decision of representative view selection for a particular interval is based on a fuzzy inference system, yielding a precise and human-like decision reinforced by the known clues. In the third fold, the framework forwards the selected view frames to the summary generation mechanism, which is supported by a saliency-aware convolutional neural network (CNN) model. The combination of fuzzy rules for view selection followed by a CNN architecture for saliency computation makes the multi-view video summarization (MVS) framework a suitable candidate for real-world practice in smart cities.
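The fuzzy view-selection idea in the second fold can be sketched in plain Python: score each camera view from simple clues (object count, view clarity) with triangular memberships and a single min-AND rule. The membership shapes, rule, and camera names below are all illustrative assumptions, not the paper's rule base.

```python
def tri(x, a, b, c):
    """Triangular membership function rising from a, peaking at b, falling to c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def view_score(obj_count, clarity):
    """Rule: a view is representative IF it has many objects AND a clear view."""
    many = tri(obj_count, 2, 10, 18)    # membership in "many objects"
    clear = tri(clarity, 0.4, 1.0, 1.6) # membership in "clear view"
    return min(many, clear)             # min as the fuzzy AND

# Hypothetical per-interval clues: (object count, clarity in [0, 1])
views = {"cam_a": (8, 0.9), "cam_b": (3, 0.95), "cam_c": (12, 0.5)}
best = max(views, key=lambda v: view_score(*views[v]))
print(best)  # prints "cam_a": many objects AND a clear view
```

The frames of the winning view would then be passed to the saliency-aware CNN for summary generation.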

Keywords: big video data analysis, fuzzy logic, multi-view video summarization, saliency detection

Procedia PDF Downloads 171