Search results for: real-coded genetic algorithm
2875 SEM Image Classification Using CNN Architectures
Authors: Güzin Tirkeş, Özge Tekin, Kerem Kurtuluş, Y. Yekta Yurtseven, Murat Baran
Abstract:
A scanning electron microscope (SEM) is a type of electron microscope mainly used in nanoscience and nanotechnology. Automatic image recognition and classification are among the general areas of application concerning SEM. In line with these usages, the present paper proposes a deep learning algorithm that classifies SEM images into nine categories by means of an online application to simplify the process. The NFFA-EUROPE - 100% SEM data set, containing approximately 21,000 images, was used to train and test the algorithm at 80% and 20%, respectively. Validation was carried out using a separate data set obtained from the Middle East Technical University (METU) in Turkey. To increase the accuracy of the results, the Inception-ResNet-V2 model was used with a fine-tuning approach. A confusion matrix showed that the coated-surface category has a negative effect on accuracy, since it overlaps with other categories in the data set and thereby confuses the model when detecting category-specific patterns. For this reason, the coated-surface category was removed from the training data set, increasing accuracy up to 96.5%.
Keywords: convolutional neural networks, deep learning, image classification, scanning electron microscope
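To make the transfer-learning setup above concrete, the following minimal sketch shows how an Inception-ResNet-V2 backbone can be fine-tuned for a nine-class SEM classifier in TensorFlow/Keras. The head layers, learning rates, and two-stage freezing schedule are illustrative assumptions, not the authors' published code.

```python
# Minimal fine-tuning sketch, assuming TensorFlow/Keras and images grouped into
# one folder per category. Image size and head design are assumptions.
import tensorflow as tf

NUM_CLASSES = 9  # reduced to 8 once the coated-surface category is dropped

base = tf.keras.applications.InceptionResNetV2(
    weights="imagenet", include_top=False, input_shape=(299, 299, 3))
base.trainable = False  # first stage: train only the new classification head

model = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dropout(0.3),
    tf.keras.layers.Dense(NUM_CLASSES, activation="softmax"),
])
model.compile(optimizer=tf.keras.optimizers.Adam(1e-4),
              loss="categorical_crossentropy", metrics=["accuracy"])

# Second stage: unfreeze the backbone and continue at a much lower rate.
base.trainable = True
model.compile(optimizer=tf.keras.optimizers.Adam(1e-5),
              loss="categorical_crossentropy", metrics=["accuracy"])
```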
Procedia PDF Downloads 127
2874 Quality of Service Based Routing Algorithm for Real Time Applications in MANETs Using Ant Colony and Fuzzy Logic
Authors: Farahnaz Karami
Abstract:
Routing is an important, challenging task in mobile ad hoc networks due to node mobility, lack of central control, unstable links, and limited resources. Ant colony optimization has been found to be an attractive technique for routing in Mobile Ad Hoc Networks (MANETs). However, existing swarm intelligence based routing protocols find an optimal path by considering only one or two route selection metrics, without considering correlations among such parameters, which makes them unsuitable on their own for routing real time applications. Fuzzy logic can combine multiple route selection parameters containing uncertain or imprecise information, but it does not naturally provide multipath routing for load balancing. The objective of this paper is to design a routing algorithm using fuzzy logic and ant colony optimization that can solve several routing problems in mobile ad hoc networks: optimizing node energy consumption to increase network lifetime, reducing the link failure rate to increase packet delivery reliability, and providing load balancing to optimize available bandwidth. In the proposed algorithm, path information is given to a fuzzy inference system by the ants. Based on the available path information and the parameters required for quality of service (QoS), the fuzzy cost of each path is calculated and the optimal paths are selected. NS2.35 simulation tools are used for simulation, and the results are compared and evaluated against the newest QoS based algorithms in MANETs according to the packet delivery ratio, end-to-end delay, and routing overhead ratio criteria. The simulation results show significant improvement in the performance of these networks in terms of decreased end-to-end delay and routing overhead ratio, as well as increased packet delivery ratio.
Keywords: mobile ad hoc networks, routing, quality of service, ant colony, fuzzy logic
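As an illustration of the fuzzy path-cost idea described above, the sketch below maps the metrics reported by ants for each path to a single scalar cost using triangular membership functions. The specific metrics, membership breakpoints, and rule weights are assumptions for illustration, not the paper's design.

```python
# Illustrative fuzzy path-cost sketch: ants report per-path metrics, and a
# small rule base maps them to one cost. All breakpoints/weights are invented.

def tri(x, a, b, c):
    """Triangular membership function on [a, c] peaking at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def fuzzy_path_cost(delay_ms, residual_energy, hop_count):
    # Degree to which each metric is "bad" (higher membership -> higher cost).
    high_delay = tri(delay_ms, 50, 150, 300)
    low_energy = tri(residual_energy, 0.0, 0.2, 0.5)
    many_hops  = tri(hop_count, 4, 8, 12)
    # Weighted aggregation of rule activations (a simple Sugeno-style output).
    return 0.4 * high_delay + 0.4 * low_energy + 0.2 * many_hops

# Ants deposit more pheromone on paths with lower fuzzy cost:
paths = {"P1": (80, 0.6, 5), "P2": (200, 0.3, 9)}
best = min(paths, key=lambda p: fuzzy_path_cost(*paths[p]))
print(best)
```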
Procedia PDF Downloads 66
2873 A Monolithic Arbitrary Lagrangian-Eulerian Finite Element Strategy for Partly Submerged Solid in Incompressible Fluid with Mortar Method for Modeling the Contact Surface
Authors: Suman Dutta, Manish Agrawal, C. S. Jog
Abstract:
Accurate computation of hydrodynamic forces on floating structures and their deformation finds application in ocean and naval engineering and wave energy harvesting. This manuscript presents a monolithic finite element strategy for fluid-structure interaction involving hyper-elastic solids partly submerged in an incompressible fluid. A velocity-based Arbitrary Lagrangian-Eulerian (ALE) formulation has been used for the fluid, and a displacement-based Lagrangian approach has been used for the solid. The flexibility of the ALE technique permits us to treat the free surface of the fluid as a Lagrangian entity. At the interface, the continuity of displacement, velocity and traction is enforced using the mortar method. In the mortar method, the constraints are enforced in a weak sense using the Lagrange multiplier method. In the literature, the mortar method has been shown to be robust in solving various contact mechanics problems. The time-stepping strategy used in this work reduces to the generalized trapezoidal rule in the Eulerian setting. In the Lagrangian limit, in the absence of external load, the algorithm conserves the linear and angular momentum and the total energy of the system. The use of monolithic coupling with an energy-conserving time-stepping strategy gives an unconditionally stable algorithm and allows the user to take large time steps. All the governing equations and boundary conditions have been mapped to the reference configuration. The use of the exact tangent stiffness matrix ensures that the algorithm converges quadratically within each time step. The robustness and good performance of the proposed method are demonstrated by solving benchmark problems from the literature.
Keywords: ALE, floating body, fluid-structure interaction, monolithic, mortar method
Procedia PDF Downloads 277
2872 Integrating Natural Language Processing (NLP) and Machine Learning in Lung Cancer Diagnosis
Authors: Mehrnaz Mostafavi
Abstract:
The assessment and categorization of incidental lung nodules present a considerable challenge in healthcare, often necessitating resource-intensive multiple computed tomography (CT) scans for growth confirmation. This research addresses this issue by introducing a distinct computational approach leveraging radiomics and deep-learning methods. However, understanding local services is essential before implementing these advancements. With diverse tracking methods in place, there is a need for efficient and accurate identification approaches, especially in the context of managing lung nodules alongside pre-existing cancer scenarios. This study explores the integration of text-based algorithms in medical data curation, indicating their efficacy in conjunction with machine learning and deep-learning models for identifying lung nodules. Combining medical images with text data has demonstrated superior data retrieval compared to using each modality independently. While deep learning and text analysis show potential in detecting previously missed nodules, challenges persist, such as increased false positives. The presented research introduces a Structured-Query-Language (SQL) algorithm designed for identifying pulmonary nodules in a tertiary cancer center, externally validated at another hospital. Leveraging natural language processing (NLP) and machine learning, the algorithm categorizes lung nodule reports based on sentence features, aiming to facilitate research and assess clinical pathways. The hypothesis posits that the algorithm can accurately identify lung nodule CT scans and predict concerning nodule features using machine-learning classifiers. Through a retrospective observational study spanning a decade, CT scan reports were collected, and an algorithm was developed to extract and classify data. Results underscore the complexity of lung nodule cohorts in cancer centers, emphasizing the importance of careful evaluation before assuming a metastatic origin. The SQL and NLP algorithms demonstrated high accuracy in identifying lung nodule sentences, indicating potential for local service evaluation and research dataset creation. Machine-learning models exhibited strong accuracy in predicting concerning changes in lung nodule scan reports. While limitations include variability in disease group attribution, the potential for correlation rather than causality in clinical findings, and the need for further external validation, the algorithm's accuracy and potential to support clinical decision-making and healthcare automation represent a significant stride in lung nodule management and research.
Keywords: lung cancer diagnosis, structured-query-language (SQL), natural language processing (NLP), machine learning, CT scans
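A hedged sketch of the sentence-level classification step described above follows: report sentences are vectorized and a classifier separates nodule-related sentences from the rest. The sample sentences, labels, and the TF-IDF/logistic-regression choice are illustrative assumptions, not the study's validated pipeline.

```python
# Toy sentence-level report classification: label sentences as nodule-related
# or not, then train a classifier on sentence features. Data is invented.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

sentences = [
    "A 6 mm pulmonary nodule is seen in the right upper lobe.",
    "The heart size is normal.",
    "Interval growth of the left lower lobe nodule is concerning.",
    "No pleural effusion is identified.",
]
labels = [1, 0, 1, 0]  # 1 = nodule-related sentence

clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
clf.fit(sentences, labels)
print(clf.predict(["Stable 4 mm nodule in the lingula."]))
```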
Procedia PDF Downloads 106
2871 Hyperspectral Data Classification Algorithm Based on the Deep Belief and Self-Organizing Neural Network
Authors: Li Qingjian, Li Ke, He Chun, Huang Yong
Abstract:
In this paper, a method combining the Pohl Seidman deep belief network with a self-organizing neural network is proposed to classify targets. The method is mainly aimed at the high nonlinearity of hyperspectral images, the high dimensionality of samples, and the difficulty of designing a classifier. The main features of the original data are extracted by the deep belief network; during feature extraction, known labeled samples are added to fine-tune the network and enrich the main characteristics. The extracted feature vectors are then classified by the self-organizing neural network. This method can effectively reduce the dimensionality of the data in the spectral dimension while preserving a large amount of the raw data information, and it addresses the long training times from which traditional clustering and deep learning algorithms suffer when labeled samples are scarce, improving classification accuracy and robustness. Data simulations show that the proposed network structure achieves higher classification precision when only a small number of labeled samples is available.
Keywords: DBN, SOM, pattern classification, hyperspectral, data compression
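The two-stage pipeline can be sketched as follows: an RBM layer (standing in here for the full deep belief network) compresses the spectral dimension, and a small self-organizing map clusters the extracted features. Data shapes, grid size, and hyperparameters are illustrative assumptions.

```python
# Sketch: RBM feature extraction, then a tiny handwritten SOM. Toy data only.
import numpy as np
from sklearn.neural_network import BernoulliRBM

X = np.random.rand(200, 120)   # 200 pixels x 120 spectral bands (toy data)
rbm = BernoulliRBM(n_components=30, learning_rate=0.05, n_iter=20, random_state=0)
F = rbm.fit_transform(X)       # compressed spectral features

# Tiny SOM: a 4x4 grid of weight vectors trained by winner-take-all updates.
grid = np.random.rand(4, 4, F.shape[1])
for t, f in enumerate(F[np.random.permutation(len(F))]):
    lr = 0.5 * (1 - t / len(F))                     # decaying learning rate
    d = np.linalg.norm(grid - f, axis=2)            # distance to every node
    bi, bj = np.unravel_index(d.argmin(), d.shape)  # best-matching unit
    grid[bi, bj] += lr * (f - grid[bi, bj])         # move winner toward sample

# Assign a sample to the node whose weight vector it is closest to.
cluster = np.unravel_index(np.linalg.norm(grid - F[0], axis=2).argmin(), (4, 4))
print(cluster)
```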
Procedia PDF Downloads 343
2870 Semi-Supervised Hierarchical Clustering Given a Reference Tree of Labeled Documents
Authors: Ying Zhao, Xingyan Bin
Abstract:
Semi-supervised clustering algorithms have been shown to be effective in improving the clustering process even with limited supervision. However, semi-supervised hierarchical clustering remains challenging due to the complexity of expressing constraints for agglomerative clustering algorithms. This paper proposes novel semi-supervised agglomerative clustering algorithms that build a hierarchy based on a known reference tree. We prove that by enforcing distance constraints defined by a reference tree during the process of hierarchical clustering, the resultant tree is guaranteed to be consistent with the reference tree. We also propose a framework that allows the hierarchical tree generation to be aware of the levels of the agglomerative tree under creation, so that metric weights can be learned and adopted at each level in a recursive fashion. The experimental evaluation shows that the additional cost of our constraint-based semi-supervised hierarchical agglomerative clustering (HAC) algorithm is negligible, and our combined semi-supervised HAC algorithm outperforms the state-of-the-art algorithms on real-world datasets. The experiments also show that our proposed methods can improve clustering performance even with a small amount of unevenly distributed labeled data.
Keywords: semi-supervised clustering, hierarchical agglomerative clustering, reference trees, distance constraints
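One way to picture the distance-constraint idea is sketched below: pairs of labeled documents whose lowest common ancestor in the reference tree is shallow are pushed apart in the distance matrix before a standard agglomerative run. The LCA-depth encoding and penalty form are assumptions made for illustration, not the paper's proven construction.

```python
# Sketch: inflate distances for labeled pairs that should separate near the
# root of the reference tree, then run ordinary average-linkage HAC.
import numpy as np
from scipy.cluster.hierarchy import linkage

def apply_reference_constraints(D, lca_depth, max_depth, penalty=10.0):
    """Inflate distances for pairs whose LCA in the reference tree is shallow."""
    D = D.copy()
    n = D.shape[0]
    for i in range(n):
        for j in range(i + 1, n):
            if lca_depth[i, j] >= 0:  # both documents are labeled
                # Shallow LCA (near root) => documents should stay far apart.
                D[i, j] = D[j, i] = D[i, j] + penalty * (
                    1.0 - lca_depth[i, j] / max_depth)
    return D

D = np.random.rand(6, 6); D = (D + D.T) / 2; np.fill_diagonal(D, 0)
lca = np.full((6, 6), -1); lca[0, 1] = lca[1, 0] = 3   # deep LCA: keep close
Dc = apply_reference_constraints(D, lca, max_depth=4)
Z = linkage(Dc[np.triu_indices(6, 1)], method="average")  # condensed form
```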
Procedia PDF Downloads 551
2869 A Fuzzy Multiobjective Model for Bed Allocation Optimized by Artificial Bee Colony Algorithm
Authors: Jalal Abdulkareem Sultan, Abdulhakeem Luqman Hasan
Abstract:
With the growing competition among health care systems, hospitals face more and more pressure. Meanwhile, resource allocation has a vital effect on achieving competitive advantages in hospitals, and selecting the appropriate number of beds is one of the most important decisions in hospital management. In real situations, however, bed allocation is a multiple-objective problem over different items with vagueness and randomness in the data, and it is very complex. Hence, research on the bed allocation problem that considers multiple departments, nursing hours, and stochastic information about patient arrivals and service is relatively scarce. In this paper, we develop a fuzzy multiobjective bed allocation model that handles uncertainty and multiple departments. Fuzzy objectives and weights are applied simultaneously to help managers select suitable numbers of beds for the different departments. The proposed model is solved using the Artificial Bee Colony (ABC) algorithm, which is a very effective algorithm. The paper describes an application of the model to a public hospital in Iraq. The results indicate that the fuzzy multi-objective model provides a suitable framework for bed allocation and optimal bed utilization.
Keywords: bed allocation problem, fuzzy logic, artificial bee colony, multi-objective optimization
Procedia PDF Downloads 329
2868 Design and Field Programmable Gate Array Implementation of Radio Frequency Identification for Boosting up Tag Data Processing
Authors: G. Rajeshwari, V. D. M. Jabez Daniel
Abstract:
Radio Frequency Identification (RFID) systems are used for automated identification in various applications such as automobiles, health care and security; RFID is also called an automated data collection technology. RFID readers are placed in an area to scan large numbers of tags over a wide distance. The placement of the RFID elements may result in several types of collisions, and a major challenge in RFID systems is collision avoidance. In previous works, collisions were avoided by using algorithms such as ALOHA and the tree algorithm. This work proposes collision reduction and increased throughput through a reading enhancement method combined with the tree algorithm. The reading enhancement is achieved by improving the interrogation procedure and increasing the data handling capacity of the RFID reader with parallel processing. The design is written in Verilog and simulated using Xilinx ISE 14.5. By implementing this in the RFID system, we are able to achieve high throughput and avoid collisions at the reader at the same instant of time, increasing the overall system efficiency.
Keywords: antenna, anti-collision protocols, data management system, reader, reading enhancement, tag
Procedia PDF Downloads 307
2867 The Pigeon Circovirus Evolution and Epidemiology under Conditions of One Loft Race Rearing System: The Preliminary Results
Authors: Tomasz Stenzel, Daria Dziewulska, Ewa Łukaszuk, Joy Custer, Simona Kraberger, Arvind Varsani
Abstract:
Viral diseases, especially those leading to impairment of the immune system, are among the most important problems in avian pathology. However, little data is available on this subject for species other than commercial poultry. Recently, increasing attention has been paid to racing pigeons, which have been refined for many years in terms of their ability to return to their place of origin. Currently, these birds are used for races over distances from 100 to 1000 km, and winning pigeons are highly valuable. The rearing system of racing pigeons contradicts the principles of biosecurity, as birds originating from various breeding facilities are commonly transported and reared in "One Loft Race" (OLR) facilities. This favors the spread of multiple infections and provides conditions for the development of novel variants of various pathogens through recombination. One of the most significant viruses occurring in this avian species is the pigeon circovirus (PiCV), which is detected in ca. 70% of pigeons. Circoviruses are characterized by vast genetic diversity, which is due to, among other things, the phenomenon of recombination: an exchange of fragments of genetic material among various strains of the virus during the infection of one organism. The rate and intensity of the development of PiCV recombinants have not been determined so far. For this reason, an experiment was performed to investigate the frequency with which novel PiCV recombinants develop in racing pigeons kept in OLR-type conditions. 15 racing pigeons originating from 5 different breeding facilities, subclinically infected with various PiCV strains, were housed in one room for eight weeks, which was intended to mimic the conditions of OLR rearing. Blood and swab samples were collected from the birds every seven days to recover complete PiCV genomes, which were amplified through Rolling Circle Amplification (RCA), cloned, sequenced, and subjected to bioinformatic analyses aimed at determining the genetic diversity and the dynamics of the recombination phenomenon among the viruses. In addition, the virus shedding rate/level of viremia, the expression of IFN-γ and interferon-related genes, and anti-PiCV antibodies were determined to enable a complete analysis of the course of infection in the flock. Initial results showed that 336 full PiCV genomes were obtained, exhibiting nucleotide similarity ranging from 86.6 to 100%, and 8 of those were recombinants originating from viruses from different lofts of origin. The first recombinant appeared after seven days of the experiment, but most of the recombinants appeared after 14 and 21 days of joint housing. The level of viremia and virus shedding was highest in the 2nd week of the experiment and gradually decreased towards the end, which partially corresponded with Mx1 gene expression and antibody dynamics. The results show that the OLR pigeon-rearing system could play a significant role in spreading infectious agents such as circoviruses and in contributing to PiCV evolution through recombination. Therefore, it is worth considering whether a popular gambling game such as pigeon racing is sensible from both an animal welfare and an epidemiological point of view.
Keywords: pigeon circovirus, recombination, evolution, one loft race
Procedia PDF Downloads 74
2866 2D Hexagonal Cellular Automata: The Complexity of Forms
Authors: Vural Erdogan
Abstract:
We created two-dimensional hexagonal cellular automata to obtain complexity using simple rules similar to those of Conway's Game of Life. Considering the Game of Life rules, Wolfram's work on life-like structures, and John von Neumann's self-replication, self-maintenance, and self-reproduction problems, we developed 2-state and 3-state hexagonal growing algorithms that reach large populations from random initial states. Unlike the Game of Life, we used six-neighbourhood cellular automata instead of eight or four neighbourhoods. First simulations examined whether we were able to obtain oscillators, blinkers, and gliders. Inspired by the complexity of Wolfram's 1D cellular automata and by life-like structures, we simulated 2D synchronous, discrete, deterministic cellular automata to reach life-like forms with 2-state cells. We explain how the life-like formations and the oscillators contribute to initiating self-maintenance together with self-reproduction and self-replication. After comparing simulation results, we developed the algorithm a step further: appending a new state to the same algorithm that produced the life-like structures led us to experiment with new branching and fractal forms. All these studies attempt to demonstrate that complex life forms might come from uncomplicated rules.
Keywords: hexagonal cellular automata, self-replication, self-reproduction, self-maintenance
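A compact sketch of one synchronous update of a 2-state hexagonal automaton, using axial coordinates so that every cell has exactly six neighbours, is given below. The birth/survival thresholds are placeholders rather than the rules used in the study.

```python
# Sketch of a 2-state hexagonal CA step in axial coordinates; rules are toy.
from collections import Counter

HEX_DIRS = [(1, 0), (-1, 0), (0, 1), (0, -1), (1, -1), (-1, 1)]

def step(alive, birth={2}, survive={1, 2}):
    """One synchronous update of the set of live cells."""
    counts = Counter(
        (q + dq, r + dr) for (q, r) in alive for (dq, dr) in HEX_DIRS)
    return {c for c, n in counts.items()
            if (n in birth and c not in alive) or (n in survive and c in alive)}

cells = {(0, 0), (1, 0), (0, 1)}
for _ in range(10):
    cells = step(cells)
print(len(cells))
```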
Procedia PDF Downloads 155
2865 Framework for Detecting External Plagiarism from Monolingual Documents: Use of Shallow NLP and N-Gram Frequency Comparison
Authors: Saugata Bose, Ritambhra Korpal
Abstract:
The internet has increased copy-paste behaviour among students as well as researchers, leading to documents plagiarized to different degrees. For this reason, much research has focused on detecting plagiarism automatically. In this paper, an initiative is discussed in which Natural Language Processing (NLP) techniques and supervised machine learning algorithms are combined to detect plagiarized texts. The major emphasis is on constructing a framework that successfully detects external plagiarism in monolingual texts. To detect plagiarism, an n-gram frequency comparison approach has been implemented to construct the model framework. The framework is based on 120 characteristics extracted while pre-processing the documents using the NLP approach. Afterwards, filter metrics are applied to select the most relevant characteristics, and then a supervised classification learning algorithm is used to classify the documents into four levels of plagiarism. A confusion matrix was built to estimate the false positives and false negatives. Our plagiarism framework achieved a very high accuracy score.
Keywords: lexical matching, shallow NLP, supervised machine learning algorithm, word n-gram
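The n-gram frequency comparison at the core of the framework can be sketched as below: build word n-gram profiles for a suspicious document and a source document and score their overlap. The containment measure and n=3 are illustrative choices, not the paper's exact 120-characteristic design.

```python
# Word n-gram containment sketch; the score is one feature among many that
# would feed a supervised classifier of plagiarism level.
from collections import Counter

def ngrams(text, n=3):
    words = text.lower().split()
    return Counter(tuple(words[i:i + n]) for i in range(len(words) - n + 1))

def containment(suspicious, source, n=3):
    """Fraction of the suspicious document's n-grams found in the source."""
    s, t = ngrams(suspicious, n), ngrams(source, n)
    overlap = sum(min(c, t[g]) for g, c in s.items())
    return overlap / max(1, sum(s.values()))

score = containment("the quick brown fox jumps over the lazy dog",
                    "a quick brown fox jumps over a sleeping dog")
print(score)
```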
Procedia PDF Downloads 360
2864 Cost Sensitive Feature Selection in Decision-Theoretic Rough Set Models for Customer Churn Prediction: The Case of Telecommunication Sector Customers
Authors: Emel Kızılkaya Aydogan, Mihrimah Ozmen, Yılmaz Delice
Abstract:
The telecommunications sector in the global market is undergoing continual change and development. In this sector, churn analysis techniques are commonly used to analyse why some customers terminate their service subscriptions prematurely. Customer churn is of the utmost significance in this sector, since it causes substantial business losses, and many companies conduct research in order to prevent losses while increasing customer loyalty. Although a large quantity of accumulated data is available in this sector, its usefulness is limited by data quality and relevance. In this paper, a cost-sensitive feature selection framework is developed with the aim of obtaining the feature reducts needed to predict customer churn. The framework is a cost-based, optional pre-processing stage that removes redundant features for churn management. This cost-based feature selection algorithm is applied to data from a telecommunication company in Turkey, and the results obtained with the algorithm are reported.
Keywords: churn prediction, data mining, decision-theoretic rough set, feature selection
Procedia PDF Downloads 449
2863 Application of Random Forest Model in the Prediction of River Water Quality
Authors: Turuganti Venkateswarlu, Jagadeesh Anmala
Abstract:
Excessive runoff from various non-point source land uses and other point sources is rapidly contaminating the water quality of streams in the Upper Green River watershed, Kentucky, USA. It is essential to maintain the stream water quality, as the river basin is one of the major freshwater sources in this province. It is also important to understand the water quality parameters (WQPs) quantitatively and qualitatively, along with their important features, as stream water is sensitive to climatic events and land-use practices. In this paper, a model was developed for predicting one of the significant WQPs, Fecal Coliform (FC), from precipitation, temperature, the urban land use factor (ULUF), the agricultural land use factor (ALUF), and the forest land-use factor (FLUF) using the Random Forest (RF) algorithm. The RF model, a novel ensemble learning algorithm, can also extract advanced feature importance characteristics from the given model inputs for different combinations. The model's outcomes showed a good correlation between FC and the climate and land use factors (R² = 0.94), with precipitation and temperature being the primary influencing factors for FC.
Keywords: water quality, land use factors, random forest, fecal coliform
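A minimal sketch of the modelling step, assuming scikit-learn and the five predictors named above, is shown below; the synthetic data and hyperparameters are invented for illustration.

```python
# Random Forest regression sketch with synthetic stand-ins for watershed data.
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
df = pd.DataFrame({
    "precipitation": rng.gamma(2.0, 10.0, 300),
    "temperature":   rng.normal(15, 8, 300),
    "ULUF": rng.random(300), "ALUF": rng.random(300), "FLUF": rng.random(300),
})
# Toy response loosely tied to precipitation and temperature, as in the abstract.
df["fecal_coliform"] = (30 * df["precipitation"] + 20 * df["temperature"]
                        + 100 * df["ULUF"] + rng.normal(0, 25, 300))

X = df[["precipitation", "temperature", "ULUF", "ALUF", "FLUF"]]
y = df["fecal_coliform"]
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=42)
rf = RandomForestRegressor(n_estimators=500, random_state=42).fit(X_tr, y_tr)

print("R^2:", rf.score(X_te, y_te))
print(dict(zip(X.columns, rf.feature_importances_)))  # feature importances
```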
Procedia PDF Downloads 200
2862 Association of a Genetic Polymorphism in Cytochrome P450, Family 1 with Risk of Developing Esophagus Squamous Cell Carcinoma
Authors: Soodabeh Shahid Sales, Azam Rastgar Moghadam, Mehrane Mehramiz, Malihe Entezari, Kazem Anvari, Mohammad Sadegh Khorrami, Saeideh Ahmadi Simab, Ali Moradi, Seyed Mahdi Hassanian, Majid Ghayour-Mobarhan, Gordon A. Ferns, Amir Avan
Abstract:
Background: Esophageal cancer has been reported as the eighth most common cancer worldwide and the seventh leading cause of cancer-related death in men. Recent studies have revealed that cytochrome P450, family 1, subfamily B, polypeptide 1 (CYP1B1), which plays a role in metabolizing xenobiotics, is associated with different cancers. Therefore, in the present study, we investigated the impact of CYP1B1-rs1056836 on esophageal squamous cell carcinoma (ESCC) patients. Method: 317 subjects, with and without ESCC, were recruited. DNA was extracted and genotyped via a real-time PCR-based TaqMan assay. Kaplan-Meier curves were used to assess overall and progression-free survival. To evaluate the relationships between patients' clinicopathological data, genotypic frequencies, disease prognosis, and patient survival, Pearson chi-square and t-tests were used. Logistic regression was used to assess the association between genotypes and the risk of ESCC. Results: The genotypic frequencies for GG, GC, and CC were 58.6%, 29.8%, and 11.5%, respectively, in the healthy group, and 51.8%, 36.14%, and 12% in the ESCC group. With respect to the recessive genetic inheritance model, an association between the GG genotype and the stage of ESCC was found. Statistically significant results were not found for this variant and the risk of ESCC. Patients with the GG genotype had a decreased risk of nodal metastasis in comparison with patients with the CC/CG genotype, although this link was not statistically significant. Conclusion: Our findings suggest CYP1B1-rs1056836 as a potential biomarker for ESCC patients, supporting further studies in larger populations of different ethnic groups. Moreover, further investigations are warranted to evaluate the association of this emerging marker with dietary intake and lifestyle.
Keywords: cytochrome P450, esophagus squamous cell carcinoma, dietary intake, lifestyle
Procedia PDF Downloads 203
2861 Training of Future Computer Science Teachers Based on Machine Learning Methods
Authors: Meruert Serik, Nassipzhan Duisegaliyeva, Danara Tleumagambetova
Abstract:
The article highlights and describes the characteristic features of real-time face detection in images and videos using machine learning algorithms. Students of the educational programs "6B01511-Computer Science", "7M01511-Computer Science", "7M01525-STEM Education", and "8D01511-Computer Science" of L.N. Gumilyov Eurasian National University reviewed the research work. As a result, the advantages and disadvantages of the Haar cascade (OpenCV Haar Cascade), HoG SVM (Histogram of Oriented Gradients with a Support Vector Machine), and MMOD CNN Dlib (Max-Margin Object Detection with a convolutional neural network) detectors used for face detection were determined. Dlib is a general-purpose cross-platform software library written in the C++ programming language; it includes detectors used for face detection. The Haar cascade OpenCV algorithm is efficient for fast face detection. The work considered here forms the basis for the development of machine learning methods by future computer science teachers.
Keywords: algorithm, artificial intelligence, education, machine learning
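Of the three detectors compared, the Haar cascade is the simplest to demonstrate; the sketch below uses the frontal-face classifier file bundled with OpenCV. The image path is a placeholder.

```python
# Haar cascade face detection sketch using OpenCV's bundled classifier.
import cv2

cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
img = cv2.imread("class_photo.jpg")   # placeholder path; supply your own image
if img is None:
    raise SystemExit("provide an input image")
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

# scaleFactor and minNeighbors trade detection speed against false positives.
faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
for (x, y, w, h) in faces:
    cv2.rectangle(img, (x, y), (x + w, y + h), (0, 255, 0), 2)
cv2.imwrite("detected.jpg", img)
```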
Procedia PDF Downloads 75
2860 Sinusoidal Roughness Elements in a Square Cavity
Authors: Muhammad Yousaf, Shoaib Usman
Abstract:
Numerical studies were conducted using the Lattice Boltzmann Method (LBM) to study natural convection in a square cavity in the presence of roughness. An algorithm based on the single-relaxation-time Bhatnagar-Gross-Krook (BGK) model of the LBM was developed. Roughness was introduced on both the hot and cold walls in the form of sinusoidal roughness elements. The study was conducted for a Newtonian fluid with a Prandtl number (Pr) of 1.0, and the Rayleigh number (Ra) was explored from 10³ to 10⁶ in the laminar regime. The thermal and hydrodynamic behavior of the fluid was analyzed using a differentially heated square cavity with roughness elements present on both the hot and cold walls. Neumann boundary conditions were imposed on the horizontal walls, with the vertical walls isothermal; the roughness elements were given the same boundary condition as the corresponding walls. The computational algorithm was validated against previous benchmark studies performed with different numerical methods, and good agreement was found. Results indicate that the maximum reduction in the average heat transfer was 16.66 percent, at a Rayleigh number of 10⁵.
Keywords: Lattice Boltzmann method, natural convection, Nusselt number, Rayleigh number, roughness
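The single-relaxation-time collision-and-stream update at the heart of the method can be sketched on a D2Q9 lattice as follows; the buoyancy (thermal) coupling and rough-wall boundary treatment of the study are omitted for brevity, so this fragment only relaxes an isothermal periodic fluid.

```python
# Minimal D2Q9 BGK collide-and-stream sketch (periodic, isothermal).
import numpy as np

# D2Q9 lattice velocities and weights
c = np.array([(0,0),(1,0),(0,1),(-1,0),(0,-1),(1,1),(-1,1),(-1,-1),(1,-1)])
w = np.array([4/9] + [1/9]*4 + [1/36]*4)
tau, N = 0.6, 64
f = np.ones((9, N, N)) * w[:, None, None]   # start from a fluid at rest

def equilibrium(rho, ux, uy):
    cu = c[:, 0, None, None]*ux + c[:, 1, None, None]*uy
    usq = ux**2 + uy**2
    return rho * w[:, None, None] * (1 + 3*cu + 4.5*cu**2 - 1.5*usq)

for _ in range(100):
    rho = f.sum(axis=0)
    ux = (f * c[:, 0, None, None]).sum(axis=0) / rho
    uy = (f * c[:, 1, None, None]).sum(axis=0) / rho
    f += -(f - equilibrium(rho, ux, uy)) / tau        # BGK collision
    for k in range(9):                                # periodic streaming
        f[k] = np.roll(np.roll(f[k], c[k, 0], axis=1), c[k, 1], axis=0)
```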
Procedia PDF Downloads 529
2859 Community Engagement: Experience from the SIREN Study in Sub-Saharan Africa
Authors: Arti Singh, Carolyn Jenkins, Oyedunni S. Arulogun, Mayowa O. Owolabi, Fred S. Sarfo, Bruce Ovbiagele, Enzinne Sylvia
Abstract:
Background: Stroke, the leading cause of adult-onset disability and the second leading cause of death, is a major public health concern, particularly pertinent in Sub-Saharan Africa (SSA), where nearly 80% of all global stroke mortalities occur. The Stroke Investigative Research and Education Network (SIREN) seeks to comprehensively characterize the genomic, sociocultural, economic, and behavioral risk factors for stroke and to build effective teams for research to address and decrease the burden of stroke and other non-communicable diseases in SSA. One of the first steps towards this goal was to effectively engage the communities that suffer the high burden of disease in SSA. This study describes how the SIREN project engaged six sites in Ghana and Nigeria over the past three years, describing the community engagement activities that have arisen since inception. Aim: The aim of community engagement (CE) within SIREN is to elicit information about knowledge, attitudes, beliefs, and practices (KABP) regarding stroke and its risk factors from individuals of African ancestry in SSA, and to educate the community about stroke and ways to decrease disability and death from stroke using socioculturally appropriate messaging and messengers. Methods: Community Advisory Boards (CABs), Focus Group Discussions (FGDs) and community outreach programs. Results: 27 FGDs with 168 participants, including community heads, religious leaders, health professionals and individuals with stroke, among others, were conducted, and over 60 CE outreaches have been conducted within the SIREN performance sites. Over 5,900 individuals have received education on cardiovascular risk factors, and about 5,000 have been screened for cardiovascular risk factors during the outreaches. FGDs and outreach programs indicate that knowledge of stroke, as well as of risk factors and follow-up evidence-based care, is limited and often late. Other findings include: 1) Most recognize hypertension as a major risk factor for stroke. 2) About 50% report that stroke is hereditary, and about 20% do not know the organs affected by stroke. 3) More than 95% are willing to participate in genetic testing research, and about 85% are willing to pay for testing and recommend the test to others. 4) Almost all indicated that genetic testing could help health providers better treat stroke and help scientists better understand its causes. The CABs provided stakeholder input into SIREN activities and facilitated collaborations among investigators, community members and stakeholders. Conclusion: The CE core within SIREN is a first-of-its-kind public outreach engagement initiative to evaluate and address perceptions about stroke and genomics among patients, caregivers, and local leaders in SSA, and it has implications as a model for assessment in other high-stroke-risk populations. SIREN's CE program uses best practices to build capacity for community-engaged research, accelerate the integration of research findings into practice, and strengthen dynamic community-academic partnerships within our communities. CE has had several major successes over the past three years, including our multi-site collaboration examining the KABP about stroke (symptoms, risk factors, burden) and genetic testing across SSA.
Keywords: community advisory board, community engagement, focus groups, outreach, SSA, stroke
Procedia PDF Downloads 432
2858 Optimization of Traffic Agent Allocation for Minimizing Bus Rapid Transit Cost on Simplified Jakarta Network
Authors: Gloria Patricia Manurung
Abstract:
The Jakarta Bus Rapid Transit (BRT) system, established in 2009 to reduce private vehicle usage and ease the rush hour gridlock throughout the Greater Jakarta area, has failed to achieve its purpose. With the number of privately owned vehicles gradually increasing and road space reduced by BRT lane construction, private vehicle users intuitively invade the exclusive BRT lanes, creating local traffic along the BRT network. The cost of an invaded BRT lane becomes the same as that of the general road network, making the BRT, which is supposed to be the city's main public transport, unreliable. Efforts have been made to guard critical lanes against invasion by allocating traffic agents at several intersections, leading to improved congestion levels along the lanes. Given a fixed number of traffic agents, this study uses an analytical approach to find the best deployment strategy for traffic agents on a simplified Jakarta road network, minimizing the BRT link cost, which is expected to improve the time reliability of the BRT system. A user-equilibrium traffic assignment model is used to reproduce the origin-destination demand flow on the network, and the optimum solution can conventionally be obtained with a brute-force algorithm. This method's main constraint is that traffic assignment simulation time escalates exponentially with the number of agents and the network size. Our proposed metaheuristic and heuristic algorithms exhibit only a linear increase in simulation time and yield minimized BRT costs that approach the brute-force optimum. Further analysis of the overall network link cost should be performed to see the impact of traffic agent deployment on the network system.
Keywords: traffic assignment, user equilibrium, greedy algorithm, optimization
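An illustrative greedy sketch of the deployment idea named in the keywords: repeatedly station the next agent at the intersection whose guarding yields the largest drop in total BRT link cost. The cost function below is a toy stand-in for a full user-equilibrium assignment run.

```python
# Greedy agent deployment sketch; costs are invented placeholders.
import random

random.seed(0)
CANDIDATES = {"A", "B", "C", "D", "E"}
BASE_SAVING = {c: random.uniform(1, 10) for c in CANDIDATES}  # toy savings

def total_brt_cost(guarded):
    """Stand-in for a user-equilibrium run: unguarded intersections add cost."""
    return sum(BASE_SAVING[c] for c in CANDIDATES - guarded)

def greedy_deployment(n_agents):
    guarded = set()
    for _ in range(n_agents):
        best = min(CANDIDATES - guarded,
                   key=lambda c: total_brt_cost(guarded | {c}))
        guarded.add(best)
    return guarded

print(greedy_deployment(2))   # e.g. the two highest-impact intersections
```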
Procedia PDF Downloads 233
2857 Intrusion Detection and Prevention System (IDPS) in Cloud Computing Using Anomaly-Based and Signature-Based Detection Techniques
Authors: John Onyima, Ikechukwu Ezepue
Abstract:
Virtualization and cloud computing are among the fastest-growing computing innovations in recent times. Organisations all over the world are moving their computing services towards the cloud because it rapidly transforms the organization's infrastructure, improves resource utilization efficiency, and reduces cost. However, this technology brings new security threats and challenges concerning safety, reliability and data confidentiality. Evidently, no single security technique can guarantee security or protection against malicious attacks on a cloud computing network; hence, an integrated model of an intrusion detection and prevention system is proposed. Anomaly-based and signature-based detection techniques are integrated to enable the network and its hosts to defend themselves with some level of intelligence. The anomaly-based detection was implemented using the local deviation factor graph-based (LDFGB) algorithm, while the signature-based detection was implemented using the Snort algorithm. Results from these collaborative intrusion detection and prevention techniques show a robust and efficient security architecture for cloud computing networks.
Keywords: anomaly-based detection, cloud computing, intrusion detection, intrusion prevention, signature-based detection
Procedia PDF Downloads 309
2856 An Improved Adaptive Dot-Shape Beamforming Algorithm Research on Frequency Diverse Array
Authors: Yanping Liao, Zenan Wu, Ruigang Zhao
Abstract:
Frequency diverse array (FDA) beamforming is a technology developed in recent years whose antenna pattern has a unique angle-distance-dependent characteristic. The beam is required to have strong concentration, high resolution and a low sidelobe level so as to form point-to-point interference in the concentrated set. In order to eliminate the angle-distance coupling of the traditional FDA and to concentrate the beam energy further, this paper adopts a multi-carrier FDA structure based on a proposed power-exponential frequency offset, improving the array structure and frequency offset of the traditional FDA. The simulation results show that the array's beam pattern can form a dot-shaped beam with more concentrated energy and improved resolution and sidelobe-level performance. However, the covariance matrix of the signal in the traditional adaptive beamforming algorithm is estimated from finite-time snapshot data. When the number of snapshots is limited, the algorithm suffers from an underestimation problem: estimation errors in the covariance matrix cause beam distortion, so that the output pattern cannot form a dot-shaped beam, and the pattern also exhibits main-lobe deviation and high sidelobe levels. Aiming at these problems, an adaptive beamforming technique based on exponential correction for the multi-carrier FDA is proposed to improve beamforming robustness. The steps are as follows. First, the beamforming of the multi-carrier FDA is formed under the linearly constrained minimum variance (LCMV) criterion. Then, an eigenvalue decomposition of the covariance matrix is performed to obtain the diagonal matrix composed of the interference subspace, the noise subspace, and the corresponding eigenvalues. Finally, a correction index is introduced to exponentially correct the small eigenvalues of the noise subspace, reducing their divergence and improving the beamforming performance. The theoretical analysis and simulation results show that the proposed algorithm allows the multi-carrier FDA to form a dot-shaped beam with limited snapshots, reduces the sidelobe level, improves the robustness of beamforming, and achieves better performance.
Keywords: adaptive beamforming, correction index, limited snapshot, multi-carrier frequency diverse array, robust
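The eigenvalue-correction step can be sketched in NumPy as follows: decompose the sample covariance, keep the dominant (interference) eigenvalues, exponentially compress the spread of the small noise-subspace eigenvalues, and form the LCMV/MVDR weights from the corrected matrix. The correction exponent and the subspace split are illustrative assumptions, not the paper's exact correction index.

```python
# Covariance eigenvalue correction sketch for snapshot-starved beamforming.
import numpy as np

def corrected_lcmv_weights(R, a, n_interf, alpha=0.5):
    """R: sample covariance; a: steering vector; n_interf: interference dim."""
    vals, vecs = np.linalg.eigh(R)                     # ascending eigenvalues
    noise, interf = vals[:-n_interf], vals[-n_interf:]
    noise = noise.mean() * (noise / noise.mean())**alpha  # compress spread
    Rc = (vecs * np.concatenate([noise, interf])) @ vecs.conj().T
    Ri = np.linalg.inv(Rc)
    return (Ri @ a) / (a.conj().T @ Ri @ a)            # MVDR-style weights

# Usage with a toy 8-element array and 20 snapshots (snapshot-starved case):
X = (np.random.randn(8, 20) + 1j*np.random.randn(8, 20)) / np.sqrt(2)
R = X @ X.conj().T / 20
w_vec = corrected_lcmv_weights(R, np.ones(8, dtype=complex), n_interf=2)
```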
Procedia PDF Downloads 132
2855 Design and Development of an Algorithm for Prioritizing Test Cases Using a Neural Network as Classifier
Authors: Amit Verma, Simranjeet Kaur, Sandeep Kaur
Abstract:
Test Case Prioritization (TCP) has gained widespread acceptance, as it often results in good-quality software free from defects. Due to the increasing rate of faults in software, traditional prioritization techniques result in increased cost and time. The main challenge in TCP is the difficulty of manually validating the priorities of different test cases due to the large size of test suites, and little emphasis has been placed on automating the TCP process. The objective of this paper is to determine the priorities of different test cases using an artificial neural network, which helps to predict the correct priorities with the help of the backpropagation algorithm. In our proposed work, one such method is implemented in which priorities are assigned to different test cases based on their frequency. After the priorities are assigned, the ANN predicts whether the correct priority has been assigned to every test case and generates an interrupt when a wrong priority is assigned. Classifiers are used to classify the test cases of different priorities. The proposed algorithm is very effective, as it reduces complexity with robust efficiency and automates the process of prioritizing test cases.
Keywords: test case prioritization, classification, artificial neural networks, TF-IDF
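A hedged sketch of the pipeline: test cases are vectorized, frequency-based priorities are assigned, and a small backpropagation-trained network (scikit-learn's MLPClassifier here) validates them, flagging mismatches as the "interrupt". The data, TF-IDF features, and bucket thresholds are invented for illustration.

```python
# TCP validation sketch: TF-IDF features + MLP (trained by backpropagation).
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.neural_network import MLPClassifier

test_cases = ["login with valid credentials", "login with invalid password",
              "checkout with empty cart", "checkout with expired card"]
fault_frequency = np.array([9, 7, 2, 5])   # how often each case found faults

# Frequency-based priority buckets: 0 = low, 1 = medium, 2 = high.
priority = np.digitize(fault_frequency, bins=[3, 6])

X = TfidfVectorizer().fit_transform(test_cases)
ann = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000,
                    random_state=0).fit(X, priority)

# Flag ("interrupt") any case whose predicted priority disagrees with the label.
mismatch = ann.predict(X) != priority
print(mismatch)
```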
Procedia PDF Downloads 400
2854 A Biologically Inspired Approach to Automatic Classification of Textile Fabric Prints Based On Both Texture and Colour Information
Authors: Babar Khan, Wang Zhijie
Abstract:
Machine vision has been playing a significant role in industrial automation, imitating a wide variety of human functions, providing improved safety, reduced labour cost, the elimination of human error and/or subjective judgments, and the creation of timely statistical product data. Despite intensive research, there have been no attempts to classify fabric prints based on printed texture and colour; most of the research so far encompasses only black-and-white or grey-scale images. We propose a biologically inspired processing architecture to classify fabrics with respect to print texture and colour. We created a texture descriptor based on the HMAX model for machine vision and incorporated a colour descriptor based on opponent colour channels, simulating the single-opponent and double-opponent neuronal functions of the brain. We found that our algorithm not only outperformed the original HMAX algorithm on the classification of fabric print texture and colour, but also achieved a recognition accuracy of 85-100% on fabrics of different colours and textures.
Keywords: automatic classification, texture descriptor, colour descriptor, opponent colour channel
Procedia PDF Downloads 488
2853 Comparison of the Isolation Rates and Characteristics of Salmonella Isolated from Antibiotic-Free and Conventional Chicken Meat Samples
Authors: Jin-Hyeong Park, Hong-Seok Kim, Jin-Hyeok Yim, Young-Ji Kim, Dong-Hyeon Kim, Jung-Whan Chon, Kun-Ho Seo
Abstract:
Salmonella contamination in chicken samples can cause major health problems in humans. However, not only the effects of antibiotic treatment during growth but also the impacts of poultry slaughter line on the prevalence of Salmonella in final chicken meat sold to consumers are unknown. In this study, we compared the isolation rates and antimicrobial resistance of Salmonella between antibiotic-free, conventional, conventional Korean native retail chicken meat samples and clonal divergence of Salmonella isolates by multilocus sequence typing. In addition, the distribution of extended-spectrum β-lactamase (ESBL) genes in ESBL-producing Salmonella isolates was analyzed. A total of 72 retail chicken meat samples (n = 24 antibiotic-free broiler [AFB] chickens, n = 24 conventional broiler [CB] chickens, and n = 24 conventional Korean native [CK] chickens) were collected from local retail markets in Seoul, South Korea. The isolation rates of Salmonella were 66.6% in AFB chickens, 45.8% in CB chickens, and 25% in CK chickens. By analyzing the minimum inhibitory concentrations of β-lactam antibiotics with the disc-diffusion test, we found that 81.2% of Salmonella isolates from AFB chickens, 63.6% of isolates from CB chickens, and 50% of isolates from CK chickens were ESBL producers; all ESBL-positive isolates had the CTX-M-15 genotype. Interestingly, all ESBL-producing Salmonella were revealed as ST16 by multilocus sequence typing. In addition, all CTX-M-15-positive isolates had the genetic platform of blaCTX-M gene (IS26-ISEcp1-blaCTX-M-15-IS903), to the best of our knowledge, this is the first report in Salmonella around the world. The Salmonella ST33 strain (S. Hadar) isolated in this study has never been reported in South Korea. In conclusion, our findings showed that antibiotic-free retail chicken meat products were also largely contaminated with ESBL-producing Salmonella and that their ESBL genes and genetic platforms were the same as those isolated from conventional retail chicken meat products.
Keywords: antibiotic-free poultry, conventional poultry, multilocus sequence typing, extended-spectrum β-lactamase, antimicrobial resistance
Procedia PDF Downloads 278
2852 Sociology of Vis and Ramin
Authors: Farzane Yusef Ghanbari
Abstract:
A sociological analysis of the ancient poem Vis and Ramin reveals important points about the political, cultural, and social conditions of ancient Iranian history. The reciprocal relationship between the work's effect and the structure of society aids in understanding and interpreting the work. Therefore, informed by Goldmann's genetic structuralism and through a glance at social epistemology, this study attempts to explain the role of the spell in shaping the social knowledge of ancient people. The results suggest that due to the lack of a central government, secularism in politics, and freedom of speech and opinion, such romantic stories as Vis and Ramin, with a focal female character, have emerged.
Keywords: Persian literature, Vis and Ramin, sociology, developmental structuralism
Procedia PDF Downloads 432
2851 A Mixture Vine Copula Structures Model for Dependence Wind Speed among Wind Farms and Its Application in Reactive Power Optimization
Authors: Yibin Qiu, Yubo Ouyang, Shihan Li, Guorui Zhang, Qi Li, Weirong Chen
Abstract:
This paper explores the impact of high-dimensional dependencies of wind speed among wind farms on probabilistic optimal power flow. To obtain the reactive power optimization faster and more accurately, a mixture vine copula structure model combining K-means clustering, C-vine copulas and D-vine copulas is proposed, through which a more accurate correlation model can be obtained. Moreover, a Modified Backtracking Search Algorithm (MBSA) and the three-point estimate method are applied to the probabilistic optimal power flow. The validity of the mixture vine copula structure model and the MBSA are tested on the IEEE 30-bus system with measured data from three adjacent wind farms in a certain area, and the results indicate the effectiveness of these methods.
Keywords: mixture vine copula structure model, three-point estimate method, probability integral transform, modified backtracking search algorithm, reactive power optimization
Procedia PDF Downloads 250
2850 Worst-Case Load Shedding in Electric Power Networks
Authors: Fu Lin
Abstract:
We consider the worst-case load-shedding problem in electric power networks where a number of transmission lines are to be taken out of service. The objective is to identify a prespecified number of line outages that lead to the maximum interruption of power generation and load at the transmission level, subject to the active power-flow model, the load and generation capacity of the buses, and the phase-angle limit across the transmission lines. For this nonlinear model with binary constraints, we show that all decision variables are separable except for the nonlinear power-flow equations. We develop an iterative decomposition algorithm, which converts the worst-case load shedding problem into a sequence of small subproblems. We show that the subproblems are either convex problems that can be solved efficiently or nonconvex problems that have closed-form solutions. Consequently, our approach is scalable for large networks. Furthermore, we prove the convergence of our algorithm to a critical point, and the objective value is guaranteed to decrease throughout the iterations. Numerical experiments with IEEE test cases demonstrate the effectiveness of the developed approach.
Keywords: load shedding, power system, proximal alternating linearization method, vulnerability analysis
Procedia PDF Downloads 142
2849 Impact of Climate Change on Forest Ecosystem Services: In situ Biodiversity Conservation and Sustainable Management of Forest Resources in Tropical Forests
Authors: Rajendra Kumar Pandey
Abstract:
Forest genetic resources not only represent regional biodiversity but also have immense value as wealth for securing the livelihoods of poor people. These resources are ecologically vulnerable owing to depletion/deforestation and/or the impact of climate change. Resources of various plant categories are vulnerable on the floor of natural tropical forests, threatening the growth and development of future forests. More than 170 species, including NTFPs, are in critical condition for their survival in the natural tropical forests of Central India. Forest degradation, commensurate with biodiversity loss, is now pervasive, disproportionately affecting the rural poor who directly depend on forests for their subsistence. Looking at the interaction of forests with water, soil, precipitation, climate change, etc., and its impact on the biodiversity of tropical forests, it is inevitable that cooperation policies and programmes must be developed to address newly emerging realities. Forest ecosystems, also known as the 'wealth of the poor', providing goods and ecosystem services on a sustainable basis, are now recognized as a stepping stone to move poor people beyond subsistence. Poverty alleviation is the prime objective of the Millennium Development Goals (MDGs); however, environmental sustainability, including the other MDGs, is essential to ensure the successful elimination of poverty and the well-being of human society. Loss and degradation of ecosystems are the most serious threats to achieving development goals worldwide. The Millennium Ecosystem Assessment (MEA, 2005) was an attempt to identify provisioning, regulating, cultural, and supporting ecosystem services in order to provide livelihood security for human beings. Climate change may have a substantial impact on the ecological structure and function of forests and on the provisioning, regulation and management of resources, which can affect the sustainable flow of ecosystem services. To overcome these limitations, policy guidelines for planning and a consistent research strategy need to be framed for the conservation and sustainable development of forest genetic resources.
Keywords: climate change, forest ecosystem services, sustainable forest management, biodiversity conservation
Procedia PDF Downloads 300
2848 Improvement of Direct Torque and Flux Control of Dual Stator Induction Motor Drive Using Intelligent Techniques
Authors: Kouzi Katia
Abstract:
This paper proposes a Direct Torque Control (DTC) algorithm for a Dual Stator Induction Motor (DSIM) drive using two intelligent techniques: an Artificial Neural Network (ANN) approach replaces the switching-table selector block of conventional DTC, and a Mamdani Fuzzy Logic Controller (FLC) is used for stator resistance estimation. The fuzzy estimation method is based on an online stator resistance correction driven by the stator current estimation error and its variation; the fuzzy logic controller gives the future stator resistance increment at its output. The main advantages of the suggested control algorithm are that it reduces the hardware complexity of conventional selectors, avoids the drive instability that may occur in certain situations, and ensures tracking of the actual stator resistance. The effectiveness of the technique and the improvement in overall system performance are demonstrated by the results.
Keywords: artificial neural network, direct torque control, dual stator induction motor, fuzzy logic estimator, switching table
Procedia PDF Downloads 346
2847 Inverse Mode Shape Problem of Hand-Arm Vibration (Humerus Bone) for Bio-Dynamic Response Using Varying Boundary Conditions
Authors: Ajay R, Rammohan B, Sridhar K S S, Gurusharan N
Abstract:
The objective of this work is to develop a numerical method for solving the inverse mode shape problem by determining the cross-sectional area of a structure for a desired mode shape, via a vibration response study of the humerus bone, which is modelled as a cantilever beam with anisotropic material properties. The humerus is the long bone in the arm that connects the shoulder to the elbow. The mode shape is assumed to be a higher-order polynomial satisfying a prescribed set of boundary conditions, which ensures convergence of the numerical algorithm. The natural frequencies and mode shapes are calculated for different boundary conditions, and the cross-sectional area of the humerus is found from the eigenmode shape with the aid of the inverse mode shape algorithm. The computed cross-sectional area of the humerus validates the mode shapes for the specific boundary conditions. The numerical method for solving the inverse mode shape problem is validated in a biomedical application by finding the cross-sectional area of a humerus bone in the human arm.
Keywords: cross-sectional area, humerus bone, inverse mode shape problem, mode shape
Procedia PDF Downloads 130
2846 A Dynamic Ensemble Learning Approach for Online Anomaly Detection in Alibaba Datacenters
Authors: Wanyi Zhu, Xia Ming, Huafeng Wang, Junda Chen, Lu Liu, Jiangwei Jiang, Guohua Liu
Abstract:
Anomaly detection is a first and imperative step in responding to unexpected problems and in assuring high performance and security in large data center management. This paper presents an online anomaly detection system built on an innovative combination of ensemble machine learning and adaptive differentiation algorithms, applied to performance data collected from a continuous monitoring system for multi-tier web applications running in Alibaba data centers. We evaluate the effectiveness and efficiency of this algorithm on production traffic data and compare it with traditional anomaly detection approaches such as static thresholds and other deviation-based detection techniques. The experimental results show that our algorithm correctly identifies unexpected performance variances of any running application, with an acceptable false positive rate. The proposed approach has already been deployed in real-time production environments to enhance the efficiency and stability of daily data center operations.
Keywords: Alibaba data centers, anomaly detection, big data computation, dynamic ensemble learning
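The ensemble idea can be sketched as a weighted vote among several cheap detectors over a metric's history; the detector choices, weights, and threshold below are illustrative assumptions, not Alibaba's deployed algorithm.

```python
# Weighted-vote ensemble anomaly detection sketch over a single metric stream.
import numpy as np

def zscore_detector(history, x, k=3.0):
    return abs(x - history.mean()) > k * history.std()

def iqr_detector(history, x):
    q1, q3 = np.percentile(history, [25, 75])
    return x < q1 - 1.5 * (q3 - q1) or x > q3 + 1.5 * (q3 - q1)

def ewma_detector(history, x, lam=0.3, k=3.0):
    ewma = history[0]
    for v in history[1:]:                 # exponentially weighted moving average
        ewma = lam * v + (1 - lam) * ewma
    return abs(x - ewma) > k * history.std()

DETECTORS = [(zscore_detector, 0.4), (iqr_detector, 0.3), (ewma_detector, 0.3)]

def is_anomaly(history, x, threshold=0.5):
    score = sum(wt for det, wt in DETECTORS if det(history, x))
    return score >= threshold  # weights can be re-learned as traffic drifts

history = np.random.normal(100, 5, size=500)   # e.g. response-time samples
print(is_anomaly(history, 160.0))
```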
Procedia PDF Downloads 204