Search results for: fast Fourier algorithms
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 4632

1182 Historical Analysis of the Landscape Changes and the Eco-Environment Effects on the Coastal Zone of Bohai Bay, China

Authors: Juan Zhou, Lusan Liu, Yanzhong Zhu, Kuixuan Lin, Wenqian Cai, Yu Wang, Xing Wang

Abstract:

During the past few decades, the number of coastal land reclamation projects for residential, commercial and industrial purposes has increased in a growing number of coastal cities in China, leading to the destruction of wetlands and the loss of sensitive marine habitats. The influence and nature of these projects have attracted widespread public and academic concern. The aims of this paper are to identify trends in landscape change (especially coastal reclamation) and in the ecological environment, to understand how the two interact, and to offer a scientific basis for the development of regional plans. A case study was carried out in the Bohai Bay area based on the analysis of remote sensing data. Land use maps were created for 1954, 1970, 1981, 1990, 2000 and 2010. Landscape metrics were calculated and showed that the degree of reclamation change was linked to the hydrodynamic environment and the macrobenthos community. The results indicated that the greatest loss of initial area occurred during 1954-1970, when 65.6% was lost, mostly to salt fields; by 2010, coastal reclamation had added more than 200 km² of artificial landscape. Numerical simulations of the tidal current field in 2003 and 2010 showed that the offshore flow velocity increased (from 2-5 cm/s to 10-20 cm/s) and that the flow direction was deflected. These significant changes of the coastline were not conducive to the dispersion and degradation of pollutants. Additionally, analysis of the dominant macrobenthos from 1958 to 2012 showed that Musculus senhousei (Benson, 1842), a disturbance-tolerant species, spread quickly and has been the predominant species in recent years.

Keywords: Bohai Bay, coastal reclamation, landscape change, spatial patterns

Procedia PDF Downloads 293
1181 Algorithms for Run-Time Task Mapping in NoC-Based Heterogeneous MPSoCs

Authors: M. K. Benhaoua, A. K. Singh, A. E. Benyamina, P. Boulet

Abstract:

Mapping the parallelized tasks of applications onto MPSoCs can be done either at design time (static) or at run-time (dynamic). Static mapping strategies find the best placement of tasks at design time; they are therefore not suitable for dynamic workloads and cannot manage resources at run-time. The number of tasks or applications executing on an MPSoC platform can exceed the available resources, requiring efficient run-time mapping strategies to meet these constraints. This paper describes a new spiral dynamic task mapping heuristic for mapping applications onto NoC-based heterogeneous MPSoCs. The heuristic is based on a packing strategy and a routing algorithm, both also proposed in this paper. It tries to map the tasks of an application into a clustered region to reduce the communication overhead between communicating tasks: the tasks most closely related to each other are placed in a spiral manner, and the best path load is sought to minimize the communication overhead. We have built a simulation environment for experimental evaluation, mapping applications with varying numbers of tasks onto an 8x8 NoC-based heterogeneous MPSoC platform, and we demonstrate that the new mapping heuristic, combined with the proposed modified Dijkstra routing algorithm, reduces the total execution time and energy consumption of applications compared to state-of-the-art run-time mapping heuristics reported in the literature.
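The abstract does not spell out the placement procedure; the sketch below only illustrates the general idea of visiting mesh cores ring by ring around an initial node so that heavily communicating tasks land close together. The function names, the greedy ordering and the 8x8 mesh bounds are illustrative assumptions, not the authors' implementation.

```python
def spiral_order(start, size=8):
    """Yield (x, y) mesh coordinates ring by ring around `start` on a size x size NoC."""
    x0, y0 = start
    yield (x0, y0)
    for ring in range(1, size):
        # top and bottom rows of the square ring at Chebyshev distance `ring`
        for dx in range(-ring, ring + 1):
            for dy in (-ring, ring):
                x, y = x0 + dx, y0 + dy
                if 0 <= x < size and 0 <= y < size:
                    yield (x, y)
        # left and right columns, corners excluded
        for dy in range(-ring + 1, ring):
            for dx in (-ring, ring):
                x, y = x0 + dx, y0 + dy
                if 0 <= x < size and 0 <= y < size:
                    yield (x, y)

def map_tasks(tasks, start, occupied, size=8):
    """Greedily place tasks (caller orders them by communication volume) on the nearest free cores."""
    placement = {}
    free = (pe for pe in spiral_order(start, size) if pe not in occupied)
    for task in tasks:
        placement[task] = next(free)
    return placement

# Example: place five communicating tasks around core (3, 4) on an 8x8 mesh
print(map_tasks(["t0", "t1", "t2", "t3", "t4"], (3, 4), occupied=set()))
```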

Keywords: multiprocessor system on chip, MPSoC, network on chip, NoC, heterogeneous architectures, run-time mapping heuristics, routing algorithm

Procedia PDF Downloads 490
1180 Electronic Payment Recording with Payment History Retrieval Module: A System Software

Authors: Adrian Forca, Simeon Cainday III

Abstract:

The Electronic Payment Recording with Payment History Retrieval Module was developed specifically for the College of Science and Technology. The software replaces the department's manual process of recording payments, shifting from a slow and time-consuming procedure to a quick, reliable and accurate way of recording payments, and it immediately generates a receipt for every transaction. The generation of recorded payment reports is integrated into the software, replacing manual reporting with an easier, consolidated report. As an added feature, all recorded payments of the students can be retrieved immediately, making the system a transparent and reliable payment recording tool. Overall, the system shifts the process from a manual procedure to an organized software technology in which information is stored in a logically correct and normalized database. The software is developed with a modern programming language and applies strict programming methods to validate all users accessing the system and to check all data entered and information retrieved, ensuring data accuracy and reliability. In addition, the system identifies each user and limits their access privileges, establishing boundaries on what information may be stored, modified or updated, and thus securing the information against unauthorized manipulation. As a result, the system eliminates the manual procedure and replaces it with modern information technology, making the whole process of payment recording fast, secure, accurate and reliable.

Keywords: collection, information system, manual procedure, payment

Procedia PDF Downloads 172
1179 An Ensemble System of Classifiers for Computer-Aided Volcano Monitoring

Authors: Flavio Cannavo

Abstract:

Continuous evaluation of the status of potentially hazardous volcanoes plays a key role for civil protection purposes. Monitoring volcanic activity, especially energetic paroxysms that usually come with tephra emissions, is crucial not only because of the exposure of the local population but also because of the hazard to airline traffic. Presently, real-time surveillance of most volcanoes worldwide is essentially delegated to one or more human experts in volcanology, who interpret data coming from different kinds of monitoring networks. Unfortunately, the high nonlinearity of the complex and coupled volcanic dynamics leads to a large variety of different volcanic behaviors. Moreover, the continuously measured parameters (e.g. seismic, deformation, infrasonic and geochemical signals) are often not able to fully explain the ongoing phenomenon, making fast volcano state assessment a very puzzling task for the personnel on duty in the control rooms. With the aim of aiding the personnel on duty in volcano surveillance, here we introduce a system based on an ensemble of data-driven classifiers to infer automatically the ongoing volcano status from all the available kinds of measurements. The system consists of a heterogeneous set of independent classifiers, each one built with its own data and algorithm, and each giving an output about the volcanic status. The ensemble technique weights the single classifier outputs to combine all the classifications into a single status that maximizes the performance. We tested the model on the Mt. Etna (Italy) case study by considering a long record of multivariate data from 2011 to 2015 and cross-validated it. Results indicate that the proposed model is effective and valuable for decision-making purposes.
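The abstract does not specify how the individual outputs are weighted; the following minimal sketch assumes a weighted average of per-classifier class probabilities, with cross-validated accuracies as example weights. The status labels and numbers are purely illustrative.

```python
import numpy as np

def ensemble_status(probas, weights, classes):
    """Combine per-classifier class probabilities into one volcano status.

    probas  : list of 1-D arrays, one per classifier, each summing to 1
    weights : per-classifier weights (e.g. cross-validated accuracies)
    classes : list of status labels, same order as the probability entries
    """
    probas = np.asarray(probas, dtype=float)
    weights = np.asarray(weights, dtype=float)
    combined = (weights @ probas) / weights.sum()   # weighted average of the outputs
    return classes[int(np.argmax(combined))], combined

# Example: three independent classifiers (e.g. seismic, deformation, geochemical)
status, p = ensemble_status(
    probas=[[0.7, 0.2, 0.1], [0.5, 0.4, 0.1], [0.2, 0.6, 0.2]],
    weights=[0.9, 0.8, 0.6],
    classes=["quiet", "unrest", "eruptive"],
)
print(status, p)
```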

Keywords: Bayesian networks, expert system, Mount Etna, volcano monitoring

Procedia PDF Downloads 250
1178 A Machine Learning Based Framework for Education Levelling in Multicultural Countries: UAE as a Case Study

Authors: Shatha Ghareeb, Rawaa Al-Jumeily, Thar Baker

Abstract:

In Abu Dhabi, many different education curriculums coexist, and the private schools and quality assurance sector supervises a large number of private schools serving many nationalities. Because these curriculums are designed to meet expats' needs, they have different requirements for registration and success, different ages for starting education, and different numbers of school years, assessment techniques, reassessment rules and exam boards. Currently, students who transfer between curriculums are often not placed in the right year group, because academic years start and end on different dates and the date-of-birth cut-off for each year group differs between curriculums; as a result, students end up either younger or older than the rest of their year group, which creates gaps in their learning and performance. In addition, there is no way of storing student data throughout the academic journey so that schools can track the learning process. In this paper, we propose a computational framework applicable in multicultural countries, such as the UAE, in which multiple education systems are implemented. The ultimate goal is to use cloud and fog computing technology integrated with artificial intelligence and machine learning techniques to aid a smooth transition when assigning students to their year groups, to provide levelling and differentiation information for students who relocate from one education curriculum to another, and to store student data so that it can be accessed from anywhere throughout their academic journey.

Keywords: admissions, algorithms, cloud computing, differentiation, fog computing, levelling, machine learning

Procedia PDF Downloads 145
1177 Solar-Blind Ni-Schottky Photodetector Based on MOCVD Grown ZnGa₂O₄

Authors: Taslim Khan, Ray Hua Horng, Rajendra Singh

Abstract:

This study presents a comprehensive analysis of the design, fabrication, and performance evaluation of a solar-blind Schottky photodetector based on ZnGa₂O₄ grown via MOCVD, utilizing Ni/Au as the Schottky electrode. ZnGa₂O₄, with its wide bandgap of 5.2 eV, is well-suited for high-performance solar-blind photodetection applications. The photodetector demonstrates an impressive responsivity of 280 A/W, indicating its exceptional sensitivity within the solar-blind ultraviolet band. One of the device's notable attributes is its high rejection ratio of 10⁵, which effectively filters out unwanted background signals, enhancing its reliability in various environments. The device also exhibits a responsivity contrast ratio (PDCR) of 10⁷, showcasing its ability to detect even minor changes in incident UV light. Additionally, the device features an outstanding detectivity of 10¹⁸ Jones, underscoring its capability to precisely detect faint UV signals. It exhibits a fast response time of 80 ms and an ON/OFF ratio of 10⁵, making it suitable for real-time UV sensing applications. The noise-equivalent power (NEP) of 10⁻¹⁷ W/Hz further highlights its efficiency in detecting low-intensity UV signals. The photodetector also achieves a high forward-to-backward current rejection ratio of 10⁶, ensuring high selectivity. Furthermore, the device maintains an extremely low dark current of approximately 0.1 pA. These findings position the ZnGa₂O₄-based Schottky photodetector as a leading candidate for solar-blind UV detection applications. It offers a compelling combination of sensitivity, selectivity, and operational efficiency, making it a highly promising tool for environments requiring precise and reliable UV detection.

Keywords: wide bandgap, solar-blind photodetector, MOCVD, zinc gallate

Procedia PDF Downloads 43
1176 Review of Theories and Applications of Genetic Programming in Sediment Yield Modeling

Authors: Adesoji Tunbosun Jaiyeola, Josiah Adeyemo

Abstract:

Sediment yield can be considered to be the total sediment load that leaves a drainage basin. Knowledge of the quantity of sediment present in a river at a particular time can lead to better management of reservoir flood capacity and consequently help to control overbank flooding. Furthermore, as sediment accumulates in a reservoir, the reservoir gradually loses its ability to store water for the purposes for which it was built. The development of hydrological models to forecast the quantity of sediment present in a reservoir helps planners and managers of water resources systems to understand the system better in terms of its problems and the alternative ways to address them. The application of artificial intelligence models and techniques to such real-life situations has proven to be an effective approach to solving complex problems. This paper makes an extensive review of the literature relevant to the theories and applications of evolutionary algorithms, and most especially genetic programming. Successful applications of genetic programming as a soft computing technique are reviewed in sediment modelling and other branches of knowledge. Some fundamental issues, such as benchmarking, generalization ability, bloat and over-fitting, and other open issues relating to the working principles of GP that need to be addressed by the GP community, are also highlighted. This review aims to give GP theoreticians, researchers and the general GP community sufficient research direction and valuable guidance, and to keep all stakeholders abreast of the issues that need attention during the next decade for the advancement of GP.

Keywords: benchmark, bloat, generalization, genetic programming, over-fitting, sediment yield

Procedia PDF Downloads 452
1175 Performance Comparison of Different Regression Methods for a Polymerization Process with Adaptive Sampling

Authors: Florin Leon, Silvia Curteanu

Abstract:

Developing complete mechanistic models for polymerization reactors is not easy, because complex reactions occur simultaneously, a large number of kinetic parameters are involved, and the chemical and physical phenomena for mixtures involving polymers are sometimes poorly understood. To overcome these difficulties, empirical models based on sampled data can be used instead, namely the regression methods typical of the machine learning field. They have the ability to learn the trends of a process without any knowledge of its particular physical and chemical laws. Therefore, they are useful for modeling complex processes, such as the free radical polymerization of methyl methacrylate carried out in a batch bulk process. The goal is to generate accurate predictions of monomer conversion, number average molecular weight and weight average molecular weight. This process is associated with non-linear gel and glass effects. For this purpose, an adaptive sampling technique is presented, which can select more samples around the regions where the values have a higher variation. Several machine learning methods are used for the modeling and their performance is compared: support vector machines, k-nearest neighbor and random forest, as well as an original algorithm, large margin nearest neighbor regression. The suggested method provides very good results compared to the other well-known regression algorithms.
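The abstract only states that the adaptive sampling selects more samples where the values vary more; the sketch below is one minimal way to do that, drawing new sample locations in proportion to the local slope of the response. The function and the toy conversion curve are illustrative assumptions, not the authors' procedure.

```python
import numpy as np

def adaptive_sample(x, y, n_extra):
    """Pick extra sample locations where the measured response varies fastest.

    The local variation is estimated from the absolute slope between consecutive
    points, and new samples are drawn in proportion to it.
    """
    x, y = np.asarray(x, float), np.asarray(y, float)
    slope = np.abs(np.diff(y) / np.diff(x))          # variation on each interval
    prob = slope / slope.sum()                        # sample intervals in proportion to variation
    chosen = np.random.choice(len(slope), size=n_extra, p=prob)
    # place each new sample at a random position inside its chosen interval
    new_x = x[chosen] + np.random.rand(n_extra) * (x[chosen + 1] - x[chosen])
    return np.sort(new_x)

# Example: conversion-vs-time curve with a sharp gel-effect-like region around t = 60
t = np.linspace(0, 100, 21)
conv = 1.0 / (1.0 + np.exp(-(t - 60) / 5.0))          # toy S-shaped conversion curve
print(adaptive_sample(t, conv, n_extra=10))           # most new samples fall near t = 60
```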

Keywords: batch bulk methyl methacrylate polymerization, adaptive sampling, machine learning, large margin nearest neighbor regression

Procedia PDF Downloads 307
1174 Development and Structural Characterization of a Snack Food with Added Type 4 Extruded Resistant Starch

Authors: Alberto A. Escobar Puentes, G. Adriana García, Luis F. Cuevas G., Alejandro P. Zepeda, Fernando B. Martínez, Susana A. Rincón

Abstract:

Snack foods are usually classified as 'junk food' because they have little nutritional value. However, given the increasing demand for third generation (3G) snacks and their low price and ease of preparation, they can be considered carriers of compounds with a certain nutritional value. Resistant starch (RS) is classified as a prebiotic fiber; it helps to control metabolic problems and has anti-cancer properties in the colon. The active compound can be produced by chemical cross-linking of starch with phosphate salts to obtain a type 4 resistant starch (RS4). The chemical reaction can be achieved by extrusion, a process widely used to produce snack foods, since it is a versatile and low-cost procedure. Starch is the major ingredient in 3G snack manufacture, and the seeds of sorghum, the most drought-tolerant gluten-free cereal, contain high levels of starch (70%). The aim of this research was therefore to develop a 3G snack with RS4 from sorghum starch under previously determined optimal extrusion conditions, and to carry out its sensory, chemical and structural characterization. A sample (200 g) of sorghum starch was conditioned with 4% sodium trimetaphosphate/sodium tripolyphosphate (99:1) and adjusted to 28.5% moisture content. The sample was then processed in a single screw extruder equipped with a rectangular die. The inlet, transport and output temperatures were 60°C, 134°C and 70°C, respectively. The resulting pellets were expanded in a microwave oven. The expansion index (EI), penetration force (PF) and sensory attributes were evaluated in the expanded pellets. The pellets were milled to obtain flour, and the RS content, degree of substitution (DS) and percentage of phosphorus (%P) were measured. Fourier transform infrared (FTIR) spectroscopy, X-ray diffraction, differential scanning calorimetry (DSC) and scanning electron microscopy (SEM) analyses were performed in order to determine structural changes after the process. The results for the 3G snack were as follows: RS, 17.14 ± 0.29%; EI, 5.66 ± 0.35; and PF, 5.73 ± 0.15 N. Phosphate groups were identified in the starch molecule by FTIR: DS, 0.024 ± 0.003 and %P, 0.35 ± 0.15 [values permitted for food additives (<4 %P)]. An increase of the gelatinization temperature after the cross-linking of the starch was detected; the loss of granular structure and the vapor bubbles formed after expansion were observed by SEM; and a loss of crystallinity after the extrusion process was observed by X-ray diffraction. Finally, a 3G snack with RS4 was obtained by extrusion technology. Sorghum starch proved suitable for 3G snack production.

Keywords: extrusion, resistant starch, snack (3G), Sorghum

Procedia PDF Downloads 313
1173 Spectrogram Pre-Processing to Improve Isotopic Identification to Discriminate Gamma and Neutron Sources

Authors: Mustafa Alhamdi

Abstract:

An industrial application to classify gamma-ray and neutron events is investigated in this study using deep machine learning. Identification using convolutional and recursive neural networks has shown a significant improvement in prediction accuracy in a variety of applications. The ability to identify the isotope type and activity from spectral information depends on the feature extraction method, followed by classification. The features extracted from the spectrum profiles aim to find patterns and relationships that represent the actual spectrum energy in a low-dimensional space. Increasing the level of separation between classes in feature space improves the achievable classification accuracy. Neural networks extract features through a variety of nonlinear transformations and mathematical optimization, whereas principal component analysis depends on linear transformations to extract features and subsequently improve the classification accuracy. In this paper, the isotope spectrum information is preprocessed by finding its frequency components as a function of time and using them as the training dataset. The Fourier transform implementation used to extract the frequency components is optimized by a suitable windowing function. Training and validation samples of different isotope profiles interacting with a CdTe crystal were simulated using Geant4. The readout electronic noise was simulated by optimizing the mean and variance of a normal distribution. Ensemble learning, combining the votes of many models, improved the classification accuracy of the neural networks. Discriminating gamma and neutron events in a single prediction approach achieved high accuracy with deep learning. The findings show that classification accuracy can be improved by applying the spectrogram preprocessing stage to the gamma and neutron spectra of different isotopes. Tuning the deep machine learning models by hyperparameter optimization enhanced the separation in the latent space and made it possible to extend the number of detected isotopes in the training database. Ensemble learning contributed significantly to improving the final prediction.
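The abstract does not give the transform parameters; the sketch below shows a generic short-time Fourier transform of the kind described, with a Hann window to reduce spectral leakage. The window length, hop size and the toy pulse train are illustrative assumptions.

```python
import numpy as np

def spectrogram(signal, fs, win_len=256, hop=128):
    """Short-time Fourier transform magnitude: frequency content vs. time.

    Each column is the FFT magnitude of one Hann-windowed segment of the waveform.
    """
    window = np.hanning(win_len)
    n_frames = 1 + (len(signal) - win_len) // hop
    frames = np.stack([signal[i * hop : i * hop + win_len] * window
                       for i in range(n_frames)])
    spec = np.abs(np.fft.rfft(frames, axis=1)).T       # (freq bins, time frames)
    freqs = np.fft.rfftfreq(win_len, d=1.0 / fs)
    times = (np.arange(n_frames) * hop + win_len / 2) / fs
    return freqs, times, spec

# Example: a toy detector pulse train sampled at 10 kHz
fs = 10_000
t = np.arange(0, 1.0, 1.0 / fs)
pulse = np.exp(-((t % 0.1) / 0.005)) * np.sin(2 * np.pi * 500 * t)
f, tt, S = spectrogram(pulse, fs)
print(S.shape)   # rows = frequency bins, columns = time frames
```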

Keywords: machine learning, nuclear physics, Monte Carlo simulation, noise estimation, feature extraction, classification

Procedia PDF Downloads 154
1172 Photocatalytic Degradation of Methylene Blue Dye Using Cuprous Oxide/Graphene Nanocomposite

Authors: Bekan Bogale, Tsegaye Girma Asere, Tilahun Yai, Fekadu Melak

Abstract:

Aims: To study the photocatalytic degradation of methylene blue dye on a cuprous oxide/graphene nanocomposite. Background: Cuprous oxide (Cu₂O) nanoparticles are among the metal oxides that have demonstrated photocatalytic activity. However, the stability of Cu₂O nanoparticles remains a significant challenge in photocatalytic applications because of the fast recombination rate of electron/hole pairs. This, in turn, leads to a mismatch of the effective bandgap separation, tending to reduce the photocatalytic degradation of the target organic pollutant (MB). To overcome these limitations, graphene has been combined with cuprous oxide, resulting in a cuprous oxide/graphene nanocomposite as a promising photocatalyst. Objective: In this study, a Cu₂O/graphene nanocomposite was synthesized and evaluated for its photocatalytic performance in methylene blue (MB) dye degradation. Method: Cu₂O/graphene nanocomposites were synthesized from graphite powder and copper nitrate using a facile sol-gel method. Batch experiments were conducted to assess the application of the nanocomposites for MB degradation. Parameters such as contact time, catalyst dosage, and pH of the solution were optimized for maximum MB degradation. The prepared nanocomposites were characterized using UV-Vis, FTIR, XRD, and SEM. The photocatalytic performance of the Cu₂O/graphene nanocomposites was compared against Cu₂O nanoparticles for cationic MB dye degradation. Results: The Cu₂O/graphene nanocomposite exhibits higher photocatalytic activity for MB degradation (with a degradation efficiency of 94%) than pure Cu₂O nanoparticles (67%). This was accomplished after 180 min of irradiation under visible light. The kinetics of MB degradation by the Cu₂O/graphene composites can be described by a second-order kinetic model. The synthesized nanocomposite can be used for more than three cycles of photocatalytic MB degradation. Conclusion: This work provides new insights into Cu₂O/graphene nanocomposites as high-performance photocatalysts for MB degradation, with a potential role in environmental protection against MB dye pollution.

Keywords: methylene blue, photocatalysis, cuprous oxide, graphene nanocomposite

Procedia PDF Downloads 196
1171 Distinguishing between Bacterial and Viral Infections Based on Peripheral Human Blood Tests Using Infrared Microscopy and Multivariate Analysis

Authors: H. Agbaria, A. Salman, M. Huleihel, G. Beck, D. H. Rich, S. Mordechai, J. Kapelushnik

Abstract:

Viral and bacterial infections are responsible for a variety of diseases. These infections have similar symptoms, like fever, sneezing, inflammation, vomiting, diarrhea and fatigue. Thus, physicians may encounter difficulties in distinguishing between viral and bacterial infections based on these symptoms. Bacterial infections differ from viral infections in many other important respects regarding the response to various medications and the structure of the organisms. In many cases, it is difficult to know the origin of the infection. When necessary, the physician orders a blood test, a urine test, or a tissue 'culture test' to diagnose the infection type. Using these methods, the time that elapses between the receipt of patient material and the presentation of the test results to the clinician is typically too long (> 24 hours). This time is crucial in many cases for saving the life of the patient and for planning the right medical treatment. Thus, rapid identification of bacterial and viral infections in the lab is of great importance for effective treatment, especially in cases of emergency. Blood was collected from 50 patients with confirmed viral infection and 50 with confirmed bacterial infection. White blood cells (WBCs) and plasma were isolated, deposited on a zinc selenide slide, dried and measured under a Fourier transform infrared (FTIR) microscope to obtain their infrared absorption spectra. The acquired spectra of WBCs and plasma were analyzed in order to differentiate between the two types of infection. In this study, the potential of FTIR microscopy in tandem with multivariate analysis was evaluated for identifying the agent that causes a human infection. The method was used to identify the infectious agent type as either bacterial or viral, based on an analysis of the blood components [i.e., white blood cells (WBCs) and plasma] using their infrared vibrational spectra. The time required for the analysis and evaluation after obtaining the blood sample was less than one hour. In the analysis, minute spectral differences in several bands of the FTIR spectra of WBCs were observed between the groups of samples with viral and bacterial infections. By employing feature extraction with linear discriminant analysis (LDA), a sensitivity of ~92% and a specificity of ~86% were achieved for the infection type diagnosis. This preliminary study suggests that FTIR spectroscopy of WBCs is a potentially feasible and efficient tool for diagnosing the infection type.
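The exact feature-extraction pipeline is not described in the abstract; the sketch below shows the general scheme of classifying spectra with LDA and estimating sensitivity and specificity by cross-validation, using scikit-learn on synthetic stand-in spectra. The dimensions and the injected spectral difference are assumptions.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.decomposition import PCA
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import cross_val_predict
from sklearn.metrics import confusion_matrix

rng = np.random.default_rng(0)
# Synthetic stand-in for WBC absorption spectra: 100 samples x 400 wavenumbers,
# with a small band shift distinguishing the two infection types
X = rng.normal(size=(100, 400))
y = np.repeat([0, 1], 50)                      # 0 = viral, 1 = bacterial
X[y == 1, 150:160] += 0.8                      # subtle spectral difference

# Reduce dimensionality first, then discriminate (a common scheme for spectra)
model = make_pipeline(PCA(n_components=10), LinearDiscriminantAnalysis())
pred = cross_val_predict(model, X, y, cv=5)

tn, fp, fn, tp = confusion_matrix(y, pred).ravel()
print("sensitivity:", tp / (tp + fn), "specificity:", tn / (tn + fp))
```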

Keywords: viral infection, bacterial infection, linear discriminant analysis, plasma, white blood cells, infrared spectroscopy

Procedia PDF Downloads 226
1170 Finite Deformation of a Dielectric Elastomeric Spherical Shell Based on a New Nonlinear Electroelastic Constitutive Theory

Authors: Odunayo Olawuyi Fadodun

Abstract:

Dielectric elastomers (DEs) are a type of intelligent material with salient features, such as electromechanical coupling, light weight, fast actuation speed, low cost and high energy density, that make them good candidates for numerous engineering applications. This paper adopts a new nonlinear electroelastic constitutive theory to examine the radial deformation of a pressurized thick-walled spherical shell of soft dielectric material with compliant electrodes on its inner and outer surfaces. A general formula for the internal pressure, which depends on the deformation and on a potential difference between the boundary electrodes or on uniform surface charge distributions, is obtained in terms of special functions. To illustrate the effects of an applied electric field on the mechanical behaviour of the shell, three different energy functions with distinct mechanical properties are employed for numerical purposes. The observed behaviour of the shells is preserved in the presence of an applied electric field, and the influence of the field due to a potential difference declines more slowly with increasing deformation than that produced by a surface charge. Counterpart results are then presented for the thin-walled shell approximation as a limiting case of a thick-walled shell without restriction on the energy density. In the absence of internal pressure, it is found that inflation is caused by the application of an electric field. The numerical solutions of the theory presented in this work are in agreement with those predicted by the generally adopted Dorfmann and Ogden model.

Keywords: constitutive theory, elastic dielectric, electroelasticity, finite deformation, nonlinear response, spherical shell

Procedia PDF Downloads 98
1169 Machine Learning Models for the Prediction of Heating and Cooling Loads of a Residential Building

Authors: Aaditya U. Jhamb

Abstract:

Due to the current energy crisis that many countries are battling, energy-efficient buildings are the subject of extensive research in the modern technological era because of growing worries about energy consumption and its effects on the environment. The paper explores eight factors that help determine the energy efficiency of a building: relative compactness, surface area, wall area, roof area, overall height, orientation, glazing area, and glazing area distribution, using the dataset provided by Tsanas and Xifara. The dataset comprises 768 different residential building models and is used to predict heating and cooling loads with a low mean squared error. By optimizing these characteristics, machine learning algorithms may assess and properly forecast a building's heating and cooling loads, lowering energy usage while increasing the quality of people's lives. The paper therefore studied the magnitude of the correlation between these input factors and the two output variables using various statistical methods of analysis, after determining which input variable was most closely associated with the output loads. The most accurate model was the Decision Tree Regressor, which had a mean squared error of 0.258, whilst the least accurate model was the Isotonic Regressor, which had a mean squared error of 21.68. This paper also investigated the KNN Regressor and Linear Regression, which had mean squared errors of 3.349 and 18.141, respectively. In conclusion, the model, given the eight input variables, was able to predict the heating and cooling loads of a residential building accurately and precisely.
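A minimal sketch of this kind of model comparison with scikit-learn follows. The synthetic data below merely stands in for the Tsanas and Xifara energy-efficiency dataset (eight inputs, 768 buildings); the toy target and the printed scores are not the paper's results.

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor
from sklearn.neighbors import KNeighborsRegressor
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score

# Synthetic stand-in: eight building descriptors and one load target;
# replace X and y with the real dataset to reproduce this kind of comparison.
rng = np.random.default_rng(1)
X = rng.uniform(size=(768, 8))
y = 20 * X[:, 0] - 5 * X[:, 6] + rng.normal(scale=0.5, size=768)   # toy heating load

for name, model in [("decision tree", DecisionTreeRegressor(random_state=0)),
                    ("k-nearest neighbors", KNeighborsRegressor(n_neighbors=5)),
                    ("linear regression", LinearRegression())]:
    mse = -cross_val_score(model, X, y, cv=5,
                           scoring="neg_mean_squared_error").mean()
    print(f"{name}: MSE = {mse:.3f}")
```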

Keywords: energy efficient buildings, heating load, cooling load, machine learning models

Procedia PDF Downloads 102
1168 Questioning the Relationship Between Young People and Fake News Through Their Use of Social Media

Authors: Marion Billard

Abstract:

This paper focuses on the question of the real relationship between young people and fake news. Fake news is one of today's main issues in the world of information and communication, and social media and its democratization have helped to spread false information. According to traditional beliefs, young people are more inclined to believe what they read on social media; the individuals concerned, however, think that they are better able to distinguish between real and fake news, a confidence they attribute to their use of the internet and social media from an early age. During the 2016 and 2017 American and French presidential campaigns, the term fake news was on everyone's lips and became a real issue in the field of information. While young people informed themselves through newspapers or television until the beginning of the '90s, Generation Z (people born between 1997 and 2010) has always been immersed in this world of fast communication. They have known how to use social media from a young age, and the internet holds no secrets for them. Today, despite sporadic use of traditional media, young people tend to turn to their smartphones and social networks such as Instagram or Twitter to stay abreast of the latest news. The growth of social media information has led to an 'ambient journalism' giving access to an endless quantity of information. Waking up in the morning, young people see small posts with short texts supplying the essentials of the news without, for the most part, many details. As a result, impressionable readers are not able to distinguish between real media and 'junk news' or fake news. This massive use of social media is probably explained by the inability of young people to find connections between the coverage of traditional media and what they are experiencing. The question arises whether this over-confidence of young people in their ability to distinguish between accurate and fake news makes it more difficult for them to examine information critically. Their relationship with the media and fake news is more complex than popular opinion suggests: today's young people are neither masters of the quest for information nor inherently the most impressionable audience on social media.

Keywords: fake news, youngsters, social media, information, generation

Procedia PDF Downloads 165
1167 Designing and Prototyping Permanent Magnet Generators for Wind Energy

Authors: T. Asefi, J. Faiz, M. A. Khan

Abstract:

This paper introduces dual rotor axial flux machines with surface-mounted and spoke-type ferrite permanent magnets and concentrated windings; they are introduced as alternatives to a generator with surface-mounted Nd-Fe-B magnets. The output power, voltage, speed and air gap clearance for all the generators are identical. The machine designs are optimized for minimum mass using a population-based algorithm, assuming the same efficiency as the Nd-Fe-B machine. A finite element analysis (FEA) is applied to predict the performance, emf, developed torque, cogging torque, no-load losses, leakage flux and efficiency of both ferrite generators and of the Nd-Fe-B generator. To minimize cogging torque, different rotor pole topologies and different pole arc to pole pitch ratios are investigated by means of 3D FEA. It was found that the surface-mounted ferrite generator topology is unable to develop the nominal electromagnetic torque, has higher torque ripple and is heavier than the spoke-type machine. Furthermore, it was shown that the spoke-type ferrite permanent magnet generator has favorable performance and could be an alternative to rare-earth permanent magnet generators, particularly in wind energy applications. Finally, the analytical and numerical results are verified using experimental results.

Keywords: axial flux, permanent magnet generator, dual rotor, ferrite permanent magnet generator, finite element analysis, wind turbines, cogging torque, population-based algorithms

Procedia PDF Downloads 156
1166 Ghrelin, Obestatin and Ghrelin/Obestatin Ratio: A Postprandial Study in Healthy Subjects of Normal Weight

Authors: Panagiotis T. Kanellos, Vaios T. Karathanos, Andriana C. Kaliora

Abstract:

Introduction: The role of ghrelin and obestatin in appetite regulation has been investigated; however, data on ghrelin and obestatin changes after food ingestion are scarce. Objective: We aimed to assess the appetite-regulating hormones ghrelin and obestatin, and to calculate the ghrelin/obestatin ratio, in healthy normal-weight subjects after consumption of raisins. This is a comparative study of a glucose control versus raisins, which contain fructose and glucose in similar concentrations as well as fiber. Methodology: Ten apparently healthy subjects who reported no history of glucose intolerance, diabetes, gastrointestinal disorders, or recent use of any antibiotics were enrolled in the study. The raisins used (Vitis vinifera) originate in Greece and are distributed worldwide as Corinthian raisins. In a randomized crossover design, all subjects, after an overnight fast, consumed either 50 g of glucose diluted in 240 mL of water (control) or 74 g of raisins (sugar content 50 g), with a 5-day interval between the individual trials. Venous blood samples were collected at baseline and at 60, 120 and 180 min postprandially. Ghrelin and obestatin were measured in the blood samples using specific enzyme-linked immunosorbent assays. Results: The subjects had a mean age of 26.3 years, a BMI of 21.6 kg/m², a waist circumference of 77.7 cm, normal serum lipidemic parameters and normal HbA1c levels. Ghrelin levels were significantly lower after raisin consumption compared to glucose at 120 and 180 min post-ingestion (p = 0.011 and p = 0.035, respectively). However, differences in obestatin between the two interventions did not reach statistical significance. The ghrelin/obestatin ratio was significantly lower (p = 0.020) at 120 min after raisin ingestion compared to the control. Conclusion: Two isocaloric foods containing equal amounts of sugars but with different compositions have different effects on the appetite hormones ghrelin and obestatin in normal-weight healthy subjects.

Keywords: appetite, ghrelin, obestatin, raisins

Procedia PDF Downloads 401
1165 Sequential Pattern Mining from Medical Record Data with the Sequential Pattern Discovery Using Equivalent Classes (SPADE) Algorithm (A Case Study: Bolo Primary Health Care, Bima)

Authors: Rezky Rifaini, Raden Bagus Fajriya Hakim

Abstract:

This research was conducted at the Bolo Primary Health Care (PHC) in Bima Regency. The purpose of the research is to find the association patterns formed in the medical record database of Bolo Primary Health Care patients. The data used are secondary data from the PHC medical records database. Sequential pattern mining is the method used for the analysis. Transaction data were generated from the Patient_ID, Check_Date and diagnosis fields. Sequential Pattern Discovery Using Equivalent Classes (SPADE) is one of the algorithms in sequential pattern mining; it finds frequent sequences in transaction data using a vertical database layout and a sequence join process. The output of the SPADE algorithm is a set of frequent sequences, which are then used to form rules; this technique is used to find association patterns between item combinations. Based on the sequential association rule analysis with the SPADE algorithm, using a minimum support of 0.03 and a minimum confidence of 0.75, three sequential association patterns were obtained from the Patient_ID, Check_Date and diagnosis data in the Bolo PHC.
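SPADE works on a vertical id-list representation and grows patterns by temporal joins of those id-lists. The sketch below illustrates only that core step, restricted to length-2 sequences; the diagnosis labels and helper names are made up for illustration and are not from the Bolo PHC data.

```python
from collections import defaultdict

def vertical_format(sequences):
    """Vertical id-lists: item -> set of (patient_id, visit_index) occurrences."""
    idlists = defaultdict(set)
    for sid, visits in sequences.items():
        for eid, diagnosis in enumerate(visits):
            idlists[diagnosis].add((sid, eid))
    return idlists

def frequent_2_sequences(sequences, min_support):
    """Temporal join of id-lists, SPADE-style, restricted to length-2 patterns."""
    n = len(sequences)
    idlists = vertical_format(sequences)
    # frequent single items first
    f1 = {item: ids for item, ids in idlists.items()
          if len({sid for sid, _ in ids}) / n >= min_support}
    patterns = {}
    for a, ids_a in f1.items():
        for b, ids_b in f1.items():
            # sequence <a -> b>: b occurs after a within the same patient's history
            sids = {sid for sid, ea in ids_a
                    for sid_b, eb in ids_b if sid_b == sid and eb > ea}
            if len(sids) / n >= min_support:
                patterns[(a, b)] = len(sids) / n
    return patterns

# Example: per-patient diagnosis sequences ordered by check date (illustrative labels)
visits = {"P1": ["ISPA", "gastritis", "ISPA"],
          "P2": ["ISPA", "ISPA"],
          "P3": ["hypertension", "gastritis"]}
print(frequent_2_sequences(visits, min_support=0.5))
```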

Keywords: diagnosis, primary health care, medical record, data mining, sequential pattern mining, SPADE algorithm

Procedia PDF Downloads 406
1164 Assessing Economic Losses of the 2014 Flood Disaster: A Case Study on Dabong, Kelantan, Malaysia

Authors: Ahmad Hamidi Mohamed, Jamaluddin Othman, Mashitah Suid, Mohd Zaim Mohd Shukri

Abstract:

Floods are considered an annual natural disaster in Kelantan; however, the record-setting flood of 2014 was a 'tsunami-like disaster'. A study was conducted with the objective of assessing the economic impact of the flood on the residents of the Dabong area in Kelantan Darul Naim, Malaysia. This area was selected because of the severity of the flooding there. The impacts of the flood on local people were assessed through structured interviews using questionnaires. The questionnaire was intended to acquire information on the losses faced by Dabong residents and covered the various inconveniences suffered, including health effects such as illnesses, their intensity and duration, and their associated costs. Loss of productivity and quality of life was also assessed. Formal requests were made to government agencies to obtain relevant statistical data regarding the losses due to the flood tragedy, and formal meetings were held to collect the data. From the study, a staggering amount of losses was calculated, comprising losses to property, farmers/agriculture, traders/business, health, insurance and government. The flood brought hardship to the people of Dabong, and the loss of homes caused further hardship for the community. The huge economic loss documented in this study shows that the federal government and the state government of Kelantan need to identify the causes of the major flood of 2014. Fast and effective measures have to be planned and implemented in flood-prone areas to prevent the same tragedy from happening in the future.

Keywords: economic impact, flood tragedy, Malaysia, property losses

Procedia PDF Downloads 275
1163 Using Cyclic Structure to Improve Inference on Network Community Structure

Authors: Behnaz Moradijamei, Michael Higgins

Abstract:

Identifying community structure is a critical task in analyzing social media data sets, which are often modeled as networks. Statistical models such as the stochastic block model have proven useful for explaining the structure of communities in real-world network data. In this work, we develop a goodness-of-fit test to examine the existence of community structure by using a distinguishing property of networks: cyclic structures are more prevalent within communities than across them. To better understand how communities are shaped by the cyclic structure of the network, rather than just by the number of edges, we introduce a novel method for deciding on the existence of communities. We exploit these structures by incorporating the renewal non-backtracking random walk (RNBRW) into an existing goodness-of-fit test. RNBRW is an important variant of the random walk in which the walk is prohibited from returning to a node in exactly two steps, and terminates and restarts once it completes a cycle. We investigate the use of RNBRW to improve the performance of existing goodness-of-fit tests for community detection algorithms based on the spectral properties of the adjacency matrix. Our proposed test of community structure is based on the probability distribution of the eigenvalues of the normalized retracing probability matrix derived from RNBRW. We attempt to make the best use of asymptotic results on this distribution when there is no community structure, i.e., the asymptotic distribution under the null hypothesis. Moreover, we provide a theoretical foundation for our statistic by obtaining the true mean and a tight lower bound for the variance of the RNBRW edge weights.
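A minimal sketch of the RNBRW weighting step described above follows, using networkx: each walk avoids immediate backtracking and, as soon as it closes a cycle, the edge that completed the cycle is credited and the walk restarts. The restart policy at dead ends, the number of walks and the example graph are assumptions for illustration.

```python
import random
import networkx as nx

def rnbrw_weights(G, n_walks=10_000, seed=0):
    """Renewal non-backtracking random walk: count, for each edge, how often it
    completes a cycle. Walks never return to the previous node and restart
    as soon as a cycle is closed."""
    rng = random.Random(seed)
    weights = {frozenset(e): 0 for e in G.edges()}
    nodes = list(G.nodes())
    for _ in range(n_walks):
        prev, cur = None, rng.choice(nodes)
        visited = {cur}
        while True:
            nbrs = [v for v in G.neighbors(cur) if v != prev]   # non-backtracking
            if not nbrs:
                break                                            # dead end: restart
            nxt = rng.choice(nbrs)
            if nxt in visited:                                   # cycle completed
                weights[frozenset((cur, nxt))] += 1
                break                                            # renewal: restart walk
            visited.add(nxt)
            prev, cur = cur, nxt
    return weights

# Example: two dense cliques joined by a single bridge edge
G = nx.barbell_graph(6, 0)
w = rnbrw_weights(G)
print("bridge edge weight:", w[frozenset((5, 6))])   # near zero; intra-clique edges score much higher
```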

Keywords: hypothesis testing, RNBRW, network inference, community structure

Procedia PDF Downloads 154
1162 Enhancement of Mass Transport and Separation of Species in an Electroosmotic Flow by Distinct Oscillatory Signals

Authors: Carlos Teodoro, Oscar Bautista

Abstract:

In this work, we analyze theoretically the mass transport in a time-periodic electroosmotic flow through a parallel flat plate microchannel under different periodic functions of the applied external electric field. The microchannel connects two reservoirs having different constant concentrations of an electro-neutral solute, and the zeta potential of the microchannel walls is assumed to be uniform. The governing equations that determine the mass transport in the microchannel are the Poisson-Boltzmann equation, the modified Navier-Stokes equations, where the Debye-Hückel approximation is considered (the zeta potential is less than 25 mV), and the species conservation equation. These equations are nondimensionalized, and four dimensionless parameters appear which control the mass transport phenomenon: an angular Reynolds number, the Schmidt and Péclet numbers, and an electrokinetic parameter representing the ratio of the half-height of the microchannel to the Debye length. To solve the mathematical model, the electric potential is first determined from the Poisson-Boltzmann equation, which allows the electric force to be determined for various periodic functions of the external electric field expressed as Fourier series. In particular, three different excitation waveforms of the external electric field are assumed: a) sawtooth, b) step, and c) an irregular periodic function. The periodic electric forces are substituted into the modified Navier-Stokes equations, and the hydrodynamic field is derived for each case of the electric force. From the obtained velocity fields, the species conservation equation is solved and the concentration fields are found. Numerical calculations were done for several binary systems in which two dilute species are transported in the presence of a carrier. It is observed that there are angular frequencies of the imposed external electric signal at which the total mass transport of each species is the same, independently of the molecular diffusion coefficient. These frequencies are called crossover frequencies and are obtained graphically at the intersection of the curves of total mass transport plotted against the imposed frequency. The crossover frequencies differ depending on the Schmidt number, the electrokinetic parameter, the angular Reynolds number, and the type of signal of the external electric field. It is demonstrated that the mass transport through the microchannel is strongly dependent on the modulation frequency of the particular applied alternating electric field. Possible extensions of the analysis to more complicated pulsation profiles are also outlined.
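The abstract does not write out the series; as an illustration only, and interpreting the step excitation as a square wave, the standard Fourier expansions of two of the named waveforms (amplitude $E_0$, angular frequency $\omega$) are

$$E_{\text{sawtooth}}(t) = \frac{2E_0}{\pi}\sum_{n=1}^{\infty}\frac{(-1)^{n+1}}{n}\sin(n\omega t), \qquad E_{\text{square}}(t) = \frac{4E_0}{\pi}\sum_{n\,\text{odd}}\frac{1}{n}\sin(n\omega t).$$

If the flow response is linear in the forcing, as is typical for low-Reynolds-number electroosmotic flow under the Debye-Hückel approximation, the velocity and concentration fields can then be assembled harmonic by harmonic from the response to each $\sin(n\omega t)$ term, which is what makes the Fourier-series representation of the excitation convenient.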

Keywords: electroosmotic flow, mass transport, oscillatory flow, species separation

Procedia PDF Downloads 218
1161 Programming without Code: An Approach and Environment to Conditions-On-Data Programming

Authors: Philippe Larvet

Abstract:

This paper presents the concept of an object-based programming language in which tests (if... then... else) and control structures (while, repeat, for...) disappear and are replaced by conditions on data. In line with the object paradigm, data are still embedded inside objects as variable-value couples, but object methods are expressed in the form of logical propositions ('conditions on data', or CODs). For instance: variable1 = value1 AND variable2 > value2 => variable3 = value3. To implement this approach, a central inference engine cycles through the objects, examining them one after another and collecting all the CODs of each object. CODs are treated as rules in a rule-based system: the left part of each proposition (left of the '=>' sign) is the premise and the right part is the conclusion. Premises are evaluated and, when they hold, their conclusions are fired. Conclusions modify the variable-value couples of the object, and the engine moves on to examine the next object. The paper develops the principles of writing CODs instead of complex algorithms. Through samples, the paper also presents several hints for implementing a simple mechanism able to process this 'COD language'. The proposed approach can be used within the context of simulation, process control, industrial systems validation, etc. By writing simple and rigorous conditions on data, instead of using classical and hard-to-learn languages, engineers and specialists can easily simulate and validate the functioning of complex systems.
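A minimal sketch of such an engine is given below. The rule syntax is inferred from the single example in the abstract (premise => conclusion over variable-value couples); the class names, the fixed-point loop and the sample rule are illustrative assumptions, not the paper's design.

```python
class CodObject:
    def __init__(self, data, rules):
        self.data = data      # variable-value couples
        self.rules = rules    # list of (premise, conclusion) pairs

def run_engine(objects, max_cycles=100):
    """Repeatedly examine each object, firing every rule whose premise holds,
    until the data stop changing (a stable state)."""
    for _ in range(max_cycles):
        changed = False
        for obj in objects:
            for premise, conclusion in obj.rules:
                if premise(obj.data):
                    before = dict(obj.data)
                    conclusion(obj.data)          # fire: update variable-value couples
                    changed = changed or obj.data != before
        if not changed:
            return

# Example COD: temperature > 80 AND pump = "on"  =>  alarm = "raised"
tank = CodObject(
    data={"temperature": 95, "pump": "on", "alarm": "off"},
    rules=[(lambda d: d["temperature"] > 80 and d["pump"] == "on",
            lambda d: d.update(alarm="raised"))],
)
run_engine([tank])
print(tank.data["alarm"])   # -> raised
```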

Keywords: conditions on data, logical proposition, programming without code, object-oriented programming, system simulation, system validation

Procedia PDF Downloads 225
1160 Artificial Neural Network in Ultra-High Precision Grinding of Borosilicate-Crown Glass

Authors: Goodness Onwuka, Khaled Abou-El-Hossein

Abstract:

Borosilicate-crown (BK7) glass has found broad application in the optics and automotive industries, and the growing demand for nanometric surface finishes is making such finishes a necessity in these applications. Thus, it has become paramount to optimize the parameters influencing the surface roughness of this precision lens material. The research was carried out on a 4-axis Nanoform 250 precision lathe equipped with an ultra-high precision grinding spindle. The experiment varied the machining parameters of feed rate, wheel speed and depth of cut at three levels in different combinations using a Box-Behnken design of experiments, and the resulting surface roughness values were measured with a Taylor Hobson Dimension XL optical profiler. An acoustic emission monitoring technique was applied at a high sampling rate to monitor the machining process, and further signal processing and feature extraction methods were implemented to generate the input to a neural network algorithm. This paper highlights the training and development of a back-propagation neural network prediction algorithm through careful selection of parameters, and the results show better prediction accuracy compared to a previously developed response surface model with very similar machining parameters. Hence, artificial neural network algorithms provide better surface roughness prediction accuracy in the ultra-high precision grinding of BK7 glass.
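The network architecture and the acoustic-emission features are not given in the abstract; the sketch below only illustrates the general scheme of a back-propagation (MLP) regressor mapping machining parameters plus one AE feature to surface roughness, using scikit-learn on synthetic data. Every number in it is a placeholder, not a result from the study.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

# Synthetic stand-in: feed rate, wheel speed, depth of cut and one acoustic-emission
# feature (e.g. RMS energy) as inputs, surface roughness Ra as the output.
rng = np.random.default_rng(42)
X = rng.uniform([5, 1000, 2, 0.1], [25, 3000, 10, 1.0], size=(90, 4))
Ra = 2 + 0.3 * X[:, 0] + 0.002 * X[:, 2] * X[:, 1] / 100 + 5 * X[:, 3] \
     + rng.normal(scale=0.5, size=90)

# Back-propagation network (MLP) with standardized inputs
model = make_pipeline(StandardScaler(),
                      MLPRegressor(hidden_layer_sizes=(16, 8), max_iter=5000,
                                   random_state=0))
model.fit(X[:70], Ra[:70])
print("held-out R^2:", model.score(X[70:], Ra[70:]))
```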

Keywords: acoustic emission technique, artificial neural network, surface roughness, ultra-high precision grinding

Procedia PDF Downloads 305
1159 Critical Reading Achievement of Rural Migrant Children in China: The Roles of Educational Expectation

Authors: Liman Zhao, Jianlong Zhang, Mingman Ren, Chuang Wang, Jian Liu

Abstract:

Rural migrant children have become a fast-growing population in China as a consequence of the large-scale population flow from rural to urban areas in the context of urbanization. In China, the socioeconomic status of migrant children is relatively low in comparison with non-migrant children. Parents of migrant children often work in occupations with long working hours, high labor intensity and low pay because of their poor academic qualifications. Most parents of migrant children have not received higher education and have no time to read with their children. Migrant families usually do not have a good collection of books either, which leads to insufficient reading and low reading levels among these children. Moreover, migrant children frequently relocate with their parents, and their needs for knowledge and reading are often neglected by schools, which puts them at risk of academic failure. The academic achievement of rural migrant children has therefore become a focus of education in China. This study explores the relationship between the educational expectations of rural migrant children and their critical reading competence, and the moderating effect of the difference between parents' educational expectations for their children and the children's own educational expectations. The responses to a survey of 5113 seventh-grade children in a district of the capital city in China revealed that children who moved to cities in grades 4-6 of primary school performed best in critical reading, while children who moved to cities after starting middle school showed the worst performance. In addition, both parents' educational expectations for their children and the children's own educational expectations were significant predictors of rural migrant children's reading competence. The higher a child's own educational expectations, and the smaller the gap between the parents' expectations and the child's own expectations, the better the child's performance in critical reading.

Keywords: educational expectation, critical reading competence, rural migrant children, moderating effect

Procedia PDF Downloads 205
1158 Commercial Law Between Custom and Islamic Law

Authors: Mohamed Zakareia Ghazy Aly Belal

Abstract:

Commercial law is the set of legal rules that apply to business and regulate the conduct of trade. In other words, commercial law regulates only those relations that arise from carrying out certain kinds of business, and it governs the activity of a specific group, the merchants. Like other branches of law, commercial law has characteristics that distinguish it from other bodies of law, and it has various sources from which its rules are derived: the objective or material source, the historical source, the official source and the interpretative source. Here we confine ourselves to the official and interpretative sources, asking what these sources are and what degree of authority they carry in commercial disputes. The first topic is the characteristics of commercial law. Commercial law has become indispensable to the world of trade and economics, and the reasons for its rules lie in the nature of the commercial field: in contrast to the stability of the civil environment, the commercial environment is marked by movement and speed, as well as by confidence and credit. It is the characteristics of speed and of trust and credit that justify the existence of commercial law. Commercial dealings are fast, whereas civil dealings are slow and stable. A person concludes civil transactions only a few times in life, and before undertaking any civil act must take time for thought and scrutiny, just as a person who wishes to marry must reflect and investigate; likewise, a person who wants to acquire a house in which to live with his family must search, investigate and discuss the price before concluding a contract of purchase. In the commercial field, by contrast, transactions take place very quickly, because the time factor plays an important role in concluding deals and achieving profits; delay in concluding a particular deal may cause the merchant a loss, since commerce is tied to the fluctuations of the economy and the market. A merchant may also conclude more than one deal at the same time and within a short period. For this reason, commercial law is free of the formalities and procedures that would hinder commercial transactions.

Keywords: law, commercial law, business, commercial field

Procedia PDF Downloads 76
1157 Improving Photocatalytic Efficiency of TiO2 Films Incorporated with Natural Geopolymer for Sunlight-Driven Water Purification

Authors: Satam Alotibi, Haya A. Al-Sunaidi, Almaymunah M. AlRoibah, Zahraa H. Al-Omaran, Mohammed Alyami, Fatehia S. Alhakami, Abdellah Kaiba, Mazen Alshaaer, Talal F. Qahtan

Abstract:

This research study presents a novel approach to harnessing the potential of natural geopolymer in conjunction with TiO₂ nanoparticles (TiO₂ NPs) for the development of highly efficient photocatalytic materials for water decontamination. The study begins with the formulation of a geopolymer paste derived from natural sources, which is subsequently applied as a coating on glass substrates and allowed to air-dry at room temperature. The result is a series of geopolymer-coated glass films, serving as the foundation for further experimentation. To enhance the photocatalytic capabilities of these films, a critical step involves immersing them in a suspension of TiO₂ nanoparticles (TiO₂ NPs) in water for varying durations. This immersion process yields geopolymer-loaded TiO₂ NPs films with varying concentrations, setting the stage for comprehensive characterization and analysis. A range of advanced analytical techniques, including UV-Vis spectroscopy, Fourier-transform infrared spectroscopy (FTIR), Raman spectroscopy, scanning electron microscopy (SEM), X-ray photoelectron spectroscopy (XPS), and atomic force microscopy (AFM), were meticulously employed to assess the structural, morphological, and chemical properties of the geopolymer-based TiO₂ films. These analyses provided invaluable insights into the materials' composition and surface characteristics. The culmination of this research effort sees the geopolymer-based TiO₂ films being repurposed as immobilized photocatalytic reactors for water decontamination under natural sunlight irradiation. Remarkably, the results revealed exceptional photocatalytic performance that exceeded the capabilities of conventional TiO₂-based photocatalysts. This breakthrough underscores the significant potential of natural geopolymer as a versatile and highly effective matrix for enhancing the photocatalytic efficiency of TiO₂ nanoparticles in water treatment applications. In summary, this study represents a significant advancement in the quest for sustainable and efficient photocatalytic materials for environmental remediation. By harnessing the synergistic effects of natural geopolymer and TiO₂ nanoparticles, these geopolymer-based films exhibit outstanding promise in addressing water decontamination challenges and contribute to the development of eco-friendly solutions for a cleaner and healthier environment.

Keywords: geopolymer, TiO2 nanoparticles, photocatalytic materials, water decontamination, sustainable remediation

Procedia PDF Downloads 71
1156 A Prediction Model Using the Price Cyclicality Function Optimized for Algorithmic Trading in Financial Market

Authors: Cristian Păuna

Abstract:

Since the widespread adoption of electronic trading, automated trading systems have become a significant part of the business intelligence system of any modern financial investment company. An important share of trades is today made completely automatically by computers using mathematical algorithms: trading decisions are taken almost instantly by logical models, and the orders are sent by low-latency automatic systems. This paper presents a real-time price prediction methodology designed especially for algorithmic trading. Based on the price cyclicality function, the methodology generates price cyclicality bands that predict the optimal levels for entries and exits. In order to automate the trading decisions, the cyclicality bands generate automated trading signals. We have found that the model can be used with good results to predict changes in market behavior. Using these predictions, the model can automatically adapt the trading signals in real time to maximize the trading results. The paper presents the methodology for optimizing and implementing this model in automated trading systems. Tests prove that the methodology can be applied with good efficiency in different timeframes. Real trading results are also displayed and analyzed in order to qualify the methodology and to compare it with other models. In conclusion, it was found that the price prediction model using the price cyclicality function is a reliable trading methodology for algorithmic trading in the financial market.
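The price cyclicality function itself is the author's indicator and is not defined in the abstract, so the sketch below substitutes a generic oscillator-plus-rolling-extremes construction purely to illustrate how bands around a cyclic price series can be turned into automated entry and exit signals. Every parameter and rule in it is an assumption, not the paper's model.

```python
import numpy as np
import pandas as pd

def cyclicality_bands(close, fast=20, slow=50, band_window=100):
    """Generic illustration (NOT the paper's price cyclicality function):
    smooth the price, measure its cyclic swing, and derive entry/exit bands
    from rolling extremes of the price series."""
    s = pd.Series(close, dtype=float)
    oscillator = s.ewm(span=fast).mean() - s.ewm(span=slow).mean()
    lower = s.rolling(band_window, min_periods=1).min()
    upper = s.rolling(band_window, min_periods=1).max()
    buy = (s <= lower.shift(1)) & (oscillator > oscillator.shift(1))   # cycle turning up at the lower band
    sell = (s >= upper.shift(1)) & (oscillator < oscillator.shift(1))  # cycle turning down at the upper band
    return pd.DataFrame({"close": s, "lower": lower, "upper": upper,
                         "buy": buy, "sell": sell})

# Example on a synthetic cyclic price series
t = np.arange(1000)
price = 100 + 5 * np.sin(2 * np.pi * t / 120) + np.cumsum(np.random.normal(0, 0.2, 1000))
signals = cyclicality_bands(price)
print(signals[["buy", "sell"]].sum())
```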

Keywords: algorithmic trading, automated trading systems, financial markets, high-frequency trading, price prediction

Procedia PDF Downloads 188
1155 A Multi-Layer Based Architecture for the Development of an Open Source CAD/CAM Integration Virtual Platform

Authors: Alvaro Aguinaga, Carlos Avila, Edgar Cando

Abstract:

This article proposes an n-layer architecture, with a web client as the front-end, for a virtual platform for process simulation on CNC machines. The open-source platform includes a CAD-CAM interface for drawing primitives, which are then used to generate a CNC program that drives a touch-screen virtual simulator. The objectives of this project are twofold. The first is educational: to offer new alternatives for the CAD-CAM/CNC learning process in undergraduate and graduate programs and in technical and technological institutes, emphasizing the development of critical skills, discussion, and collaborative work. The second combines a research and technological component that aims to advance the state of the art in CAD-CAM integration through the development of optimal algorithms and virtual platforms with on-line availability, paving the way for the long-term goals of this project: a visible and active graduate school in Ecuador and a worldwide open-innovation community in the area of CAD-CAM integration and operation of CNC machinery. The virtual platform developed as part of this study: (1) improves the training process of students, (2) creates a multidisciplinary team and a collaborative workspace that prepares the new generation of students to face future technological challenges, (3) implements industry standards for CAD/CAM, and (4) provides a platform for the development of industrial applications. A prototype of the system was developed and deployed in a network of universities and technological institutes in Ecuador.
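
A minimal sketch of the layer separation described above is shown here, assuming a CAD layer of drawing primitives, a CAM layer that plans a toolpath, and a CNC layer that emits G-code for the virtual simulator. All class and function names are hypothetical illustrations, not the platform's actual API.

```python
# Minimal sketch of the CAD -> CAM -> CNC layer separation; names are
# illustrative assumptions, not the platform's actual interfaces.
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class Line:
    """CAD layer: a drawing primitive (a straight segment)."""
    x0: float
    y0: float
    x1: float
    y1: float

def plan_toolpath(primitives: List[Line]) -> List[Tuple[float, float]]:
    """CAM layer: turn primitives into an ordered list of (x, y) tool positions."""
    path = []
    for p in primitives:
        path.append((p.x0, p.y0))
        path.append((p.x1, p.y1))
    return path

def emit_gcode(path: List[Tuple[float, float]], feed: float = 200.0) -> str:
    """CNC layer: serialise the toolpath as simple G-code for the virtual simulator."""
    lines = ["G21 ; millimetres", "G90 ; absolute positioning"]
    for x, y in path:
        lines.append(f"G1 X{x:.3f} Y{y:.3f} F{feed:.0f}")
    return "\n".join(lines)

# The web front-end would submit primitives to the CAM layer and stream the
# resulting program to the touch-screen simulator.
square = [Line(0, 0, 10, 0), Line(10, 0, 10, 10), Line(10, 10, 0, 10), Line(0, 10, 0, 0)]
print(emit_gcode(plan_toolpath(square)))
```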

Keywords: CAD-CAM integration, virtual platforms, CNC machines, multi-layer based architecture

Procedia PDF Downloads 432
1154 Characteristics and Flight Test Analysis of a Fixed-Wing UAV with Hover Capability

Authors: Ferit Çakıcı, M. Kemal Leblebicioğlu

Abstract:

In this study, the characteristics and flight tests of a fixed-wing unmanned aerial vehicle (UAV) with hover capability are analyzed. The base platform is a conventional airplane with throttle, aileron, elevator, and rudder control surfaces, which inherently allows level flight. The aircraft is then mechanically modified by integrating vertical propellers, as in multirotors, to provide hover capability. The aircraft is modeled using basic aerodynamic principles, and linear models are constructed with small perturbation theory around trim conditions. Flight characteristics are analyzed using the state-space approach of linear control theory, and the aircraft's distinctive features are discussed in comparison with conventional platform types. A hybrid control system is proposed to exploit these unique flight characteristics. The main approach consists of designing different controllers for the different modes of operation, together with a hand-over logic that makes flight across an enlarged flight envelope viable. Simulation tests performed on the mathematical models verify the proposed algorithms. Flight tests conducted in the real world confirmed that the proposed methods can exploit both the fixed-wing and rotary-wing characteristics of the aircraft, providing agility, survivability, and functionality.
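
The hand-over idea can be sketched as a small supervisor that selects between the hover controller and the fixed-wing controller. The abstract does not give the switching criterion, so the airspeed thresholds, hysteresis rule, and placeholder controller outputs below are assumptions for illustration only, not the controllers designed in the paper.

```python
# Illustrative hand-over supervisor between hover and fixed-wing modes;
# thresholds and outputs are assumptions, not the paper's design.

HOVER_EXIT_SPEED = 12.0   # m/s: assumed speed above which aerodynamic surfaces suffice
HOVER_ENTRY_SPEED = 8.0   # m/s: assumed speed below which vertical rotors take over

class HandOverLogic:
    def __init__(self):
        self.mode = "hover"

    def update(self, airspeed: float) -> str:
        """Switch modes with hysteresis to avoid chattering near the boundary."""
        if self.mode == "hover" and airspeed > HOVER_EXIT_SPEED:
            self.mode = "fixed_wing"
        elif self.mode == "fixed_wing" and airspeed < HOVER_ENTRY_SPEED:
            self.mode = "hover"
        return self.mode

def control(airspeed: float, logic: HandOverLogic) -> dict:
    """Dispatch to the controller for the current mode; outputs are placeholders."""
    mode = logic.update(airspeed)
    if mode == "hover":
        return {"mode": mode, "rotor_thrust": 0.6, "elevator": 0.0}
    return {"mode": mode, "rotor_thrust": 0.0, "elevator": -0.05}

logic = HandOverLogic()
for v in (2.0, 9.0, 13.0, 10.0, 7.0):   # simulated airspeed profile during transition
    print(v, control(v, logic))
```

The hysteresis band (two distinct thresholds) is one common way to make such a hand-over robust; the paper's own logic may differ.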

Keywords: flight test, flight characteristics, hybrid aircraft, unmanned aerial vehicle

Procedia PDF Downloads 332
1153 Scalable and Accurate Detection of Pathogens from Whole-Genome Shotgun Sequencing

Authors: Janos Juhasz, Sandor Pongor, Balazs Ligeti

Abstract:

Next-generation sequencing, especially whole-genome shotgun sequencing, is becoming a common approach for gaining insight into microbiomes in a culture-independent way, even in clinical practice. It not only gives information about the species composition of an environmental sample but also opens the possibility of detecting antimicrobial resistance and novel, or currently unknown, pathogens. Accurately and reliably detecting microbial strains is a challenging task. Here we present a sensitive approach for detecting pathogens in metagenomic samples, with special regard to detecting novel variants of known pathogens. We have developed a pipeline that uses fast short-read aligners (i.e., Bowtie2/BWA) and comprehensive nucleotide databases. Taxonomic binning is based on the lowest common ancestor (LCA) principle: each read is assigned to the taxon covering its most significantly hit taxa. This approach helps balance sensitivity against running time. The program was tested on both experimental and synthetic data. The results indicate that our method performs as well as state-of-the-art BLAST-based methods and, in some cases, even proves to be better, while running two orders of magnitude faster. It is sensitive and capable of identifying taxa present only at low abundance. Moreover, it needs two orders of magnitude fewer reads to complete the identification than MetaPhlAn2 does. We analyzed an experimental anthrax dataset (B. anthracis strain BA104): the majority of the reads (96.50%) were classified as Bacillus anthracis, and a small portion, 1.2%, was classified as other species of the Bacillus genus. We demonstrate that the evaluation of high-throughput sequencing data is feasible in a reasonable time with good classification accuracy.
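
The LCA assignment step can be illustrated with a small sketch, assuming the aligner (Bowtie2/BWA) has already produced, for each read, a list of significantly hit taxa, and that a child-to-parent taxonomy map is available. The toy taxonomy, read names, and hit lists below are placeholders, not data from the paper.

```python
# Sketch of lowest-common-ancestor (LCA) taxonomic binning; the taxonomy map
# and per-read hit lists are illustrative placeholders for what the aligner
# and reference database would provide.

def lineage(taxon, parent):
    """Path from a taxon up to the root, using a child -> parent map."""
    path = [taxon]
    while taxon in parent:
        taxon = parent[taxon]
        path.append(taxon)
    return path

def lca(taxa, parent):
    """Deepest taxon shared by the lineages of all significantly hit taxa."""
    paths = [lineage(t, parent) for t in taxa]
    common = set(paths[0])
    for p in paths[1:]:
        common &= set(p)
    # walking up from any hit, the first node in the common set is the LCA
    for node in paths[0]:
        if node in common:
            return node
    return "root"

# Toy taxonomy: species -> genus -> family -> root
parent = {
    "B. anthracis": "Bacillus", "B. cereus": "Bacillus",
    "Bacillus": "Bacillaceae", "Bacillaceae": "root",
}

# Per-read significant hits, e.g. after filtering alignments by score
read_hits = {
    "read_1": ["B. anthracis"],               # unambiguous: stays at species level
    "read_2": ["B. anthracis", "B. cereus"],  # ambiguous: promoted to genus level
}

for read, hits in read_hits.items():
    print(read, "->", lca(hits, parent))
```

Assigning ambiguous reads to the genus rather than an arbitrary species is what lets this style of binning trade a little resolution for sensitivity and speed, consistent with the balance described in the abstract.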

Keywords: metagenomics, taxonomy binning, pathogens, microbiome, B. anthracis

Procedia PDF Downloads 139