Search results for: computational methods
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 16910

16250 Computer Aided Diagnostic System for Detection and Classification of a Brain Tumor through MRI Using Level Set Based Segmentation Technique and ANN Classifier

Authors: Atanu K Samanta, Asim Ali Khan

Abstract:

Due to the acquisition of huge amounts of brain tumor magnetic resonance images (MRI) in clinics, it is very difficult for radiologists to manually interpret and segment these images within a reasonable span of time. Computer-aided diagnosis (CAD) systems can enhance the diagnostic capabilities of radiologists and reduce the time required for accurate diagnosis. An intelligent computer-aided technique for automatic detection of a brain tumor through MRI is presented in this paper. The technique uses the following computational methods: the level set method for segmentation of the brain tumor from other brain parts, the gray-level co-occurrence matrix (GLCM) for extraction of features from the segmented tumor portion, and an artificial neural network (ANN) to classify brain tumor images according to their respective types. The entire work is carried out on 50 images covering five types of brain tumor. The overall classification accuracy of the method is found to be 98%, which is notably high.
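As an illustration of the feature-extraction step described above, the following is a minimal sketch using scikit-image's GLCM utilities (function names follow scikit-image 0.19+; the input region and quantisation choices are illustrative, not the paper's settings):

```python
# Minimal sketch: GLCM texture features from a segmented tumour region.
# Function names follow scikit-image >= 0.19 (graycomatrix/graycoprops);
# the input ROI and the downstream ANN are illustrative placeholders.
import numpy as np
from skimage.feature import graycomatrix, graycoprops

def glcm_features(region, levels=32):
    """Compute GLCM texture features for an 8-bit grayscale tumour ROI."""
    # Quantise to fewer gray levels so the co-occurrence matrix stays small.
    q = (region.astype(np.float64) / 256.0 * levels).astype(np.uint8)
    glcm = graycomatrix(q, distances=[1], angles=[0, np.pi / 2],
                        levels=levels, symmetric=True, normed=True)
    props = ("contrast", "homogeneity", "energy", "correlation")
    return np.hstack([graycoprops(glcm, p).ravel() for p in props])

# Example: features for a random stand-in ROI (a real pipeline would pass
# the level-set-segmented tumour patch here, then feed an ANN classifier).
roi = np.random.randint(0, 256, (64, 64), dtype=np.uint8)
print(glcm_features(roi).shape)  # 8 features: 4 properties x 2 angles
```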

Keywords: brain tumor, computer-aided diagnostic (CAD) system, gray-level co-occurrence matrix (GLCM), tumor segmentation, level set method

Procedia PDF Downloads 513
16249 Computational Assistance for Research Using Dynamic Vector Logistics of Processes for Critical Infrastructure Subject Continuity

Authors: Urbánek Jiří J., Krahulec Josef, Urbánek Jiří F., Johanidesová Jitka

Abstract:

This paper deals with computational assistance for the research and modelling of continuity in critical infrastructure subjects, using the DYVELOP (Dynamic Vector Logistics of Processes) method. The approach enables the prevailing office software (MS Office, SmartArt, etc.) to be used for mathematical models, and it serves for the investigation and modelling of crisis situations within critical infrastructure organisations. The first part of the paper introduces the entities, operators and actors of the DYVELOP method. It uses just three operators of Boolean algebra and four types of entities: the Environments, the Process Systems, the Cases and the Controlling. The Process Systems (PrS) have five "brothers": Management PrS, Transformation PrS, Logistic PrS, Event PrS and Operation PrS. The Cases have three "sisters": Process Cell Case, Use Case and Activity Case. All of them need special Ctrl actors for the controlling of their functions, except the Environment (ENV), which can do without Ctrl. The model maps, named Blazons, can express mathematically and graphically the relationships among entities, actors and processes. In the second part of the paper, the rich blazons of the DYVELOP method are used for discovering and modelling the cycling cases and their phases; the blazons are best comprehended in a live PowerPoint presentation. The crisis management of an energy critical infrastructure organisation is obliged to use cycles for successful coping with crisis situations: repeated cycling of these cases is a necessary condition for encompassing both the emergency event and the mitigation of the organisation's damages. An uninterrupted and continuous cycling process brings fruitfulness to crisis management, and it is a good indicator and controlling actor of organisational continuity and of its possibilities for sustainable development. Reliable rules are derived for the safe and reliable continuity of an energy critical infrastructure organisation in crisis situations.

Keywords: blazons, computational assistance, DYVELOP method, critical infrastructure

Procedia PDF Downloads 384
16248 Linking Excellence in Biomedical Knowledge and Computational Intelligence Research for Personalized Management of Cardiovascular Diseases within Personal Health Care

Authors: T. Rocha, P. Carvalho, S. Paredes, J. Henriques, A. Bianchi, V. Traver, A. Martinez

Abstract:

The main goal of the LiNK project is to join competences in intelligent processing in order to create a research ecosystem addressing two central scientific and technical challenges for personal health care (PHC) deployment: i) how to merge clinical-evidence knowledge into computational decision support systems for PHC management, and ii) how to achieve personalized services, i.e., solutions adapted to the specific user's needs and characteristics. The final goal of one of the work packages (WP2), designated Sustainable Linking and Synergies for Excellence, is the definition, implementation and coordination of the activities necessary to create and strengthen durable links between the LiNK partners. This work focuses on the strategy that has been followed to define the Research Tracks (RTs), which will support a set of actions to be pursued along the LiNK project. These include common research activities, knowledge transfer among the researchers of the consortium, and co-advisement of PhD students and post-docs. Moreover, the RTs will establish the basis for the definition of concepts and their evolution into project proposals.

Keywords: LiNK Twin European Project, personal health care, cardiovascular diseases, research tracks

Procedia PDF Downloads 216
16247 CFD Simulation on Gas Turbine Blade and Effect of Twisted Hole Shape on Film Cooling Effectiveness

Authors: Thulodin Mat Lazim, Aminuddin Saat, Ammar Fakhir Abdulwahid, Zaid Sattar Kareem

Abstract:

Film cooling is one of the cooling systems investigated for application to gas turbine blades. Gas turbines use film cooling, in addition to turbulent internal cooling, to protect the outer surface of the blades from hot gases. The present study concentrates on the numerical investigation of film cooling performance for a row of twisted cylindrical holes in a modern turbine blade. The adiabatic film effectiveness and the heat transfer coefficient are determined numerically on a flat plate downstream of a row of inclined holes with different exit cross-section areas, using computational fluid dynamics (CFD). The swirling motion of the film coolant is induced by the twist angle of the film cooling holes, which are inclined at an angle α to the vertical direction and the turbine blade surface. The hole angle α relative to the impinging mainstream was varied among 90°, 65°, 45°, 30° and 20°. The film cooling effectiveness on the turbine blade wall surface was evaluated using 3D CFD. Results showed that the rectangular twisted hole has the highest effectiveness among the hole cross-section areas studied at blowing ratios of 0.5, 1, 1.5 and 2.
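For reference, the two quantities varied and reported above are commonly defined as follows (standard film-cooling definitions, stated here for clarity rather than taken from the paper):

```latex
% Adiabatic film cooling effectiveness and blowing ratio (standard definitions)
\eta = \frac{T_{\infty} - T_{aw}}{T_{\infty} - T_{c}}, \qquad
M = \frac{\rho_{c}\,U_{c}}{\rho_{\infty}\,U_{\infty}}
```

Here T∞ is the mainstream gas temperature, Taw the adiabatic wall temperature, Tc the coolant temperature at the hole exit, and the blowing ratio M compares coolant to mainstream mass flux.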

Keywords: turbine blade cooling, film cooling, geometry shape of hole, turbulent flow

Procedia PDF Downloads 541
16246 Imputation of Incomplete Large-Scale Monitoring Count Data via Penalized Estimation

Authors: Mohamed Dakki, Genevieve Robin, Marie Suet, Abdeljebbar Qninba, Mohamed A. El Agbani, Asmâa Ouassou, Rhimou El Hamoumi, Hichem Azafzaf, Sami Rebah, Claudia Feltrup-Azafzaf, Nafouel Hamouda, Wed a.L. Ibrahim, Hosni H. Asran, Amr A. Elhady, Haitham Ibrahim, Khaled Etayeb, Essam Bouras, Almokhtar Saied, Ashrof Glidan, Bakar M. Habib, Mohamed S. Sayoud, Nadjiba Bendjedda, Laura Dami, Clemence Deschamps, Elie Gaget, Jean-Yves Mondain-Monval, Pierre Defos Du Rau

Abstract:

In biodiversity monitoring, large datasets are becoming more and more widely available and are increasingly used globally to estimate species trends and conservation status. These large-scale datasets challenge existing statistical analysis methods, many of which are not adapted to their size, incompleteness and heterogeneity. The development of scalable methods to impute missing data in incomplete large-scale monitoring datasets is crucial to balance sampling in time or space and thus better inform conservation policies. We developed a new method based on penalized Poisson models to impute and analyse incomplete monitoring data in a large-scale framework. The method allows parameterization of (a) space and time factors, (b) the main effects of predictor covariates, as well as (c) space-time interactions. It also benefits from robust statistical and computational capability in large-scale settings. The method was tested extensively on both simulated and real-life waterbird data, with the findings revealing that it outperforms six existing methods in terms of missing-data imputation errors. Applying the method to 16 waterbird species, we estimated their long-term trends for the first time at the scale of all of North Africa, a region where monitoring data suffer from many gaps in space and time series. This new approach opens promising perspectives to increase the accuracy of species-abundance trend estimations. We made it freely available in the R package 'lori' (https://CRAN.R-project.org/package=lori) and recommend its use for large-scale count data, particularly in citizen-science monitoring programmes.
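The authors' estimator is implemented in the R package 'lori'; as a language-agnostic illustration of the underlying idea only (a penalized Poisson model with site and year effects used to predict missing counts), here is a sketch with scikit-learn's L2-penalized Poisson regressor. The data layout and column names are invented:

```python
# Illustrative only: impute missing counts with a penalized Poisson model
# (site + year main effects). The authors' actual estimator is the R
# package 'lori'; this sketch just mirrors the general idea.
import numpy as np
import pandas as pd
from sklearn.linear_model import PoissonRegressor

rng = np.random.default_rng(0)
df = pd.DataFrame({"site": rng.integers(0, 20, 500),
                   "year": rng.integers(2000, 2020, 500)})
df["count"] = rng.poisson(5 + df["site"] % 7 + (df["year"] - 2000) // 5).astype(float)
df.loc[rng.choice(500, 100, replace=False), "count"] = np.nan  # survey gaps

X = pd.get_dummies(df[["site", "year"]].astype("category"), dtype=float)
obs = df["count"].notna()
model = PoissonRegressor(alpha=1.0)              # alpha = ridge penalty strength
model.fit(X[obs], df.loc[obs, "count"])
df.loc[~obs, "count"] = model.predict(X[~obs])   # imputed counts for the gaps
```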

Keywords: biodiversity monitoring, high-dimensional statistics, incomplete count data, missing data imputation, waterbird trends in North Africa

Procedia PDF Downloads 157
16245 Vocal Training and Practice Methods: A Glimpse on the South Indian Carnatic Music

Authors: Raghavi Janaswamy, Saraswathi K. Vasudev

Abstract:

Music is one of the supreme arts of expression, next to speech itself. Its evolution over centuries has paved the way for a variety of training protocols and performing methods. Indian classical music is one of the most elaborate and refined systems, with immense emphasis on voice culture related to range, breath control, quality of tone, flexibility and diction. Several exercises, namely saraliswaram, jantaswaram, dhatuswaram, upper stayi swaram, alamkaras and varnams, lay the required foundation for voice culture and a deeper understanding of voice development, and further of the intricacies of the raga system. This article narrates a few of the Carnatic music training methods, with an emphasis on advanced practice methods for articulating vocal skills, continuity in the voice, the ability to produce gamakams, and command over the multiple speeds of rendering at reasonable volume. The creativity of these exercises and their impact on voice production are discussed. The articulation of the outlined conscious practice methods and vocal exercises bestows optimal use of the natural human vocal system, not only enhancing singing quality but also bringing health benefits.

Keywords: Carnatic music, Saraliswaram, Varnam, vocal training

Procedia PDF Downloads 179
16244 Constraint-Based Computational Modelling of Bioenergetic Pathway Switching in Synaptic Mitochondria from Parkinson's Disease Patients

Authors: Diana C. El Assal, Fatima Monteiro, Caroline May, Peter Barbuti, Silvia Bolognin, Averina Nicolae, Hulda Haraldsdottir, Lemmer R. P. El Assal, Swagatika Sahoo, Longfei Mao, Jens Schwamborn, Rejko Kruger, Ines Thiele, Kathrin Marcus, Ronan M. T. Fleming

Abstract:

Degeneration of substantia nigra pars compacta dopaminergic neurons is one of the hallmarks of Parkinson's disease. These neurons have a highly complex axonal arborisation and a high energy demand, so any reduction in ATP synthesis could lead to an imbalance between supply and demand, thereby impeding normal neuronal bioenergetic requirements. Synaptic mitochondria exhibit increased vulnerability to dysfunction in Parkinson's disease. After biogenesis in and transport from the cell body, synaptic mitochondria become highly dependent upon oxidative phosphorylation. We applied a systems biochemistry approach to identify the metabolic pathways used by neuronal mitochondria for energy generation. The mitochondrial component of an existing manual reconstruction of human metabolism was extended with manual curation of the biochemical literature and specialised using omics data from Parkinson's disease patients and controls, to generate reconstructions of synaptic and somal mitochondrial metabolism. These reconstructions were converted into stoichiometrically and flux-consistent constraint-based computational models. These models predict that Parkinson's disease is accompanied by an increase in the rate of glycolysis and a decrease in the rate of oxidative phosphorylation within synaptic mitochondria. This is consistent with independent experimental reports of a compensatory switching of bioenergetic pathways in the putamen of post-mortem Parkinson's disease patients. Ongoing work, in the context of the SysMedPD project, is aimed at computational prediction of mitochondrial drug targets to slow the progression of neurodegeneration in the subset of Parkinson's disease patients with overt mitochondrial dysfunction.
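Constraint-based models of this kind are typically analysed with flux balance analysis (FBA), a linear programme over the stoichiometric matrix. A toy sketch of the computation (the three-reaction network is invented for illustration; real reconstructions like the one above contain thousands of reactions):

```python
# Toy flux balance analysis: maximise flux through an objective reaction
# subject to steady state S v = 0 and flux bounds on each reaction.
import numpy as np
from scipy.optimize import linprog

# Invented 2-metabolite, 3-reaction chain: uptake -> conversion -> objective
S = np.array([[1, -1,  0],     # metabolite A: produced by R1, consumed by R2
              [0,  1, -1]])    # metabolite B: produced by R2, consumed by R3
bounds = [(0, 10), (0, 5), (0, None)]  # uptake capped at 10, enzyme cap of 5
c = np.array([0, 0, -1])               # linprog minimises, so maximise v3 via -v3

res = linprog(c, A_eq=S, b_eq=np.zeros(2), bounds=bounds)
print(res.x)   # optimal flux distribution, here [5, 5, 5]
```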

Keywords: bioenergetics, mitochondria, Parkinson's disease, systems biochemistry

Procedia PDF Downloads 295
16243 Preparation of MgO Nanoparticles by Green Methods

Authors: Maryam Sabbaghan, Pegah Sofalgar

Abstract:

Over the past few decades, a significant amount of research activity in the chemical community has been directed towards green synthesis. This area of chemistry has received extensive attention because its processes are environmentally benign as well as economically viable. In this article, MgO nanoparticles were prepared by different methods in the presence of ionic liquids. A wide range of magnesium oxide particle sizes within the nanometer scale is obtained by these methods. The structure of the MgO particles was studied using X-ray diffraction (XRD), infrared spectroscopy (IR), and scanning electron microscopy (SEM). It was found that the formation of the nanoparticles could involve the role of pre-formed nuclei, with the template used to control the growth rate of the nuclei. The crystallite size of the MgO products ranged from 31 to 77 nm.
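Crystallite sizes in this range are conventionally estimated from XRD peak broadening via the Scherrer equation; a quick sketch of that calculation (the peak position and width below are illustrative values, not the paper's data):

```python
# Scherrer estimate of crystallite size from XRD peak broadening:
# D = K * lambda / (beta * cos(theta)). Input values are illustrative.
import math

def scherrer(two_theta_deg, fwhm_deg, wavelength_nm=0.15406, K=0.9):
    theta = math.radians(two_theta_deg / 2)
    beta = math.radians(fwhm_deg)          # peak FWHM in radians
    return K * wavelength_nm / (beta * math.cos(theta))

# e.g. the MgO (200) reflection near 2-theta = 42.9 deg with Cu K-alpha
print(f"{scherrer(42.9, fwhm_deg=0.25):.1f} nm")   # ~34 nm, within 31-77 nm
```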

Keywords: MgO, ionic liquid, nanoparticles, green chemistry

Procedia PDF Downloads 291
16242 A Survey on Concurrency Control Methods in Distributed Database

Authors: Seyed Mohsen Jameii

Abstract:

In recent years, remarkable improvements have been made in the performance of distributed database systems. A distributed database is composed of several sites connected to each other through network connections. In such a system, if transactions are not properly coordinated, the database may become inconsistent. Nowadays, because of the number of sites and the complexity of their connection methods, it is difficult to extend different models in a distributed database serially. The principal goal of concurrency control in a distributed database is to ensure that concurrent access to the common database by different sites does not cause interference. Different concurrency control algorithms have been suggested for use in distributed database systems. In this paper, some available methods for concurrency control in distributed databases are introduced and compared.
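A minimal sketch of the two-phase locking idea named in the keywords (a growing phase that only acquires locks, then a shrinking phase that only releases them); the lock-manager interface is simplified for illustration:

```python
# Minimal strict two-phase locking sketch: a transaction may acquire locks
# only while growing; once it starts releasing (here, at commit) it may
# not acquire any more, which guarantees conflict-serializability.
import threading

class Transaction:
    def __init__(self, name):
        self.name, self.held, self.shrinking = name, [], False

    def lock(self, resource: threading.Lock):
        if self.shrinking:
            raise RuntimeError("2PL violation: acquire after release")
        resource.acquire()          # blocks if another transaction holds it
        self.held.append(resource)

    def commit(self):
        self.shrinking = True       # enter the shrinking phase
        for r in reversed(self.held):
            r.release()
        self.held.clear()

a, b = threading.Lock(), threading.Lock()
t = Transaction("T1")
t.lock(a); t.lock(b)   # growing phase
t.commit()             # strict 2PL: everything released together at commit
```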

Keywords: distributed database, two phase locking protocol, transaction, concurrency

Procedia PDF Downloads 352
16241 Design of Active Power Filters for Harmonics on Power System and Reducing Harmonic Currents

Authors: Düzgün Akmaz, Hüseyin Erişti

Abstract:

In the last few years, harmonics have appeared with the increasing use of nonlinear loads, and these harmonics have been an ever-increasing problem for line systems. This situation significantly affects power quality and causes large losses in the network. An efficient way to solve these problems is to provide harmonic compensation through parallel active power filters. Many methods can be used in the control systems of the parallel active power filters that provide the compensation, and these methods strongly affect the performance of the filters. For this reason, the choice of control method is significant. In this study, the Fourier analysis (FA) control method and the synchronous reference frame (SRF) control method are discussed. These control methods are designed and tested in the MATLAB/Simulink package, both to eliminate harmonics and to perform reactive power compensation, and the results of the two methods are compared.
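In the SRF approach, three-phase load currents are rotated into dq coordinates at the fundamental frequency, where the fundamental appears as a DC component; low-pass filtering and subtracting isolates the harmonic content the filter must inject. A compact sketch with synthetic signals (parameters are illustrative):

```python
# SRF harmonic extraction sketch: Park transform at the fundamental
# frequency, low-pass to keep the DC (fundamental) part, subtract to get
# the harmonic reference for the active filter. Signals are synthetic.
import numpy as np

f1, fs = 50.0, 10_000.0
t = np.arange(0.0, 0.2, 1.0 / fs)
theta = 2 * np.pi * f1 * t
phases = np.array([0.0, -2 * np.pi / 3, 2 * np.pi / 3])[:, None]
# Synthetic balanced load current: fundamental plus a 5th harmonic
i_abc = np.sin(theta + phases) + 0.2 * np.sin(5 * (theta + phases))

# Park (abc -> dq) transform, amplitude-invariant form
i_d = (2 / 3) * np.sum(i_abc * np.cos(theta + phases), axis=0)
i_q = -(2 / 3) * np.sum(i_abc * np.sin(theta + phases), axis=0)

w = int(fs / f1)                        # one fundamental cycle of samples
lp = lambda x: np.convolve(x, np.ones(w) / w, mode="same")
i_d_h, i_q_h = i_d - lp(i_d), i_q - lp(i_q)   # oscillating = harmonic part
print(np.abs(i_d_h[w:-w]).max())        # ~0.2: the 5th-harmonic content
```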

Keywords: parallel active power filters, harmonic compensation, power quality, harmonics

Procedia PDF Downloads 459
16240 Syntax and Words as Evolutionary Characters in Comparative Linguistics

Authors: Nancy Retzlaff, Sarah J. Berkemer, Trudie Strauss

Abstract:

In the last couple of decades, the advent of digitalization of all kinds of data was probably one of the major advances in all fields of study. This paves the way for analysing such data even when they come from disciplines where there was initially no computational necessity to do so. In linguistics especially, one finds a rather manual tradition. Still, when considering studies that involve the history of language families, it is hard to overlook the striking similarities to bioinformatic (phylogenetic) approaches. Alignments of words are a fairly well-studied example of applying bioinformatics methods to historical linguistics. In this paper, we consider not only alignments of strings, i.e., words in this case, but also alignments of syntax trees of selected Indo-European languages. Based on initial, crude alignments, a sophisticated scoring model is trained on both letters and syntactic features. The aim is to gain a better understanding of which features in two languages are related, i.e., most likely to share the same root. Initially, all words in two languages are pre-aligned with a basic scoring model that primarily matches consonants and adjusts them before fitting in the vowels. Mixture models are subsequently used to filter 'good' alignments depending on the alignment length and the number of inserted gaps. Using these selected word alignments, it is possible to perform tree alignments of the given syntax trees and consequently find sentences that correspond rather well to each other across languages. The syntax alignments are then filtered for meaningful scores: 'good' scores contain evolutionary information and are therefore used to train the sophisticated scoring model. Further iterations of alignment and training steps are performed until the scoring model saturates, i.e., barely changes anymore. A better evaluation of the trained scoring model and its role in capturing evolutionarily meaningful information will be given. An assessment of sentence alignment compared to possible phrase structure will also be provided. The method described here may have its flaws because of limited prior information. It may, however, offer a good starting point for studying languages where only little prior knowledge is available and a detailed, unbiased study is needed.
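The pre-alignment step described above (match consonants first, tolerate gaps) is a classic global alignment; here is a minimal Needleman-Wunsch sketch with a consonant-biased score, where the score values are invented placeholders for the iteratively trained model:

```python
# Needleman-Wunsch global alignment of two words with a crude scoring
# model that rewards consonant matches more than vowel matches; the
# numeric scores are placeholders for the trained model described above.
VOWELS = set("aeiou")

def score(x, y):
    if x == y:
        return 2 if x not in VOWELS else 1   # consonant match beats vowel match
    return -1

def align(a, b, gap=-2):
    n, m = len(a), len(b)
    F = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1): F[i][0] = i * gap
    for j in range(1, m + 1): F[0][j] = j * gap
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            F[i][j] = max(F[i-1][j-1] + score(a[i-1], b[j-1]),
                          F[i-1][j] + gap, F[i][j-1] + gap)
    return F[n][m]

# A cognate-like pair scores higher than an unrelated pair:
print(align("pater", "padre"), align("pater", "lingua"))
```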

Keywords: alignments, bioinformatics, comparative linguistics, historical linguistics, statistical methods

Procedia PDF Downloads 154
16239 Conceptual and Funnel Methods' Contribution to Critical Literature Review: PhD Construction Management

Authors: Samuel Quashie

Abstract:

This study demonstrates the applicability and contribution of conceptual and funnelling methods during the literature review stages of a PhD in Construction Management, which focused on the development of an integrated management system for post-disaster reconstruction. The conceptual review method builds upon the strengths of relevant material, detailing the major points and areas covered, and evaluates less relevant literature. Publications are reviewed in an integrated style, challenging the scientific theory and seeking to develop new insights. The funnel method groups reviews by commonality, regardless of the topic or thesis statement, and shows that the literature review draws on different kinds of information to increase the variety and diversity of the investigation. The results demonstrate the ability of the conceptual and funnel methods to review and appraise the relevant literature, to put it into an integrated style, and to allow an evaluation of the credentials, originality, theory base, context and significance of quality work to emerge. The objectives of the review are met, gaps in knowledge are identified, and further studies are directed to answer the research questions.

Keywords: Ph.D, construction management, critical literature review, conceptual and funnel methods

Procedia PDF Downloads 416
16238 Performance and Limitations of Likelihood Based Information Criteria and Leave-One-Out Cross-Validation Approximation Methods

Authors: M. A. C. S. Sampath Fernando, James M. Curran, Renate Meyer

Abstract:

Model assessment, in the Bayesian context, involves evaluation of the goodness-of-fit and the comparison of several alternative candidate models for predictive accuracy and improvements. In posterior predictive checks, the data simulated under the fitted model are compared with the actual data. Predictive model accuracy is estimated using information criteria such as the Akaike information criterion (AIC), the Bayesian information criterion (BIC), the deviance information criterion (DIC), and the Watanabe-Akaike information criterion (WAIC). The goal of an information criterion is to obtain an unbiased measure of out-of-sample prediction error. Since posterior checks use the data twice (once for model estimation and once for testing), a bias correction which penalises model complexity is incorporated in these criteria. Cross-validation (CV) is another method used for examining out-of-sample prediction accuracy. Leave-one-out cross-validation (LOO-CV) is the most computationally expensive variant among the CV methods, as it fits as many models as the number of observations. Importance sampling (IS), truncated importance sampling (TIS) and Pareto-smoothed importance sampling (PSIS) are generally used as approximations to the exact LOO-CV and utilise the existing MCMC results, avoiding expensive recomputation. The reciprocals of the predictive densities calculated over posterior draws for each observation are treated as the raw importance weights. These are in turn used to calculate the approximate LOO-CV of the observation as a weighted average of posterior densities. In IS-LOO, the raw weights are used directly; in contrast, the larger weights are replaced by their modified truncated weights in calculating TIS-LOO and PSIS-LOO. Although information criteria and LOO-CV are unable to reflect the goodness-of-fit in an absolute sense, the differences can be used to measure the relative performance of the models of interest. However, the use of these measures is only valid under specific circumstances. This study developed 11 models using normal, log-normal, gamma, and Student's t distributions to improve PCR stutter prediction with forensic data. These models comprise four with profile-wide variances, four with locus-specific variances, and three two-component mixture models. The mean stutter ratio in each model is modeled as a locus-specific simple linear regression against a feature of the alleles under study known as the longest uninterrupted sequence (LUS). The use of AIC, BIC, DIC, and WAIC in model comparison has some practical limitations. Even though IS-LOO, TIS-LOO, and PSIS-LOO are considered approximations of the exact LOO-CV, the study observed some drastic deviations in the results. However, there are some interesting relationships among the logarithms of pointwise predictive densities (lppd) calculated under WAIC and the LOO approximation methods. The estimated overall lppd is a relative measure that reflects the overall goodness-of-fit of the model. Parallel log-likelihood profiles for the models, conditional on equal posterior variances in lppds, were observed. This study illustrates the limitations of the information criteria in practical model comparison problems. In addition, the relationships among the LOO-CV approximation methods and WAIC, with their limitations, are discussed. Finally, useful recommendations that may help in practical model comparisons with these methods are provided.
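Given an S x n matrix of pointwise log-likelihoods over posterior draws, lppd and WAIC follow directly, and the raw IS-LOO weights are the reciprocal predictive densities; a minimal sketch (the synthetic matrix stands in for real MCMC output):

```python
# lppd and WAIC from an (S draws x n observations) log-likelihood matrix.
# WAIC = -2 * (lppd - p_waic), where p_waic is the summed posterior
# variance of the pointwise log-likelihoods. The matrix is synthetic.
import numpy as np
from scipy.special import logsumexp

rng = np.random.default_rng(1)
log_lik = rng.normal(-1.0, 0.3, size=(4000, 50))   # stand-in for MCMC output

S = log_lik.shape[0]
lppd = np.sum(logsumexp(log_lik, axis=0) - np.log(S))  # pointwise, summed
p_waic = np.sum(np.var(log_lik, axis=0, ddof=1))       # effective parameters
waic = -2 * (lppd - p_waic)
print(lppd, p_waic, waic)

# Raw importance weights for IS-LOO are reciprocal predictive densities;
# TIS-LOO and PSIS-LOO truncate or Pareto-smooth the largest of these.
log_weights = -log_lik
```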

Keywords: cross-validation, importance sampling, information criteria, predictive accuracy

Procedia PDF Downloads 393
16237 Similarity Based Membership of Elements to Uncertain Concept in Information System

Authors: M. Kamel El-Sayed

Abstract:

The process of determining the degree of membership of an element in an uncertain concept has been addressed in many ways, using equivalence and similarity relations in information systems. In the case of similarity, these methods did not take into account the degree of similarity between elements. In this paper, we use a new definition for finding the membership based on the degree of similarity. We provide an example to clarify the suggested method and compare it with previous methods. This method opens the door to more accurate decisions in information systems.

Keywords: information system, uncertain concept, membership function, similarity relation, degree of similarity

Procedia PDF Downloads 224
16236 Review and Evaluation of Trending Canonical Correlation Analyses-Based Brain Computer Interface Methods

Authors: Bayar Shahab

Abstract:

The fast development of technology that has advanced neuroscience and human interaction with computers has enabled solutions to various problems, and the issues of this new era are being uncovered like at no other time in history. The brain-computer interface (BCI) has opened the door to several new research areas and has provided solutions to critical and important issues, such as supporting a paralyzed patient in interacting with the outside world, controlling a robot arm, playing games in VR with the brain, or driving a wheelchair or even a car; neurotechnology has also enabled the rehabilitation of lost memory, etc. This review presents state-of-the-art methods and improvements of canonical correlation analysis (CCA), an SSVEP-based BCI method. These methods are used to extract EEG signal features or, put differently, the features of interest that we are looking for in EEG analyses. Each of the methods, from oldest to newest, is discussed, comparing their advantages and disadvantages. This creates a useful context and helps researchers understand the most state-of-the-art methods available in this field, with their pros and cons, along with their mathematical representations and usage. This work makes a vital contribution to the existing field of study. It differs from other similar recently published works by providing the following: (1) most of the prominent methods used in this field, stated in a hierarchical way; (2) the pros, cons, and performance of each method; and (3) the gaps that exist at the end of each method that can open the understanding and doors to new research and/or improvements.
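Standard CCA-based SSVEP detection (the baseline that the reviewed methods improve upon) correlates a multichannel EEG window with sine-cosine reference templates at each candidate stimulus frequency and picks the frequency with the largest canonical correlation; a compact sketch with synthetic data:

```python
# Standard CCA-based SSVEP frequency detection: correlate an EEG window
# with sine/cosine references at each candidate frequency and pick the
# frequency with the highest first canonical correlation. Data: synthetic.
import numpy as np
from sklearn.cross_decomposition import CCA

fs, dur = 250, 2.0
t = np.arange(0, dur, 1 / fs)

def references(f, harmonics=2):
    return np.column_stack([fn(2 * np.pi * h * f * t)
                            for h in range(1, harmonics + 1)
                            for fn in (np.sin, np.cos)])

# Synthetic 8-channel EEG with a 10 Hz SSVEP buried in noise
rng = np.random.default_rng(0)
eeg = rng.normal(0, 1, (len(t), 8)) + np.outer(np.sin(2 * np.pi * 10 * t),
                                               rng.uniform(0.3, 1.0, 8))

def cca_corr(X, Y):
    U, V = CCA(n_components=1).fit_transform(X, Y)
    return np.corrcoef(U[:, 0], V[:, 0])[0, 1]

scores = {f: cca_corr(eeg, references(f)) for f in (8.0, 10.0, 12.0, 15.0)}
print(max(scores, key=scores.get))   # expected: 10.0
```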

Keywords: BCI, CCA, SSVEP, EEG

Procedia PDF Downloads 145
16235 Non-Centrifugal Cane Sugar Production: Heat Transfer Study to Optimize the Use of Energy

Authors: Fabian Velasquez, John Espitia, Henry Hernadez, Sebastian Escobar, Jader Rodriguez

Abstract:

Non-centrifugal cane sugar (NCS) is a concentrated product obtained through the evaporation of the water content of sugarcane juice in open heat exchangers (OE). The heat supplied to the evaporation stages is obtained from cane bagasse through the thermochemical process of combustion, with the released thermal energy transferred to the OE by the flue gas. Therefore, the optimization of energy usage becomes essential for the proper design of the production process. To optimize energy use, it is necessary to model and simulate the heat transfer between the combustion gases and the juice and to understand the major mechanisms involved. The main objective of this work was to simulate the heat transfer phenomena between the flue gas and the open heat exchangers using a computational fluid dynamics (CFD) model. The simulation results were compared to field-measured data. The numerical temperature profiles along the flue gas pipeline at the measurement points are in good accordance with the field measurements. This study could therefore be of special interest for the design of the NCS production process and the optimization of its energy use.

Keywords: mathematical modeling, design variables, computational fluid dynamics, overall thermal efficiency

Procedia PDF Downloads 125
16234 A Review on Existing Challenges of Data Mining and Future Research Perspectives

Authors: Hema Bhardwaj, D. Srinivasa Rao

Abstract:

Technology for analysing, processing, and extracting meaningful data from enormous and complicated datasets can be termed 'big data'. Big data mining and analysis are extremely helpful for business activities such as making decisions, building organisational plans, researching the market efficiently, and improving sales, because typical management tools cannot handle such complicated datasets. Big data brings special computational and statistical issues, such as measurement errors, noise accumulation, spurious correlation, and storage and scalability limitations. These unique problems call for new computational and statistical paradigms. This research paper offers an overview of the literature on big data mining and its process, along with its problems and difficulties, with a focus on the unique characteristics of big data. Organizations face several difficulties when undertaking data mining, which has an impact on their decision-making. Every day, terabytes of data are produced, yet only around 1% of that data is actually analyzed. This study presents the concepts of data mining and analysis and the knowledge discovery techniques that have recently been developed, together with practical application systems. The conclusion also includes a list of issues and difficulties for further research in the area, and the main big data and data mining challenges for management are discussed.

Keywords: big data, data mining, data analysis, knowledge discovery techniques, data mining challenges

Procedia PDF Downloads 110
16233 Calculation of Orbital Elements for Sending Interplanetary Probes

Authors: Jorge Lus Nisperuza Toledo, Juan Pablo Rubio Ospina, Daniel Santiago Umana, Hector Alejandro Alvarez

Abstract:

This work develops and implements computational codes to calculate optimal launch trajectories for sending a probe from the Earth to different planets of the Solar System, making use of Hohmann and non-Hohmann trajectories and of gravitational assistance at intermediate steps. Specifically, the orbital elements, graphs and dynamic simulations of the trajectories for sending a probe from the Earth towards the planets Mercury, Venus, Mars, Jupiter, and Saturn are obtained. A detailed study was made of the position and orbital-velocity state vectors of the planets considered in order to determine the optimal trajectories of the probe. For this purpose, computational codes were developed and implemented to obtain the orbital elements of the Mariner 10 (Mercury), Magellan (Venus), Mars Global Surveyor (Mars) and Voyager 1 (Jupiter and Saturn) missions, as an exercise in corroborating the algorithms. This exercise validates the computational codes, allowing us to find the orbital elements and trajectory simulations of three future interplanetary missions with specific launch windows.
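A Hohmann transfer between two circular, coplanar orbits has a closed-form delta-v; a short sketch for an Earth-to-Mars heliocentric transfer under the circular-orbit approximation, with standard constants:

```python
# Delta-v and time of flight of a Hohmann transfer between circular,
# coplanar heliocentric orbits, here Earth -> Mars (an approximation).
import math

MU_SUN = 1.32712440018e11   # Sun's gravitational parameter, km^3/s^2
R1, R2 = 1.496e8, 2.279e8   # Earth and Mars mean orbital radii, km

a_t = (R1 + R2) / 2                                  # transfer ellipse
v1 = math.sqrt(MU_SUN / R1)                          # circular speed at R1
v_peri = math.sqrt(MU_SUN * (2 / R1 - 1 / a_t))      # transfer perihelion speed
v2 = math.sqrt(MU_SUN / R2)
v_apo = math.sqrt(MU_SUN * (2 / R2 - 1 / a_t))       # transfer aphelion speed

dv = (v_peri - v1) + (v2 - v_apo)
tof_days = math.pi * math.sqrt(a_t**3 / MU_SUN) / 86400
print(f"delta-v = {dv:.2f} km/s, time of flight = {tof_days:.0f} days")
# ~5.6 km/s and ~259 days for the Earth-Mars case
```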

Keywords: gravitational assistance, Hohmann’s trajectories, interplanetary mission, orbital elements

Procedia PDF Downloads 183
16232 TessPy – Spatial Tessellation Made Easy

Authors: Jonas Hamann, Siavash Saki, Tobias Hagen

Abstract:

Discretization of urban areas is a crucial aspect of many spatial analyses. The process of discretizing space into subspaces without overlaps or gaps is called tessellation. It helps in understanding spatial structure and provides a framework for analyzing geospatial data. Tessellation methods can be divided into two groups: regular tessellations and irregular tessellations. While regular tessellation methods, like square grids or hexagon grids, are suitable for addressing pure geometry problems, they cannot take the unique characteristics of different subareas into account. Irregular tessellation methods, however, allow the borders between subareas to be defined more realistically based on urban features like a road network or points of interest (POI). Even though Python is one of the most-used programming languages for spatial analysis, there is currently no library that combines different tessellation methods to enable users and researchers to compare different techniques. To close this gap, we propose TessPy, an open-source Python package, which combines all the above-mentioned tessellation methods and makes them easily accessible to everyone. The core functions of TessPy implement five different tessellation methods: squares, hexagons, adaptive squares, Voronoi polygons, and city blocks. Using the regular methods, users can set the resolution of the tessellation, which defines the fineness of the discretization and the desired number of tiles. The irregular tessellation methods allow users to define which spatial data to consider (e.g., amenity, building, office) and how fine the tessellation should be. The spatial data used are open-source and provided by OpenStreetMap, and can easily be extracted and used for further analyses. Besides the methodology of the different techniques, the state of the art, including examples and future work, will be discussed. All dependencies can be installed using conda or pip; the former is recommended.
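As an illustration of the simplest regular-grid case (TessPy's own API is not reproduced here), a square tessellation of a bounding box can be built in a few lines with shapely:

```python
# Regular square-grid tessellation of a bounding box with shapely,
# illustrating the simplest of the five methods named above; TessPy
# wraps this kind of operation (plus hexagons, adaptive squares,
# Voronoi polygons, and city blocks) behind one interface.
import numpy as np
from shapely.geometry import box

def square_grid(minx, miny, maxx, maxy, cell):
    xs = np.arange(minx, maxx, cell)
    ys = np.arange(miny, maxy, cell)
    return [box(x, y, x + cell, y + cell) for x in xs for y in ys]

# ~1 km cells (roughly, in degrees) over a small urban bounding box
tiles = square_grid(8.60, 50.08, 8.75, 50.17, cell=0.01)
print(len(tiles), tiles[0].wkt[:40])
```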

Keywords: geospatial data science, geospatial data analysis, tessellations, urban studies

Procedia PDF Downloads 128
16231 Weighted Risk Scores Method Proposal for Occupational Safety Risk Assessment

Authors: Ulas Cinar, Omer Faruk Ugurlu, Selcuk Cebi

Abstract:

Occupational safety risk management is the most important element of a safe working environment, and effective risk management is only possible with accurate analysis and evaluation. Scoring-based risk assessment methods offer considerable ease of application, as they convert linguistic expressions into numerical results, and they can easily be adapted to any field. Despite all these advantages, important problems with scoring-based methods are frequently discussed, and effective measurability is one of the most critical. Existing methods allow experts to choose a score for each parameter; experts therefore prefer the score of the most likely outcome of the risk, and all other possible consequences are neglected. As a result, assessments with the existing methods express the most probable level of risk, not the real risk of the enterprise. This study aims to develop a method that presents a more comprehensive evaluation than the existing methods by weighting the probability and severity scores over all sub-parameters and potential outcomes, and a new scoring-based method is proposed to the literature.
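The core proposal (weight all potential outcomes instead of scoring only the most likely one) amounts to replacing a single severity pick with an expectation; a small sketch of the contrast, with invented probabilities and severity scale:

```python
# Contrast between a conventional single-outcome risk score and a
# weighted score that averages over all potential consequences.
# Probabilities and the severity scale below are invented for illustration.
likelihood = 0.6                       # probability the event occurs at all
# Possible consequences if it occurs: (severity score, probability)
outcomes = [(1, 0.50), (4, 0.35), (10, 0.15)]   # minor / serious / fatal

conventional = likelihood * max(outcomes, key=lambda o: o[1])[0]
weighted = likelihood * sum(s * p for s, p in outcomes)

print(conventional)   # 0.6  -> scores only the most likely (minor) outcome
print(weighted)       # 2.04 -> rarer severe outcomes raise the score
```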

Keywords: occupational health and safety, risk assessment, scoring based risk assessment method, underground mining, weighted risk scores

Procedia PDF Downloads 136
16230 LGG Architecture for Brain Tumor Segmentation Using Convolutional Neural Network

Authors: Sajeeha Ansar, Asad Ali Safi, Sheikh Ziauddin, Ahmad R. Shahid, Faraz Ahsan

Abstract:

The most aggressive form of brain tumor is glioma, a kind of tumor that arises from the glial tissue of the brain and occurs quite often. A fully automatic 2D-CNN model for brain tumor segmentation is presented in this paper. We performed pre-processing steps to remove noise and intensity variances using N4ITK and standard intensity correction, respectively. We used the Keras open-source library with Theano as backend for fast implementation of the CNN model, the BRATS 2015 MRI dataset to evaluate the proposed model, and the SimpleITK open-source library to analyze the images. Moreover, we extracted random 2D patches for the proposed 2D-CNN model for efficient brain segmentation. We extracted 2D patches instead of 3D ones because the lower-dimensional information in 2D helps reduce computational time. The Dice similarity coefficient (DSC) is used as the performance measure for the evaluation of the proposed method. Our method achieved DSC scores of 0.77 for the complete, 0.76 for the core, and 0.77 for the enhanced tumor regions. These results are comparable with those of previously implemented 2D CNN architectures.
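A minimal patch-classification CNN of the general shape described above, written against current Keras (the paper used a Theano backend; layer sizes, patch dimensions, and the 5-class output here are illustrative choices, not the paper's exact architecture):

```python
# Minimal 2D patch CNN sketch in Keras for voxel-wise tumour labelling:
# each 33x33 MRI patch is classified by the label of its centre pixel.
# Layer sizes and the class count are illustrative assumptions.
from tensorflow import keras
from tensorflow.keras import layers

model = keras.Sequential([
    layers.Input(shape=(33, 33, 4)),          # 4 MRI modalities per patch
    layers.Conv2D(32, 3, activation="relu"),
    layers.Conv2D(32, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Conv2D(64, 3, activation="relu"),
    layers.Flatten(),
    layers.Dense(128, activation="relu"),
    layers.Dropout(0.5),
    layers.Dense(5, activation="softmax"),    # background + 4 tumour classes
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
```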

Keywords: brain tumor segmentation, convolutional neural networks, deep learning, LGG

Procedia PDF Downloads 183
16222 Application of Extraction Chromatography to the Separation of Sc, Zr and Sn Isotopes from Target Materials

Authors: Steffen Happel

Abstract:

Non-standard isotopes such as Sc-44/47, Zr-89, and Sn-117m are finding increasing interest in radiopharmaceutical applications. Methods for the separation of these elements from typical target materials were developed. The methods used in this paper are based on extraction chromatographic resins such as UTEVA, TBP, and DGA resin. Information on the selectivity of the resins (Dw values of selected elements in HCl and HNO3 of varying concentration) will be presented, as well as results of the method development such as elution studies, chemical recoveries, and decontamination factors. The developed methods are based on vacuum-supported separation, allowing for fast and selective separations.

Keywords: elution, extraction chromatography, radiopharmacy, decontamination factors

Procedia PDF Downloads 469
16228 Analysis of the Aquifer Vulnerability of a Mio-Pliocene Arid Area Using DRASTIC and SI Models

Authors: H. Majour, L. Djabri

Abstract:

Many groundwater vulnerability methods have been developed around the world (methods like PRAST, DRIST, APRON/ARAA, PRASTCHIM, GOD). In this study, our choice fell on two recent complementary methods using index category mapping with weighting criteria (point count system models, PCSM), namely the standard DRASTIC method and SI (Susceptibility Index). At present, these two methods are the most used for mapping the intrinsic vulnerability of groundwater. Two classes of groundwater vulnerability in the Biskra sandy aquifer were identified by the DRASTIC method (average and high) and by the SI method (very high and high). Integrated analysis revealed that the high class is predominant for the DRASTIC method, whereas for SI the very high class preponderates. Furthermore, we notice that the SI method better estimates the vulnerability to nitrate pollution, with an 85% agreement rate between groundwater nitrate concentrations and the established vulnerability classes, against 75% for the DRASTIC method. By including the land-use parameter, the SI method produced more realistic results.
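DRASTIC combines seven hydrogeological parameters as a weighted sum of ratings; a small sketch using the method's standard weights (the example ratings are invented, not the study's values):

```python
# DRASTIC vulnerability index: weighted sum of seven parameter ratings
# (each rated 1-10). The weights are the method's standard ones; the
# example cell ratings below are invented for illustration.
WEIGHTS = {"D": 5,   # Depth to water table
           "R": 4,   # net Recharge
           "A": 3,   # Aquifer media
           "S": 2,   # Soil media
           "T": 1,   # Topography
           "I": 5,   # Impact of the vadose zone
           "C": 3}   # hydraulic Conductivity

def drastic_index(ratings):
    return sum(WEIGHTS[k] * ratings[k] for k in WEIGHTS)

cell = {"D": 7, "R": 6, "A": 8, "S": 6, "T": 10, "I": 8, "C": 4}
idx = drastic_index(cell)   # theoretical range: 23 (low) .. 230 (high)
print(idx)                  # 157 -> maps to a high-vulnerability class
```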

Keywords: DRASTIC, SI, GIS, Biskra sandy aquifer, Algeria

Procedia PDF Downloads 488
16227 An Integrative Computational Pipeline for Detection of Tumor Epitopes in Cancer Patients

Authors: Tanushree Jaitly, Shailendra Gupta, Leila Taher, Gerold Schuler, Julio Vera

Abstract:

Genomics-based personalized medicine is a promising approach to fight aggressive tumors based on a patient's specific tumor mutation and expression profiles. A remarkable case is dendritic-cell-based immunotherapy, in which tumor epitopes targeting a patient's specific mutations are used to design a vaccine that helps stimulate cytotoxic-T-cell-mediated anticancer immunity. Here we present a computational pipeline for epitope-based personalized cancer vaccines using patient-specific haplotype and cancer mutation profiles. In the proposed workflow, we analyze whole-exome sequencing and RNA sequencing patient data to detect patient-specific mutations and their expression levels. Epitopes including the tumor mutations are computationally predicted using the patient's haplotype and filtered based on their expression level, binding affinity, and immunogenicity. We calculate the binding energy for each filtered major histocompatibility complex (MHC)-peptide complex using docking studies and use this feature to further select good epitope candidates.
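The filtering stage of such a pipeline reduces to applying thresholds over per-epitope annotations; a schematic sketch in which the column names and cut-offs are invented placeholders, not the authors' settings:

```python
# Schematic epitope-filtering step: keep candidates that are expressed,
# bind MHC strongly, and are predicted immunogenic. Column names and
# thresholds are invented placeholders for illustration.
import pandas as pd

epitopes = pd.DataFrame({
    "peptide":        ["KLSEYAFRV", "SLYNTVATL", "GILGFVFTL"],
    "expression_tpm": [12.0, 0.4, 35.0],     # tumour RNA-seq expression
    "ic50_nM":        [45.0, 30.0, 800.0],   # predicted MHC binding affinity
    "immunogenicity": [0.7, 0.8, 0.2],       # predicted immunogenicity score
})

selected = epitopes[(epitopes.expression_tpm > 1.0) &
                    (epitopes.ic50_nM < 500.0) &
                    (epitopes.immunogenicity > 0.5)]
print(selected.peptide.tolist())   # ['KLSEYAFRV'] survives all filters
```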

Keywords: cancer immunotherapy, epitope prediction, NGS data, personalized medicine

Procedia PDF Downloads 254
16226 Modified Genome-Scale Metabolic Model of Escherichia coli by Adding Hyaluronic Acid Biosynthesis-Related Enzymes (GLMU2 and HYAD) from Pasteurella multocida

Authors: P. Pasomboon, P. Chumnanpuen, T. E-kobon

Abstract:

Hyaluronic acid (HA) consists of linear heteropolysaccharides with repeats of D-glucuronic acid and N-acetyl-D-glucosamine. HA has various useful properties: it maintains skin elasticity and moisture, reduces inflammation, and lubricates the movement of various body parts without causing immunogenic allergy. HA can be found in several animal tissues as well as in the capsule component of some bacteria, including Pasteurella multocida. This study aimed to modify a genome-scale metabolic model of Escherichia coli by adding two enzymes (GLMU2 and HYAD) from P. multocida and, using computational simulation and flux analysis methods, to predict HA productivity under different carbon sources and nitrogen supplements. Results revealed that threonine and aspartate supplementation raised HA production by 12.186%. Our analyses suggest that the genome-scale metabolic model is useful for improving HA production and for narrowing the number of conditions to be tested further.
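In practice, adding heterologous reactions to a genome-scale E. coli model is commonly done with a constraint-based toolbox such as COBRApy; a hedged sketch of the general pattern follows (the reaction stoichiometry below is schematic, not the paper's exact GLMU2/HYAD definitions, and the small "textbook" model stands in for a full genome-scale reconstruction):

```python
# Sketch: extend an E. coli model with a heterologous HA-synthesis step
# using COBRApy, then maximise HA output with FBA. The stoichiometry is
# schematic; the paper's exact GLMU2/HYAD reactions are not reproduced.
import cobra
from cobra import Metabolite, Reaction

model = cobra.io.load_model("textbook")        # small built-in E. coli model

ha = Metabolite("ha_c", name="hyaluronic acid unit", compartment="c")
rxn = Reaction("HYAD_sketch")
rxn.add_metabolites({
    model.metabolites.get_by_id("g6p_c"): -2,  # placeholder precursor
    ha: 1,
})
model.add_reactions([rxn])
model.add_boundary(ha, type="demand")          # allow HA to leave the system

model.objective = "DM_ha_c"
print(model.optimize().objective_value)        # predicted maximum HA flux
```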

Keywords: Pasteurella multocida, Escherichia coli, hyaluronic acid, genome-scale metabolic model, bioinformatics

Procedia PDF Downloads 123
16225 Off-Line Detection of "Pannon Wheat" Milling Fractions by Near-Infrared Spectroscopic Methods

Authors: E. Izsó, M. Bartalné-Berceli, Sz. Gergely, A. Salgó

Abstract:

The aim of this investigation is to elaborate near-infrared methods for the testing and recognition of chemical components and quality in "Pannon wheat" allied (i.e., true-to-variety or variety-identified) milling fractions, as well as to develop spectroscopic methods for following the milling processes and for evaluating the stability of the milling technology across different types of milling products and sampling times. These wheat categories were produced under industrial conditions, and samples were collected as a function of sampling time and of maximum or minimum yield. The changes in the main chemical components (such as starch, protein, lipid) and in the physical properties of the fractions (particle size) were analysed by dispersive spectrophotometers using the visible (VIS) and near-infrared (NIR) regions of the electromagnetic spectrum. Close correlations were obtained between the spectroscopic data, processed by various chemometric methods (e.g., principal component analysis (PCA) and cluster analysis (CA)), and the operating conditions of the milling technology. It is evident that NIR methods are able to detect deviations in the yield parameters and differences between sampling times across a wide variety of fractions. NIR technology can thus be used for sensitive monitoring of milling technology.
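The chemometric step described (PCA on NIR spectra to separate fractions and process conditions) takes a few lines with scikit-learn; synthetic spectra stand in for the measured ones:

```python
# PCA on (synthetic) NIR spectra: fractions that differ chemically
# separate along the first principal component, the kind of analysis
# used above to relate spectra to milling conditions. Real measured
# spectra would replace the synthetic matrix X.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
wavelengths = np.linspace(1100, 2500, 700)          # nm, typical NIR range

def spectrum(center):   # crude Gaussian absorption band plus noise
    return (np.exp(-((wavelengths - center) / 120) ** 2)
            + rng.normal(0, 0.01, wavelengths.size))

# Two "fractions" with shifted absorption features, 20 samples each
X = np.vstack([spectrum(1940) for _ in range(20)] +
              [spectrum(2100) for _ in range(20)])

scores = PCA(n_components=2).fit_transform(X - X.mean(axis=0))
print(scores[:20, 0].mean(), scores[20:, 0].mean())  # groups split on PC1
```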

Keywords: near infrared spectroscopy, wheat categories, milling process, monitoring

Procedia PDF Downloads 407
16224 Damage Assessment and Repair for Older Brick Buildings

Authors: Tim D. Sass

Abstract:

The experience of engineers and architects practicing today is typically limited to current building code requirements and modern construction methods and materials. However, many cities have a mix of new and old buildings, with many buildings constructed over one hundred years ago, when building codes and construction methods were much different. When a brick building sustains damage, a structural engineer is often hired to determine the cause of the damage as well as the necessary repairs. Forensic studies of dozens of brick buildings show that an appreciation of historical building methods and materials is needed to correctly identify the cause of damage and design an appropriate repair. Damage to an older brick building can be mistakenly attributed to storms or seismic events when the real source of the damage is deficient original construction. Assessing and remediating damaged brickwork on older brick buildings requires an understanding of the original construction, of older repair methods, and of current building code requirements.

Keywords: brick, damage, deterioration, facade

Procedia PDF Downloads 227
16223 Concept of a Pseudo-Lower Bound Solution for Reinforced Concrete Slabs

Authors: M. De Filippo, J. S. Kuang

Abstract:

In the construction industry, reinforced concrete (RC) slabs represent fundamental elements of buildings and bridges. Different methods are available for analysing the structural behaviour of slabs. In the early years of the last century, the yield-line method was proposed as an attempt to solve this problem. Problems of simple geometry could easily be solved by traditional hand analyses that include plasticity theories. Nowadays, advanced finite element (FE) analyses have found their way into applications in many engineering fields, owing to the wide range of geometries to which they can be applied. In such analyses, the choice of an elastic or a plastic constitutive model completely changes the approach. Elastic methods are popular due to their easy applicability in automated computations. However, elastic analyses are limited, since they do not consider any aspect of material behaviour beyond the yield limit, which turns out to be an essential aspect of RC structural performance. Non-linear analyses modelling plastic behaviour, by contrast, give very reliable results, but they are computationally quite expensive, i.e., not well suited to solving everyday engineering problems. In past years, many researchers have worked on filling this gap between easy-to-implement elastic methods and computationally complex plastic analyses. This paper proposes a numerical procedure through which a pseudo-lower-bound solution, not violating the yield criterion, is achieved. The advantages of moment distribution are taken into account, so the increase in strength provided by plastic behaviour is considered. The lower-bound solution is improved by detecting over-yielded moments, which are used to artificially govern the moment distribution among the remaining non-yielded elements. The proposed technique obeys Nielsen's yield criterion. The outcome of this analysis provides a simple, yet accurate and non-time-consuming, tool for predicting the lower-bound collapse load of RC slabs. Using this method, structural engineers can find the fracture patterns and the ultimate load-bearing capacity, with the collapse-triggering mechanism found by detecting yield lines. An application to the simple case of a square clamped slab is shown, and a good match was found with the exact values of the collapse load.
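A schematic, one-dimensional toy version of the redistribution step described above (not the paper's slab formulation): moments exceeding the yield moment are clipped and the excess is shared among elements with spare capacity, so the final field nowhere violates the yield criterion:

```python
# Toy version of the pseudo-lower-bound idea: clip over-yielded moments
# to the yield value and redistribute the excess to non-yielded elements
# in proportion to their spare capacity (assumes the excess fits within
# the total spare capacity). Values are invented for illustration.
import numpy as np

m_yield = 10.0
m = np.array([12.0, 9.0, 11.5, 6.0, 8.0])    # elastic moments (toy values)

excess = np.clip(m - m_yield, 0, None).sum() # total over-yield
m = np.minimum(m, m_yield)                   # enforce the yield criterion
free = m < m_yield                           # elements with spare capacity
m[free] += excess * (m_yield - m[free]) / (m_yield - m[free]).sum()

print(m, m.sum())   # redistributed field; the total moment is preserved
```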

Keywords: computational mechanics, lower bound method, reinforced concrete slabs, yield-line

Procedia PDF Downloads 179
16222 Modeling and Simulation of Textile Effluent Treatment Using Ultrafiltration Membrane Technology

Authors: Samia Rabet, Rachida Chemini, Gerhard Schäfer, Farid Aiouache

Abstract:

The textile industry generates large quantities of wastewater, which poses significant environmental problems due to its complex composition and high pollutant load, consisting principally of heavy metals, large amounts of COD, and dye. Separation treatment methods are known for their effectiveness in removing contaminants, and membrane separation techniques in particular are a promising process for the treatment of textile effluent due to their versatility, efficiency, and low energy requirements. This study focuses on the modeling and simulation of membrane separation technologies with a cross-flow filtration process for textile effluent treatment. It explores the application of mathematical models and computational simulations, using the ASPEN Plus software, to the prediction of the separation of a complex, real effluent. The results demonstrate the effectiveness of modeling and simulation techniques in predicting pollutant removal efficiencies, with a global deviation of 1.83% between experimental and simulated results; membrane fouling behavior and overall process performance (hydraulic resistance, membrane porosity) were also estimated, indicating that the membrane loses 10% of its efficiency after 40 min of operation.
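The quoted fouling figure (roughly 10% efficiency loss after 40 min) maps naturally onto the standard resistance-in-series form of Darcy's law for membrane flux; a small sketch with illustrative values, not the study's measured parameters:

```python
# Resistance-in-series permeate flux for an ultrafiltration membrane:
# J = dP / (mu * (Rm + Rf)). All values below are illustrative only.
dP = 2.0e5        # transmembrane pressure, Pa
mu = 1.0e-3       # permeate viscosity, Pa.s
Rm = 1.0e12       # clean-membrane hydraulic resistance, 1/m

J0 = dP / (mu * Rm)                  # clean-water flux, m/s
Rf = 0.111 * Rm                      # fouling resistance giving ~10% loss
J40 = dP / (mu * (Rm + Rf))          # flux after fouling builds up

print(J0 * 3600e3, J40 * 3600e3)     # L/(m^2 h): ~720 vs ~648, i.e. -10%
```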

Keywords: membrane separation, ultrafiltration, textile effluent, modeling, simulation

Procedia PDF Downloads 59
16221 Sinusoidal Roughness Elements in a Square Cavity

Authors: Muhammad Yousaf, Shoaib Usman

Abstract:

Numerical studies were conducted using the lattice Boltzmann method (LBM) to study natural convection in a square cavity in the presence of roughness. An algorithm based on a single-relaxation-time Bhatnagar-Gross-Krook (BGK) model of the LBM was developed. Roughness was introduced on both the hot and cold walls in the form of sinusoidal roughness elements. The study was conducted for a Newtonian fluid of Prandtl number (Pr) 1.0, and a Rayleigh number (Ra) range from 10³ to 10⁶ in the laminar region was explored. The thermal and hydrodynamic behavior of the fluid was analyzed using a differentially heated square cavity with roughness elements present on both the hot and cold walls. Neumann boundary conditions were imposed on the horizontal walls, with the vertical walls isothermal; the roughness elements were given the same boundary condition as their corresponding walls. The computational algorithm was validated against previous benchmark studies performed with different numerical methods, and good agreement was found. Results indicate that the maximum reduction in the average heat transfer was 16.66 percent, at Ra = 10⁵.
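The BGK collision step at the core of such a solver is a single relaxation toward a local equilibrium; a compact D2Q9 collide-and-stream sketch follows (an isothermal toy on a periodic lattice, without the buoyancy coupling, thermal lattice, and wall boundary conditions of the study):

```python
# Compact D2Q9 lattice Boltzmann BGK step: relax the distributions toward
# the local equilibrium, then stream. Isothermal toy without the thermal
# coupling and rough-wall boundaries used in the study above.
import numpy as np

# D2Q9 lattice velocities and weights
c = np.array([[0,0],[1,0],[0,1],[-1,0],[0,-1],[1,1],[-1,1],[-1,-1],[1,-1]])
w = np.array([4/9] + [1/9]*4 + [1/36]*4)
tau = 0.8                                    # BGK relaxation time
nx = ny = 64
f = np.ones((9, nx, ny)) * w[:, None, None]  # start from a fluid at rest

def step(f):
    rho = f.sum(axis=0)
    u = np.einsum("qi,qxy->ixy", c, f) / rho          # macroscopic velocity
    cu = np.einsum("qi,ixy->qxy", c, u)
    usq = (u ** 2).sum(axis=0)
    feq = w[:, None, None] * rho * (1 + 3*cu + 4.5*cu**2 - 1.5*usq)
    f = f - (f - feq) / tau                           # BGK collision
    for q in range(9):                                # streaming (periodic)
        f[q] = np.roll(f[q], shift=tuple(c[q]), axis=(0, 1))
    return f

for _ in range(10):
    f = step(f)
print(f.sum())   # mass is conserved: nx * ny = 4096
```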

Keywords: lattice Boltzmann method, natural convection, Nusselt number, Rayleigh number, roughness

Procedia PDF Downloads 528