Search results for: mathematical algorithms of targeting and persecution
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 4251

2301 Analyzing the Sociolinguistic Profile of the Algerian Community in the UK in terms of French Language Use: The Case of Émigré Ph.D. Students

Authors: Hadjer Chellia

Abstract:

The present study reports on second language use among Algerian international students in the UK. In Algeria, French holds an important status in Algerian verbal repertoires for colonial reasons. This has triggered many language conflicts and debates among policy makers in Algeria. In higher education, the sociolinguistic profile of Algerian students of English is characterised by the use of French as a sign of prestige. What may leave room for debate is the effect of crossing borders towards the UK as a result of international mobility programmes, a transition which could add more complexity since French is not as significant a language in the UK context. In this respect, the micro-objective is to explore the fate of French use among Ph.D. students in the UK, as a newly established group, vis-à-vis English. To fulfil the purpose of the present inquiry, the research employs multiple approaches, in which semi-structured interviews are the primary source of data on participants' attitudes towards French use, targeting both their pre-migratory and current experiences. Web-based questionnaires are set up to reach a larger population. Focus group sessions are a further procedure of scrutiny in this work to explore actual linguistic behaviours. Preliminary findings from both interviews and questionnaires reveal that students' current experience, particularly living in the UK, affects their pre-migratory attitudes towards the French language and its use. The overall findings are expected to bring manifold contributions to the field, among which is identifying the factors that influence language use among newly established émigré communities. The research is also relevant to international students' experience of study abroad in terms of language use, in the context of the internationalization of higher education and mobility and exchange programmes. It could contribute to the sociolinguistics of the Algerian diaspora (the dispersed residence of non-native communities), not to mention its significance for Algerian research abroad.

Keywords: Algerian diaspora, French language, language maintenance, language shift, mobility

Procedia PDF Downloads 316
2300 Adapting an Accurate Reverse-time Migration Method to USCT Imaging

Authors: Brayden Mi

Abstract:

Reverse time migration has been widely used in the petroleum exploration industry to reveal subsurface images and to detect rock and fluid properties since the early 1980s. The seismic technology involves the construction of a velocity model through interpretive model building, seismic tomography, or full waveform inversion, followed by reverse-time propagation of the acquired seismic data and the original wavelet used in the acquisition. The methodology has matured from 2D imaging of simple media to handling full 3D imaging challenges in extremely complex geological conditions. Conventional ultrasound computed tomography (USCT) utilizes travel-time inversion to reconstruct the velocity structure of an organ. With the velocity structure, USCT data can be migrated with the 'bent-ray' method. Its counterpart in seismic applications is called Kirchhoff depth migration, in which the source of reflective energy is traced by ray-tracing and summed to produce a subsurface image. It is well known that ray-tracing-based migration has severe limitations in strongly heterogeneous media and irregular acquisition geometries. Reverse time migration (RTM), on the other hand, fully accounts for the wave phenomena, including multiple arrivals and turning rays due to complex velocity structure. It has the capability to fully reconstruct the image detectable within its acquisition aperture. RTM algorithms typically require a rather accurate velocity model and demand high computing power, and may not be applicable to real-time imaging as normally required in day-to-day medical operations. However, with the improvement of computing technology, this computational bottleneck may not present a challenge in the near future. Present-day RTM algorithms in the seismic industry are typically implemented from a flat datum, but they can be modified to accommodate any acquisition geometry and aperture, as long as sufficient illumination is provided. This flexibility of RTM can be conveniently exploited for USCT imaging if the spatial coordinates of the transmitters and receivers are known and enough data is collected to provide full illumination. This paper proposes an implementation of a full 3D RTM algorithm for USCT imaging to produce an accurate 3D acoustic image, based on the phase-shift-plus-interpolation (PSPI) method for wavefield extrapolation. In this method, each acquired data set (shot) is propagated back in time, and a known ultrasound wavelet is propagated forward in time, with PSPI wavefield extrapolation and a piece-wise constant velocity model of the organ (breast). The imaging condition is then applied to produce a partial image. Although each image is subject to the limitation of its own illumination aperture, the stack of multiple partial images produces a full image of the organ, with a much-reduced noise level compared with the individual partial images.
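
As an illustration of the imaging condition described above, here is a minimal Python sketch: it assumes the forward-propagated wavelet and back-propagated shot data are already available as wavefield arrays (standing in for the output of a PSPI extrapolation step, which is not reproduced here), applies the zero-lag cross-correlation, and stacks the per-shot partial images.

```python
import numpy as np

def imaging_condition(src_wf, rcv_wf):
    """Zero-lag cross-correlation imaging condition.

    src_wf, rcv_wf: (nt, nz, nx) arrays holding the forward-propagated
    wavelet and the back-propagated shot data (assumed to come from a
    PSPI wavefield extrapolation, omitted here).
    """
    return np.einsum('tij,tij->ij', src_wf, rcv_wf)

def stack_partial_images(partial_images):
    """Stacking the per-shot partial images suppresses aperture-limited noise."""
    return np.mean(partial_images, axis=0)

# toy usage: random wavefields stand in for PSPI output
nt, nz, nx = 64, 32, 32
rng = np.random.default_rng(0)
imgs = [imaging_condition(rng.normal(size=(nt, nz, nx)),
                          rng.normal(size=(nt, nz, nx))) for _ in range(4)]
full_image = stack_partial_images(imgs)
```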

Keywords: illumination, reverse time migration (RTM), ultrasound computed tomography (USCT), wavefield extrapolation

Procedia PDF Downloads 57
2299 A Modified NSGA-II Algorithm for Solving Multi-Objective Flexible Job Shop Scheduling Problem

Authors: Aydin Teymourifar, Gurkan Ozturk, Ozan Bahadir

Abstract:

NSGA-II is one of the best known and most widely used evolutionary algorithms. In addition to its newer versions, such as NSGA-III, several modified variants of this algorithm exist in the literature. In this paper, a hybrid NSGA-II algorithm is suggested for solving the multi-objective flexible job shop scheduling problem. For a better search, new neighborhood-based crossover and mutation operators are defined. To create new generations, neighbors of the individuals selected by tournament selection are constructed. Also, at the end of each iteration, before sorting, neighbors of a certain number of good solutions are derived, except for solutions protected by elitism. The neighbors are generated using a constraint-based method that draws on various neighborhood structures. The non-dominated sorting and crowding distance operators are the same as in the classic NSGA-II. A comparison based on several multi-objective benchmarks from the literature shows the efficiency of the algorithm.
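
The non-dominated sorting operator retained from the classic NSGA-II can be sketched compactly; below is a minimal, self-contained Python version for minimization problems, applied to illustrative objective pairs (e.g., makespan and total workload of candidate schedules). The paper's neighborhood-based variation operators are not reproduced here.

```python
def dominates(a, b):
    """True if objective vector a Pareto-dominates b (minimization)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def fast_nondominated_sort(objs):
    """Classic NSGA-II sorting: returns a list of fronts (lists of indices)."""
    n = len(objs)
    S = [[] for _ in range(n)]      # indices dominated by i
    c = [0] * n                     # how many solutions dominate i
    fronts = [[]]
    for i in range(n):
        for j in range(n):
            if dominates(objs[i], objs[j]):
                S[i].append(j)
            elif dominates(objs[j], objs[i]):
                c[i] += 1
        if c[i] == 0:
            fronts[0].append(i)
    k = 0
    while fronts[k]:
        nxt = []
        for i in fronts[k]:
            for j in S[i]:
                c[j] -= 1
                if c[j] == 0:
                    nxt.append(j)
        fronts.append(nxt)
        k += 1
    return fronts[:-1]

# e.g. (makespan, total workload) of five candidate schedules
print(fast_nondominated_sort([(3, 5), (4, 4), (5, 3), (4, 6), (6, 6)]))
```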

Keywords: flexible job shop scheduling problem, multi-objective optimization, NSGA-II algorithm, neighborhood structures

Procedia PDF Downloads 215
2298 A Multi-Objective Evolutionary Algorithm of Neural Network for Medical Diseases Problems

Authors: Sultan Noman Qasem

Abstract:

This paper presents an evolutionary algorithm for the multi-objective optimization design of artificial neural networks (ANNs). The multi-objective evolutionary algorithm used in this study is a genetic algorithm, while the ANN used is a radial basis function network (RBFN). The proposed algorithm is named memetic elitist Pareto non-dominated sorting genetic algorithm-based RBFN (MEPGAN). The proposed algorithm is applied to medical disease problems. The experimental results indicate that the proposed algorithm is viable and provides an effective means to design multi-objective RBFNs with good generalization capability and compact network structure. This study shows that MEPGAN generates RBFNs with an appropriate balance between accuracy and simplicity, compared to the other algorithms found in the literature.
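
To make the two competing objectives concrete, here is a small Python sketch (illustrative, not the authors' MEPGAN code): a candidate RBFN is scored by a generalization objective (mean squared error, with output weights fit by least squares, a common choice for RBFNs) and a simplicity objective (number of hidden units), yielding the Pareto pair an evolutionary search would trade off.

```python
import numpy as np

def rbf_design_matrix(X, centers, width):
    """Gaussian RBF activations: one column per hidden unit."""
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * width ** 2))

def evaluate(X, y, centers, width):
    """Two objectives per candidate RBFN: error (accuracy) and size (simplicity).

    The evolutionary part (evolving centers/widths) is omitted here.
    """
    H = rbf_design_matrix(X, centers, width)
    w, *_ = np.linalg.lstsq(H, y, rcond=None)
    mse = float(((H @ w - y) ** 2).mean())
    return mse, len(centers)  # Pareto pair: generalization vs compactness

rng = np.random.default_rng(1)
X = rng.normal(size=(100, 4))           # e.g. normalized clinical features
y = (X[:, 0] - X[:, 1] > 0).astype(float)
print(evaluate(X, y, centers=X[:5], width=1.0))
```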

Keywords: radial basis function network, hybrid learning, multi-objective optimization, genetic algorithm

Procedia PDF Downloads 544
2297 Optimization of Vertical Axis Wind Turbine Based on Artificial Neural Network

Authors: Mohammed Affanuddin H. Siddique, Jayesh S. Shukla, Chetan B. Meshram

Abstract:

Neural networks are one of the powerful tools of machine learning. Since the resurgence of neural networks in the 1980s, the field and its applications have grown rapidly. Neural networks are a technique originally developed for pattern recognition. The structure of a neural network consists of neurons connected through synapses. Here, we have investigated different algorithms and cost function reduction techniques for the optimization of vertical axis wind turbine (VAWT) rotor blades. The aerodynamic force coefficients corresponding to the airfoils are stored in a database along with the airfoil coordinates. A forward-propagation neural network is created with the aerodynamic coefficients as input and the airfoil coordinates as output. In the proposed algorithm, the hidden layer is incorporated into a cost function having linear and non-linear error terms. It is observed that ANNs (artificial neural networks) can be used for VAWT optimization.
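
A minimal Python sketch of such an inverse-design network, with assumed toy dimensions (two aerodynamic coefficients in, a flattened coordinate vector out) and an illustrative cost mixing a linear (absolute) and a non-linear (squared) error term; training is omitted.

```python
import numpy as np

rng = np.random.default_rng(0)

# toy shapes: 2 inputs (e.g. lift/drag coefficients) -> 40 airfoil coordinates
n_in, n_hidden, n_out = 2, 16, 40

W1 = rng.normal(scale=0.1, size=(n_in, n_hidden))
b1 = np.zeros(n_hidden)
W2 = rng.normal(scale=0.1, size=(n_hidden, n_out))
b2 = np.zeros(n_out)

def forward(c):
    """Forward propagation: aerodynamic coefficients -> airfoil coordinates."""
    h = np.tanh(c @ W1 + b1)          # non-linear hidden layer
    return h @ W2 + b2                # linear output: flattened (x, y) vector

def cost(c, coords_true):
    """A cost with a linear and a non-linear error term, as described above."""
    e = forward(c) - coords_true
    return np.abs(e).mean() + (e ** 2).mean()

print(cost(np.array([[0.8, 0.02]]), np.zeros((1, n_out))))
```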

Keywords: VAWT, ANN, optimization, inverse design

Procedia PDF Downloads 303
2296 Stacking Ensemble Approach for Combining Different Methods in Real Estate Prediction

Authors: Sol Girouard, Zona Kostic

Abstract:

A home is often the largest and most expensive purchase a person makes. Whether the decision leads to a successful outcome is determined by a combination of critical factors. In this paper, we propose a method that efficiently handles all the factors in residential real estate and performs predictions over a feature space with high dimensionality while controlling for overfitting. The proposed method is built on gradient descent and boosting algorithms and uses a mixed optimization technique to improve prediction power. Since a single model usually cannot handle all the cases, our approach builds multiple models based on different subsets of the predictors. The algorithm was tested on 3 million homes across the U.S., and the experimental results demonstrate the efficiency of this approach, outperforming techniques currently used in price forecasting. With everyday changes in the real estate market, our proposed algorithm capitalizes on new events, allowing more efficient predictions.
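
A sketch of the general pattern, multiple boosted models blended by a regularized meta-learner, using scikit-learn on synthetic data as a stand-in for the proprietary home-features table (not the authors' exact pipeline):

```python
from sklearn.datasets import make_regression
from sklearn.ensemble import GradientBoostingRegressor, StackingRegressor
from sklearn.linear_model import RidgeCV
from sklearn.model_selection import train_test_split

# stand-in for a high-dimensional home-features table (price as target)
X, y = make_regression(n_samples=2000, n_features=60, noise=10.0, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# several base models capture different cases; a meta-learner blends them
stack = StackingRegressor(
    estimators=[
        ("gbm_shallow", GradientBoostingRegressor(max_depth=2, random_state=0)),
        ("gbm_deep", GradientBoostingRegressor(max_depth=4, random_state=0)),
    ],
    final_estimator=RidgeCV(),  # regularized blender helps control overfitting
)
stack.fit(X_tr, y_tr)
print("held-out R^2:", round(stack.score(X_te, y_te), 3))
```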

Keywords: real estate prediction, gradient descent, boosting, ensemble methods, active learning, training

Procedia PDF Downloads 261
2295 Hybrid Algorithm for Frequency Channel Selection in Wi-Fi Networks

Authors: Cesar Hernández, Diego Giral, Ingrid Páez

Abstract:

This article proposes a hybrid algorithm for spectrum allocation in cognitive radio networks based on the Analytic Hierarchy Process (AHP) and the Technique for Order of Preference by Similarity to Ideal Solution (TOPSIS), to improve the spectrum mobility performance of secondary users in cognitive radio networks. To assess the performance of the proposed algorithm, a comparative analysis between the proposed AHP-TOPSIS, the Grey Relational Analysis (GRA) and the Multiplicative Exponent Weighting (MEW) algorithms is performed. Four evaluation metrics are used: the cumulative average of failed handoffs, the cumulative average of handoffs performed, the cumulative average of transmission bandwidth, and the cumulative average of transmission delay. The results of the comparison show that the AHP-TOPSIS algorithm provides 2.4 times better performance than the GRA algorithm and 1.5 times better than the MEW algorithm.
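
The TOPSIS half of the hybrid is compact enough to sketch in full; the weights below are assumed to come from an AHP pairwise-comparison step (illustrative values, not those of the paper), and the criteria mirror the benefit/cost metrics above.

```python
import numpy as np

def topsis(M, w, benefit):
    """Rank alternatives (rows of M) against criteria (columns).

    w: criteria weights (e.g. from an AHP pairwise-comparison step);
    benefit[j] is True if higher is better for criterion j.
    """
    R = M / np.linalg.norm(M, axis=0)          # vector-normalize each criterion
    V = R * w                                  # weighted normalized matrix
    ideal = np.where(benefit, V.max(0), V.min(0))
    worst = np.where(benefit, V.min(0), V.max(0))
    d_pos = np.linalg.norm(V - ideal, axis=1)
    d_neg = np.linalg.norm(V - worst, axis=1)
    return d_neg / (d_pos + d_neg)             # closeness to the ideal solution

# toy channels scored on bandwidth (benefit), delay and failed-handoff rate (costs)
M = np.array([[6.0, 20.0, 0.05],
              [4.0, 10.0, 0.02],
              [8.0, 35.0, 0.09]])
w = np.array([0.5, 0.3, 0.2])                  # assumed AHP-derived weights
scores = topsis(M, w, benefit=np.array([True, False, False]))
print(scores.argsort()[::-1])                  # best spectrum choice first
```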

Keywords: cognitive radio, decision making, hybrid algorithm, spectrum handoff, wireless networks

Procedia PDF Downloads 523
2294 Combinational Therapeutic Targeting of BRD4 and CDK7 Synergistically Induces Anticancer Effects in Hepatocellular Carcinoma

Authors: Xinxiu Li, Chuqian Zheng, Yanyan Qian, Hong Fan

Abstract:

Objectives: In hepatocellular carcinoma (HCC), oncogenes are continuously and robustly transcribed due to aberrant expression of essential components of the trans-acting super-enhancer (SE) complex. Preclinical and clinical trials are now being conducted on small-molecule inhibitors that target core transcriptional components, including the transcriptional bromodomain protein 4 (BRD4) and cyclin-dependent kinase 7 (CDK7), in a number of malignant tumors. This study aims to explore whether co-overexpression of BRD4 and CDK7 is a potential marker of worse prognosis and a combined therapeutic target in HCC. Methods: The expression pattern of BRD4 and CDK7 and their correlation with prognosis in HCC were analyzed using RNA sequencing data and survival data of HCC patients from the TCGA and GEO datasets. The protein levels of BRD4 and CDK7 were determined by immunohistochemistry (IHC), and patient survival data were analyzed using the Kaplan-Meier method. The mRNA expression levels of genes in HCC cell lines were evaluated by quantitative PCR (q-PCR). CCK-8 and colony formation assays were conducted to assess HCC cell proliferation upon treatment with the BRD4 inhibitor JQ1 or/and the CDK7 inhibitor THZ1. Results: Analysis of the TCGA and GEO datasets showed that BRD4 and CDK7 were often overexpressed in HCC and associated with poor prognosis. BRD4 or CDK7 overexpression was related to a lower survival rate. Interestingly, co-overexpression of CDK7 and BRD4 was a worse prognostic factor in HCC. Treatment with JQ1 or THZ1 alone had an inhibitory effect on cell proliferation; however, when JQ1 and THZ1 were combined, there was a more notable suppression of cell growth. At the same time, the combined use of JQ1 and THZ1 synergistically suppressed the expression of HCC driver genes. Conclusion: Our research revealed that BRD4 and CDK7 combined can be a useful biomarker for HCC prognosis and that the combination of JQ1 and THZ1 can be a promising therapeutic strategy against HCC.

Keywords: BRD4, CDK7, cell proliferation, combined inhibition

Procedia PDF Downloads 43
2293 Synthesis of 5'-Azidonucleosides as Building Blocks for the Preparation of Biologically Active Bioconjugates

Authors: Brigitta Bodnár, Lajos Kovács, Zoltán Kupihár

Abstract:

Cancer cells require a higher amount of nucleoside building blocks for their proliferation and therefore show significantly higher uptake of nucleosides through the different nucleoside transporters. Conjugation with nucleosides may thus significantly increase the efficiency and selectivity of potential active pharmaceutical ingredients. The advantage of using a nucleoside could be either higher activity on targeted enzymes overrepresented in cancer cells or enhanced cellular uptake of the bioconjugates in these cells compared to healthy ones. Nucleosides can therefore serve as targeting moieties covalently bound to anti-cancer drug molecules, allowing the conjugates to selectively accumulate in cancer cells. However, in order to form nucleoside-drug conjugates, nucleoside building blocks are needed that can be selectively coupled to drug molecules containing even a high number of diverse functional groups. One of the most selective conjugation techniques is the copper-catalyzed azide-alkyne click reaction, which requires the presence of an alkyne group on one of the conjugated molecules and an azide group on the other. In the case of nucleosides, introducing the azide group is simpler, and replacement of the 5'-hydroxy group is the most suitable route. This transformation generally involves many side reactions and results in very low yields. In addition, during our experiments, the transformation of 2'-deoxyguanosine to the corresponding 5'-deoxy-5'-azido-2'-deoxyguanosine could not be performed with any of the methods described in the literature. Therefore, we tried to overcome these difficulties by using not only the traditional process based on the two-step exchange of a tosylate for an azide, but also the Mitsunobu reaction, which requires only one step. However, this path proved unsuccessful in spite of optimizing the reaction conditions. Finally, a method has been developed whereby the azide group is introduced at the 5'-position with significantly better yields than all previous methods, and we were able to produce all four nucleoside derivatives.

Keywords: 5'-azidonucleosides, bioconjugate, click reaction, proliferation

Procedia PDF Downloads 229
2292 Message Framework for Disaster Management: An Application Model for Mines

Authors: A. Baloglu, A. Çınar

Abstract:

Different tools and technologies have been implemented for Crisis Response and Management (CRM), generally using the available network infrastructure for information exchange. Depending on the type of disaster or crisis, the network infrastructure may be affected and unable to provide reliable connectivity, in which case any tool or technology that depends on that connectivity cannot fulfil its functions. As a solution, a new message exchange framework has been developed. The framework provides an offline/online information exchange platform for CRM Information Systems (CRMIS); it uses XML compression and packet prioritization algorithms and is based on open source web technologies. By introducing offline capabilities to web technologies, the framework is able to perform message exchange over unreliable networks. Experiments in a simulation environment provide promising results on low bandwidth networks (56 kbps and 28.8 kbps) with up to 50% packet loss: the solution successfully transferred all the information over these low-quality networks where traditional 2- and 3-tier applications failed.
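
A minimal Python sketch of the two mechanisms named above, XML compression and packet prioritization, using the standard-library zlib and heapq modules (an illustrative stand-in for the framework, which would also persist queued messages for offline operation):

```python
import heapq
import zlib

def make_packet(xml_text, priority):
    """Compress an XML message and tag it with a priority (0 = most urgent)."""
    payload = zlib.compress(xml_text.encode("utf-8"), level=9)
    return priority, payload

class OfflineQueue:
    """Store-and-forward queue: urgent CRM messages leave first when a link
    becomes available; messages survive offline periods in memory here
    (a real system would persist them to disk)."""
    def __init__(self):
        self._q, self._n = [], 0
    def put(self, packet):
        self._n += 1                    # tie-breaker keeps FIFO within priority
        heapq.heappush(self._q, (packet[0], self._n, packet[1]))
    def drain(self):
        while self._q:
            _, _, payload = heapq.heappop(self._q)
            yield zlib.decompress(payload).decode("utf-8")

q = OfflineQueue()
q.put(make_packet("<status unit='7'>all clear</status>", priority=5))
q.put(make_packet("<alert mine='A3'>gas level critical</alert>", priority=0))
print(next(q.drain()))                  # the alert is transmitted first
```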

Keywords: crisis response and management, XML messaging, web services, XML compression, mining

Procedia PDF Downloads 325
2291 A Systems Approach to Targeting Cyclooxygenase: Genomics, Bioinformatics and Metabolomics Analysis of COX-1 -/- and COX-2-/- Lung Fibroblasts Providing Indication of Sterile Inflammation

Authors: Abul B. M. M. K. Islam, Mandar Dave, Roderick V. Jensen, Ashok R. Amin

Abstract:

A systems approach was applied to characterize differentially expressed transcripts, bioinformatics pathways, proteins and prostaglandins (PGs) from lung fibroblasts procured from wild-type (WT), COX-1-/- and COX-2-/- mice, in order to understand system-level control mechanisms. Bioinformatics analysis of COX-2- and COX-1-ablated cells revealed COX-1- and COX-2-specific signatures, respectively, which significantly overlapped with an 'IL-1β induced inflammatory signature'. This defined novel cross-talk signals that orchestrated coordinated activation of pathways of sterile inflammation sensed by cellular stress. The overlapping signals showed significant over-representation of shared pathways for interferon-γ and immune responses, T cell functions, NOD, and toll-like receptor signaling. Gene Ontology Biological Process (GOBP) and pathway enrichment analysis specifically showed an increase in mRNA expression associated with (a) organ development and homeostasis in COX-1-/- cells and (b) oxidative stress and response, spliceosome and proteasome activity, and mTOR and p53 signaling in COX-2-/- cells. At the genomic level, both COX-1-/- and COX-2-/- cells showed signs of functional pathways committed to the cell cycle and DNA replication. Compared to WT, metabolomics analysis revealed a significant increase in COX-1 mRNA and in the synthesis of basal levels of eicosanoids (PGE2, PGD2, TXB2, LTB4, PGF1α, and PGF2α) in COX-2-ablated cells, and an increase in the synthesis of PGE2 and PGF1α in COX-1 null cells. There was a compensation of PGE2 and PGF1α in COX-1-/- and COX-2-/- cells. Collectively, these results support a broader, differential and collaborative regulation of both the COX-1 and COX-2 pathways at the metabolic, signaling, and genomic levels in cellular homeostasis and in sterile inflammation induced by cellular stress.

Keywords: cyclooxygenases, inflammation, lung fibroblasts, systemic

Procedia PDF Downloads 278
2290 Computer-Integrated Surgery of the Human Brain, New Possibilities

Authors: Ugo Galvanetto, Pirto G. Pavan, Mirco Zaccariotto

Abstract:

The discipline of computer-integrated surgery (CIS) will provide equipment able to improve the efficiency of healthcare systems and, more importantly, clinical results. Surgeons and machines will cooperate in new ways that will extend surgeons' ability to train, plan and carry out surgery. Patient-specific CIS of the brain requires several steps. (1) Fast generation of brain models: based on recognition of MR images and equipped with artificial intelligence, image recognition techniques should differentiate among all brain tissues and segment them; automatic mesh generation should then create the mathematical model of the brain in which the various tissues (white matter, grey matter, cerebrospinal fluid, etc.) are clearly located in the correct positions. (2) Reliable and fast simulation of the surgical process: computational mechanics will be the crucial aspect of the entire procedure, and new algorithms will be used to simulate the mechanical behaviour of cutting through cerebral tissues. (3) Real-time provision of visual and haptic feedback: a sophisticated human-machine interface based on ergonomics and psychology will provide the feedback to the surgeon. The present work addresses in particular point (2). Modelling the cutting of soft tissue in a structure as complex as the human brain is an extremely challenging problem in computational mechanics. The finite element method (FEM), which accurately represents complex geometries and accounts for material and geometrical nonlinearities, is the most widely used computational tool to simulate the mechanical response of soft tissues. However, the main drawback of FEM lies in the mechanics theory on which it is based, classical continuum mechanics, which assumes matter is a continuum with no discontinuities. All the approaches that equip FEM with the capability to describe material separation, such as interface elements with cohesive zone models, X-FEM, element erosion, and phase-field methods, have drawbacks that make them unsuitable for surgery simulation: interface elements require a priori knowledge of crack paths, the use of X-FEM in 3D is cumbersome, element erosion does not conserve mass, and the phase-field approach adopts a diffusive crack model instead of describing the true tissue separation typical of surgical procedures. Modelling discontinuities, so difficult for computational approaches based on classical continuum mechanics, is instead easy for novel computational methods based on peridynamics (PD). PD is a non-local theory of mechanics formulated without spatial derivatives. Its governing equations are valid at points or surfaces of discontinuity, and it is therefore especially suited to describing crack propagation and fragmentation problems. Moreover, PD does not require any criterion to decide the direction of crack propagation or the conditions for crack branching or coalescence; in PD-based computational methods, cracks develop spontaneously in the most energetically favourable way. Therefore, in PD computational methods, crack propagation in 3D is as easy as in 2D, a remarkable advantage with respect to all other computational techniques.
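
To make the contrast concrete, here is a minimal one-dimensional bond-based peridynamics sketch in Python (illustrative parameters, not a calibrated tissue model): internal forces are pairwise sums over bonds rather than spatial derivatives, so a bond exceeding its critical stretch is simply dropped and the discontinuity appears spontaneously, with no remeshing or crack-path input.

```python
import numpy as np

# 1D bond-based peridynamics toy: a bar of nodes, bonds within a horizon
n, dx = 101, 1e-3
horizon = 3.015 * dx            # common choice: horizon ~ 3 grid spacings
c = 1.0e12                      # micromodulus (assumed, not calibrated here)
s_crit = 1.0e-3                 # critical stretch: bonds break above this
x = np.arange(n) * dx           # reference positions
u = np.zeros(n)                 # displacements

bonds = [(i, j) for i in range(n) for j in range(i + 1, n)
         if abs(x[j] - x[i]) <= horizon]
alive = {b: True for b in bonds}

def internal_force(u):
    """Sum pairwise bond forces; no spatial derivatives are needed, so a
    broken bond (a displacement discontinuity) needs no special treatment."""
    f = np.zeros(n)
    for (i, j) in bonds:
        if not alive[(i, j)]:
            continue
        xi = x[j] - x[i]
        eta = u[j] - u[i]
        s = (abs(xi + eta) - abs(xi)) / abs(xi)   # bond stretch
        if s > s_crit:
            alive[(i, j)] = False                 # crack grows spontaneously
            continue
        f_ij = c * s * dx * np.sign(xi + eta)
        f[i] += f_ij
        f[j] -= f_ij
    return f

u[-1] = 2e-6                    # pull the right end; the nearest bond breaks
print(internal_force(u)[-3:])
```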

Keywords: computational mechanics, peridynamics, finite element, biomechanics

Procedia PDF Downloads 59
2289 Comparison of Artificial Neural Networks and Statistical Classifiers in Olive Sorting Using Near-Infrared Spectroscopy

Authors: İsmail Kavdır, M. Burak Büyükcan, Ferhat Kurtulmuş

Abstract:

Table olives are a valuable product, especially in Mediterranean countries, and are usually consumed after some fermentation process. Defects occurring naturally or as a result of impact while olives are still fresh may become more distinct after the processing period. Defective olives are not desired in either the table olive or the olive oil industry, as they affect final product quality and reduce market prices considerably. It is therefore critical to sort table olives before, or even after, processing according to their quality and surface defects. However, manual sorting has many drawbacks, such as high expense, subjectivity, tediousness and inconsistency. The quality criteria for green olives were accepted as color and freedom from mechanical defects, wrinkling, surface blemishes and rot. In this study, the aim was to classify fresh table olives using different classifiers and NIR spectroscopy readings, and to compare the classifiers. For this purpose, green (Ayvalik variety) olives were classified based on their surface features (defect-free, with bruise defect and with fly defect) using FT-NIR spectroscopy and classification algorithms such as artificial neural networks, ident and cluster. A Bruker multi-purpose analyzer (MPA) FT-NIR spectrometer (Bruker Optik GmbH, Ettlingen, Germany) was used for spectral measurements. The spectrometer was equipped with InGaAs detectors (TE-InGaAs internal for reflectance and RT-InGaAs external for transmittance) and a 20-watt high-intensity tungsten-halogen NIR light source. Reflectance measurements were performed with a fiber optic probe (type IN 261) covering the wavelengths between 780-2500 nm, while transmittance measurements were performed between 800 and 1725 nm. Thirty-two scans were acquired for each reflectance spectrum in about 15.32 s, while 128 scans were obtained for transmittance in about 62 s. Resolution was 8 cm⁻¹ for both spectral measurement modes. Instrument control was done using OPUS software (Bruker Optik GmbH, Ettlingen, Germany). Classification was performed using three classifiers: backpropagation neural networks, and the ident and cluster classification algorithms. For these applications, the Neural Network toolbox in Matlab and the ident and cluster modules in OPUS were used. Classifications were performed considering different scenarios: two quality conditions at once (good vs. bruised, good vs. fly defect) and three quality conditions at once (good, bruised and fly defect). Two spectrometer readings were used in the classification applications: reflectance and transmittance. Classification results obtained using the artificial neural network algorithm in discriminating good olives from bruised olives, from olives with fly defect, and from the olive group including both bruised and fly-defected olives achieved success rates of 97-99%, 61-94% and 58.67-92%, respectively. On the other hand, classification results obtained for discriminating good olives from bruised ones, and good olives from fly-defected olives, using the ident method ranged between 75-97.5% and 32.5-57.5%, respectively; results obtained for the same classification applications using the cluster method ranged between 52.5-97.5% and 22.5-57.5%.
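
In the same spirit, the following Python sketch contrasts a backpropagation network with a simple statistical classifier on synthetic spectra (the study itself used Matlab and the OPUS ident/cluster modules; nearest-centroid matching is used here merely as a rough analogue of ident's comparison against reference average spectra):

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neighbors import NearestCentroid
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# stand-in for NIR reflectance spectra: rows = olives, columns = wavelengths
rng = np.random.default_rng(0)
n_per_class, n_wavelengths = 60, 500
classes = ["good", "bruised", "fly"]
X = np.vstack([rng.normal(loc=k, scale=1.0, size=(n_per_class, n_wavelengths))
               for k in range(len(classes))])
y = np.repeat(classes, n_per_class)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

for name, model in [
    ("backprop ANN", make_pipeline(StandardScaler(),
                                   MLPClassifier(hidden_layer_sizes=(32,),
                                                 max_iter=500, random_state=0))),
    ("nearest centroid", make_pipeline(StandardScaler(), NearestCentroid())),
]:
    model.fit(X_tr, y_tr)
    print(name, "accuracy:", round(model.score(X_te, y_te), 3))
```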

Keywords: artificial neural networks, statistical classifiers, NIR spectroscopy, reflectance, transmittance

Procedia PDF Downloads 232
2288 The Load Balancing Algorithm for the Star Interconnection Network

Authors: Ahmad M. Awwad, Jehad Al-Sadi

Abstract:

The star network is one of the promising interconnection networks for future high-speed parallel computers and is expected to be one of the future-generation networks. The star network is both edge- and vertex-symmetric, it has been shown to have many attractive topological properties, and it possesses a hierarchical structure. Although much research has been done on this promising network in the literature, it still lacks algorithms for the load balancing problem. In this paper we address this issue by investigating and proposing an efficient algorithm for the load balancing problem on the star network. The proposed algorithm, called the Star Clustered Dimension Exchange Method (SCDEM), is based on the Clustered Dimension Exchange Method (CDEM). The SCDEM algorithm is shown to be efficient in redistributing the load as evenly as possible among all nodes of the different factor networks.
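
The dimension-exchange idea underlying CDEM can be sketched directly on the star graph, whose generators swap the first symbol of a permutation with the i-th; the Python toy below (a simplified illustration, without SCDEM's clustering of factor networks) averages loads pairwise along one dimension at a time.

```python
from itertools import permutations

def neighbor(p, i):
    """Star-graph generator: swap the first symbol with the i-th (i >= 1)."""
    q = list(p)
    q[0], q[i] = q[i], q[0]
    return tuple(q)

def dimension_exchange(load, n):
    """One sweep of dimension-exchange balancing on the star graph S_n:
    for each dimension, every node averages its load with the neighbor
    reached through that generator (each edge is handled once)."""
    for i in range(1, n):
        for p in load:
            q = neighbor(p, i)
            if p < q:                       # visit each edge once
                avg, rem = divmod(load[p] + load[q], 2)
                load[p], load[q] = avg + rem, avg
    return load

n = 4
load = {p: 0 for p in permutations(range(n))}
load[tuple(range(n))] = 240                 # all work starts on one node
for _ in range(6):                          # a few sweeps spread it out
    load = dimension_exchange(load, n)
print(max(load.values()) - min(load.values()))
```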

Keywords: load balancing, star network, interconnection networks, algorithm

Procedia PDF Downloads 304
2287 Active Surface Tracking Algorithm for All-Fiber Common-Path Fourier-Domain Optical Coherence Tomography

Authors: Bang Young Kim, Sang Hoon Park, Chul Gyu Song

Abstract:

A conventional optical coherence tomography (OCT) system has a limited imaging depth of 1-2 mm and suffers from unwanted noise such as speckle. The motorized-stage-based OCT system, using a common-path Fourier-domain optical coherence tomography (CP-FD-OCT) configuration, provides enhanced imaging depth and less noise, overcoming these limitations. Using this OCT system, OCT images were obtained from an onion and its subsurface structure was observed. The images obtained using the developed motorized-stage-based system showed greater imaging depth than the conventional system thanks to real-time, accurate depth tracking. Consequently, the developed CP-FD-OCT system and algorithms have good potential for the further development of endoscopic OCT for microsurgery.

Keywords: common-path OCT, FD-OCT, OCT, tracking algorithm

Procedia PDF Downloads 368
2286 Mobile Platform’s Attitude Determination Based on Smoothed GPS Code Data and Carrier-Phase Measurements

Authors: Mohamed Ramdani, Hassen Abdellaoui, Abdenour Boudrassen

Abstract:

Mobile platform attitude estimation approaches are mainly based on combined positioning techniques and purpose-built algorithms, aiming at a fast and accurate solution. In this work, we describe the design and implementation of an attitude determination (AD) process using only measurements from GPS sensors. The approach is based on GPS code data smoothed with a Hatch filter and raw carrier-phase measurements, integrated into an attitude algorithm based on vector measurements using the least squares (LSQ) estimation method. A GPS dataset from a static experiment is used to investigate the effectiveness of the presented approach and, consequently, to check the accuracy of the attitude estimation algorithm. Attitude results from a GPS multi-antenna setup over short baselines are introduced and analyzed. The 3D accuracy of the attitude parameters estimated using smoothed measurements is of the order of 0.27°.
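
The code-smoothing step is the classic Hatch filter; below is a minimal Python sketch (toy numbers, cycle slips assumed already repaired) in which each code pseudorange is blended with the carrier-phase-predicted range.

```python
def hatch_filter(code, phase, window=100):
    """Carrier-smoothed code pseudoranges (classic Hatch filter).

    code:  raw code pseudoranges per epoch (m)
    phase: carrier-phase ranges per epoch (m, cycle slips assumed repaired)
    """
    smoothed = [code[0]]
    for k in range(1, len(code)):
        m = min(k + 1, window)                 # smoothing weight
        predicted = smoothed[-1] + (phase[k] - phase[k - 1])
        smoothed.append(code[k] / m + predicted * (m - 1) / m)
    return smoothed

# toy data: noisy code around a linearly changing range, clean phase
truth = [20_000_000 + 0.5 * k for k in range(10)]
noise = [2.1, -1.8, 0.9, -2.4, 1.2, 0.3, -1.1, 1.9, -0.6, 0.8]
code = [r + n for r, n in zip(truth, noise)]
phase = truth                                  # phase noise is mm-level
print([round(s - t, 2) for s, t in zip(hatch_filter(code, phase), truth)])
```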

Keywords: attitude determination, GPS code data smoothing, hatch filter, carrier-phase measurements, least-squares attitude estimation

Procedia PDF Downloads 142
2285 Empowering a New Frontier in Heart Disease Detection: Unleashing Quantum Machine Learning

Authors: Sadia Nasrin Tisha, Mushfika Sharmin Rahman, Javier Orduz

Abstract:

Machine learning is applied in a wide variety of fields throughout the world, and the healthcare sector has benefited enormously from it. One of the most effective approaches for predicting human heart diseases is to use machine learning applications to classify data and predict the outcome as a classification. However, with the rapid advancement of quantum technology, quantum computing has emerged as a potential game-changer for many applications. Quantum algorithms can execute substantially faster than their classical equivalents, which can lead to significant improvements in computational performance and efficiency. In this study, we applied quantum machine learning concepts to predict coronary heart disease from text data. We experimented three times, with three different feature sets. The data set consisted of 100 data points. We pursue a comparative analysis of the two approaches, highlighting the potential benefits of quantum machine learning for predicting heart diseases.
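
The QSVM idea can be sketched without quantum hardware: an SVM is trained on a precomputed kernel matrix k(x, z) = |<φ(x)|φ(z)>|², which a quantum device would estimate. In the Python sketch below, a classical trigonometric feature map stands in for the quantum feature map, and the 100-record data set is synthetic.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

def feature_map_kernel(A, B):
    """Stand-in for a quantum kernel: in QSVM, k(x, z) = |<phi(x)|phi(z)>|^2
    is estimated on a quantum device; here a classical cosine/sine feature
    map plays that role so the sketch runs without quantum hardware."""
    phi = lambda X: np.hstack([np.cos(X), np.sin(X)]) / np.sqrt(X.shape[1])
    return (phi(A) @ phi(B).T) ** 2

# stand-in for ~100 heart-disease records with 3 selected features
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

clf = SVC(kernel="precomputed")            # the SVM consumes the kernel matrix
clf.fit(feature_map_kernel(X_tr, X_tr), y_tr)
acc = clf.score(feature_map_kernel(X_te, X_tr), y_te)
print("accuracy:", round(acc, 3))
```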

Keywords: quantum machine learning, SVM, QSVM, matrix product state

Procedia PDF Downloads 75
2284 An Insight into Early Stage Detection of Malignant Tumor by Microwave Imaging

Authors: Muhammad Hassan Khalil, Xu Jiadong

Abstract:

Detection of malignant tumors inside the female breast is a challenging field for researchers. Microwave imaging (MWI) for breast cancer diagnosis has been of interest for the last two decades and has recently been suggested for finding cancerous tissues of the breast. A simple, basic mathematical model is used throughout this paper for imaging of the malignant tumor. The authors explain the inverse scattering method in microwave imaging and present some simulation results.

Keywords: breast cancer detection, microwave imaging, tomography, tumor

Procedia PDF Downloads 396
2283 A Survey of Feature Selection and Feature Extraction Techniques in Machine Learning

Authors: Samina Khalid, Shamila Nasreen

Abstract:

Dimensionality reduction as a preprocessing step to machine learning is effective in removing irrelevant and redundant data, increasing learning accuracy, and improving result comprehensibility. However, the recent increase in the dimensionality of data poses a severe challenge to many existing feature selection and feature extraction methods with respect to efficiency and effectiveness. In the fields of machine learning and pattern recognition, dimensionality reduction is an important area in which many approaches have been proposed. In this paper, some widely used feature selection and feature extraction techniques are analyzed in terms of how effectively they can be used to achieve high learning performance, ultimately improving the predictive accuracy of classifiers. A brief analysis of the strengths and weaknesses of some widely used dimensionality reduction methods is presented.
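
The selection-versus-extraction distinction is easy to demonstrate; the scikit-learn sketch below (an illustration, not drawn from the survey) compares keeping the top-k original features by mutual information against projecting onto k principal components, with a common classifier downstream.

```python
from sklearn.datasets import load_breast_cancer
from sklearn.decomposition import PCA
from sklearn.feature_selection import SelectKBest, mutual_info_classif
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

X, y = load_breast_cancer(return_X_y=True)
clf = LogisticRegression(max_iter=5000)

pipelines = {
    "selection (top-10 by mutual information)": make_pipeline(
        StandardScaler(), SelectKBest(mutual_info_classif, k=10), clf),
    "extraction (10 PCA components)": make_pipeline(
        StandardScaler(), PCA(n_components=10), clf),
}
for name, pipe in pipelines.items():
    # selection keeps interpretable original features; extraction builds new ones
    print(name, round(cross_val_score(pipe, X, y, cv=5).mean(), 3))
```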

Keywords: age-related macular degeneration, feature selection, feature subset selection, feature extraction/transformation, FSAs, relief, correlation-based method, PCA, ICA

Procedia PDF Downloads 475
2282 Towards a Complete Automation Feature Recognition System for Sheet Metal Manufacturing

Authors: Bahaa Eltahawy, Mikko Ylihärsilä, Reino Virrankoski, Esko Petäjä

Abstract:

Sheet metal processing is highly automated, but the step from product models to production machine control still requires human intervention. This can cause time-consuming bottlenecks in the production process and increase the risk of human error. In this paper we present a system which automatically recognizes features from the CAD model of a sheet metal product. Using these features, the system produces a complete model of the particular sheet metal product, which is then used as input for the sheet metal processing machine. The system is currently implemented, capable of recognizing more than 11 of the most common sheet metal structural features, and the procedure is fully automated. This provides remarkable savings in production time and protects against human error. This paper presents the developed system architecture, the applied algorithms, and the system software implementation and testing.

Keywords: feature recognition, automation, sheet metal manufacturing, CAD, CAM

Procedia PDF Downloads 335
2281 A Dual-Mode Infinite Horizon Predictive Control Algorithm for Load Tracking in PUSPATI TRIGA Reactor

Authors: Mohd Sabri Minhat, Nurul Adilla Mohd Subha

Abstract:

The PUSPATI TRIGA Reactor (RTP), Malaysia, reached its first criticality on June 28, 1982, with a thermal power capacity of 1 MW. The present power control method, used to control the fission process in RTP, is the Feedback Control Algorithm (FCA), a conventional Proportional-Integral (PI) controller. It is important to ensure that the core power remains stable and follows load tracking within an acceptable steady-state error and with minimum settling time to reach steady-state power. At present, the system cannot be considered well-posed in terms of power tracking performance. However, there is still potential to improve on the current performance by developing a novel, next-generation nuclear core power control design. In this paper, the dual-mode prediction proposed in Optimal Model Predictive Control (OMPC) is presented in a state-space model to control the core power. The model for core power control is based on mathematical models of the reactor core, OMPC, and a control rod selection algorithm. The mathematical models of the reactor core comprise neutronic, thermal-hydraulic, and reactivity models. The dual-mode prediction in OMPC, covering transient and terminal modes, is based on the implementation of a Linear Quadratic Regulator (LQR) in the design of the core power control. The combination of dual-mode prediction and a Lyapunov argument that handles the summation of the cost function over an infinite horizon is intended to eliminate some of the fundamental weaknesses of MPC. This paper shows the behaviour of OMPC in dealing with tracking and regulation problems and disturbance rejection, and in catering for parameter uncertainty. The tracking and regulating performance of the conventional controller and OMPC is compared by numerical simulation. In conclusion, the proposed OMPC shows significant performance in load tracking and in regulating core power for a nuclear reactor, with guaranteed closed-loop stability.
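
The terminal-mode ingredient of dual-mode MPC, the LQR law applied beyond the prediction horizon, can be sketched as follows in Python; the two-state system here is a toy surrogate, not the RTP core model.

```python
import numpy as np
from scipy.linalg import solve_discrete_are

def lqr_gain(A, B, Q, R):
    """Discrete-time LQR: the terminal-mode control law u = -K x that
    dual-mode MPC assumes beyond the prediction horizon."""
    P = solve_discrete_are(A, B, Q, R)
    return np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)

# toy 2-state surrogate for linearized core power dynamics (illustrative only)
A = np.array([[1.0, 0.05], [0.0, 0.95]])
B = np.array([[0.0], [0.1]])                 # control rod input channel
K = lqr_gain(A, B, Q=np.diag([10.0, 1.0]), R=np.array([[1.0]]))

x = np.array([0.2, 0.0])                     # 20% power-tracking error
for _ in range(50):
    x = (A - B @ K) @ x                      # closed loop is stabilising
print(np.round(x, 4))                        # error driven towards zero
```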

Keywords: core power control, dual-mode prediction, load tracking, optimal model predictive control

Procedia PDF Downloads 146
2280 Fault Diagnosis and Fault-Tolerant Control of Bilinear-Systems: Application to Heating, Ventilation, and Air Conditioning Systems in Multi-Zone Buildings

Authors: Abderrhamane Jarou, Dominique Sauter, Christophe Aubrun

Abstract:

Over the past decade, the growing demand for energy efficiency in buildings has attracted the attention of the control community. Failures in HVAC (heating, ventilation and air conditioning) systems in buildings can have a significant impact on the desired and expected energy performance of buildings, as well as on user comfort. Fault-Tolerant Control (FTC) is a recent technology area that studies the adaptation of control algorithms to faulty operating conditions of a system; its application to HVAC systems has gained attention in the last two decades. The objective is to keep the variations in system performance due to faults within an acceptable range with respect to the desired nominal behavior. This paper considers the so-called active approach, which is based on a fault detection and identification scheme combined with a control reconfiguration algorithm that determines a new set of control parameters so that the reconfigured performance is 'as close as possible', in some sense, to the nominal performance. Thermal models of buildings and their HVAC systems are described by non-linear (usually bilinear) equations. Most of the work carried out so far in FDI (fault diagnosis and isolation) or FTC considers a linearized model of the studied system; however, such a model is only valid over a reduced range of variation. This study presents a new fault diagnosis (FD) algorithm based on a bilinear observer for the detection and accurate estimation of the magnitude of HVAC system failures. The main contribution of the proposed FD algorithm is that, instead of using specific linearized models, it inherits the structure of the actual bilinear model of the building thermal dynamics. As an immediate consequence, the algorithm is applicable to a wide range of unpredictable operating conditions, i.e., weather dynamics, outdoor air temperature, and zone occupancy profile. A bilinear fault detection observer is proposed for a bilinear system with unknown inputs. The residual vector in the observer design is decoupled from the unknown inputs and, under certain conditions, is made sensitive to all faults. Sufficient conditions are given for the existence of the observer, and results are given for the explicit computation of the observer design matrices. Dedicated observer schemes (DOS) are considered for sensor FDI, while unknown-input bilinear observers are considered for actuator or system component FDI. The proposed FTC strategy works as follows: at the first level, FDI algorithms are implemented, making it also possible to estimate the magnitude of the fault; once the fault is detected, the fault estimate is used to feed the second level and reconfigure the control law so that the expected performance is recovered. The paper is organized as follows: a general structure for fault-tolerant control of buildings is first presented and the building model under consideration is introduced; then, the observer-based design for fault diagnosis of bilinear systems is studied; the FTC approach is developed next; finally, a simulation example illustrates the proposed method.
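
A minimal residual-generation sketch in Python for a scalar bilinear system (illustrative numbers, with an assumed rather than designed observer gain): the observer keeps the bilinear term instead of linearizing, and an actuator fault injected mid-simulation shows up as a persistent residual whose size estimates the fault magnitude.

```python
# scalar bilinear zone-temperature surrogate: x+ = a*x + n*x*u + b*u + e*f
a, n, b, e, c = 0.96, -0.02, 0.03, 1.0, 1.0
L = 0.5                                # observer gain (assumed, not designed)

def plant(x, u, f):
    return a * x + n * x * u + b * u + e * f

def observer(xh, u, y):
    """The observer inherits the bilinear term n*x*u instead of linearizing,
    so the residual r = y - c*xh stays meaningful over wide operating ranges;
    a persistent nonzero residual flags (and sizes) a fault."""
    r = y - c * xh
    return a * xh + n * xh * u + b * u + L * r, r

x, xh = 20.0, 18.0
for k in range(60):
    u = 0.8                            # damper/valve command
    f = 0.3 if k >= 30 else 0.0        # actuator fault appears at k = 30
    x = plant(x, u, f)
    xh, r = observer(xh, u, c * x)
print("residual after fault:", round(r, 3))
```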

Keywords: bilinear systems, fault diagnosis, fault-tolerant control, multi-zones building

Procedia PDF Downloads 157
2279 The Essence and Attribution of Intellectual Property Rights Generated in the Digitization of Intangible Cultural Heritage

Authors: Jiarong Zhang

Abstract:

Digitizing intangible cultural heritage is a complex and comprehensive process from which various intellectual property rights may be generated. Digitizing may be a repackaging process of cultural heritage, which creates copyrights; recording folk songs and indigenous performances can create 'related rights'. At the same time, digitizing intangible cultural heritage may unintentionally infringe the intellectual property rights of others. Recording the religious rituals of indigenous communities without authorization can violate the moral rights of the ceremony participants; making digital copies of rock paintings may infringe the right of reproduction. In addition, several parties are involved in the digitization process: indigenous peoples, museums, and archives can be holders of cultural heritage; companies and research institutions can be technology providers; internet platforms can be promoters and sellers; and the public, as well as the groups above, can be beneficiaries. Where diverse intellectual property rights meet various parties, problems and disputes can easily arise. What types of intellectual property rights are generated in the digitization process? What is the essence of these rights? To whom should these rights belong? How can intellectual property be used to protect the digitization of cultural heritage, and how can infringing the intellectual property rights of others be avoided? While digitization has been regarded as an effective approach to preserving intangible cultural heritage, the related intellectual property issues have not received full attention and discussion, and the parties involved in the digitization process may thus face intellectual property infringement lawsuits. This article explores these problems from the intersection of intellectual property law and cultural heritage. Taking a comparative approach, the paper analyzes related legal documents and cases and sheds some light on the questions listed above. The findings show that, although most countries have no intellectual property laws targeting cultural heritage, the stakeholders involved can seek protection from existing intellectual property rights by following the suggestions of the article. The research contributes to the digitization of intangible cultural heritage from a legal and policy perspective.

Keywords: copyright, digitization, intangible cultural heritage, intellectual property, Internet platforms

Procedia PDF Downloads 127
2278 A Mathematical Equation to Calculate Stock Price of Different Growth Model

Authors: Weiping Liu

Abstract:

This paper presents an equation to calculate stock prices under different growth models. The equation is mathematically derived using the discounted cash flow method. It has the advantages of being very easy to use and very accurate, and it can be used even when the first stage is lengthy. The equation is more general because it covers all three popular stock price models. It can be programmed into a financial calculator or an electronic spreadsheet, and it can be extended to a multistage model. It is more versatile and efficient than the traditional methods.
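
For concreteness, here is a small Python sketch of the underlying two-stage discounted-cash-flow computation that such a closed-form equation compresses; the constant-growth and no-growth models fall out as the special cases n = 0 and g1 = g2.

```python
def two_stage_price(d0, g1, n, g2, r):
    """Price of a stock whose dividend grows at g1 for n years, then at g2
    forever (Gordon terminal value), all discounted at the required rate r."""
    price, d = 0.0, d0
    for t in range(1, n + 1):            # stage 1: explicit high-growth years
        d *= 1 + g1
        price += d / (1 + r) ** t
    terminal = d * (1 + g2) / (r - g2)   # stage 2: perpetuity at maturity
    return price + terminal / (1 + r) ** n

# $2 dividend, 8% growth for 10 years, 3% thereafter, 10% required return
print(round(two_stage_price(2.0, 0.08, 10, 0.03, 0.10), 2))
```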

Keywords: stock price, multistage model, different growth model, discounted cash flow method

Procedia PDF Downloads 388
2277 Software Component Identification from Its Object-Oriented Code: Graph Metrics Based Approach

Authors: Manel Brichni, Abdelhak-Djamel Seriai

Abstract:

Systems are increasingly complex. To reduce this complexity, an abstract view of the system can simplify its development. To this end, we propose a method to decompose systems into subsystems while reducing their coupling. These subsystems represent components. Starting from an existing object-oriented system, the main idea of our approach is to model all entities of the object-oriented source code as graphs. Such a model is easy to handle, so we can apply restructuring algorithms based on graph metrics. The particularity of our approach consists in integrating, in addition to standard metrics such as coupling and cohesion, graph metrics that give more precision during component identification. To treat this problem, we rely on the ROMANTIC approach, which proposes component-based software architecture recovery from an object-oriented system.
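
A toy Python illustration of the idea (a simplified stand-in, not the ROMANTIC tooling): classes and their dependencies form a graph, a standard partitioning proposes candidate components, and coupling/cohesion metrics score them; the networkx library is assumed available.

```python
import networkx as nx

# toy dependency graph of classes in an object-oriented system
edges = [("Order", "Invoice"), ("Order", "Customer"), ("Invoice", "Customer"),
         ("Parser", "Lexer"), ("Lexer", "Token"), ("Parser", "Token"),
         ("Order", "Parser")]                  # one weak cross link
G = nx.Graph(edges)

# candidate components from a standard graph partitioning
parts = nx.algorithms.community.greedy_modularity_communities(G)

def cohesion(G, part):
    """Internal edges relative to the maximum possible inside the component."""
    k = len(part)
    return G.subgraph(part).number_of_edges() / (k * (k - 1) / 2)

def coupling(G, part):
    """Edges crossing the component boundary (lower is better)."""
    return sum(1 for u, v in G.edges if (u in part) != (v in part))

for part in parts:
    print(sorted(part), "cohesion:", round(cohesion(G, part), 2),
          "coupling:", coupling(G, part))
```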

Keywords: software reengineering, software component and interfaces, metrics, graphs

Procedia PDF Downloads 483
2276 Channel Estimation/Equalization with Adaptive Modulation and Coding over Multipath Faded Channels for WiMAX

Authors: B. Siva Kumar Reddy, B. Lakshmi

Abstract:

WiMAX has adopted Adaptive Modulation and Coding (AMC) in OFDM to achieve higher data rates and error-free transmission. AMC schemes employ Channel State Information (CSI) to efficiently utilize the channel, maximize throughput, and obtain better spectral efficiency. The CSI is given to the transmitter by channel estimators. In this paper, LSE (Least Square Error) and MMSE (Minimum Mean Square Error) estimators are suggested, and BER (Bit Error Rate) performance is analyzed. Channel equalization is also integrated with the AMC-OFDM system and presented with the Constant Modulus Algorithm (CMA) and Least Mean Square (LMS) algorithms, together with a convergence rate analysis. Simulation results show that increasing the modulation scheme size improves throughput along with the BER value; there is a trade-off among modulation size, throughput, BER value and spectral efficiency. The results also show the need for channel estimation and equalization in high data rate systems.
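
A minimal Python sketch of the LMS half of the equalization story (illustrative BPSK over a toy multipath channel, not the WiMAX simulation itself), showing the squared error shrinking as the taps adapt:

```python
import numpy as np

def lms_equalize(received, training, n_taps=7, mu=0.01):
    """LMS equalizer: taps are nudged along the negative gradient of the
    squared error against a known training sequence (decision-directed
    operation would take over afterwards)."""
    w = np.zeros(n_taps)
    errors = []
    for k in range(n_taps - 1, len(received)):
        x = received[k - n_taps + 1:k + 1][::-1]   # tap-delay line
        e = training[k] - w @ x                    # instantaneous error
        w += mu * e * x                            # LMS update
        errors.append(e ** 2)
    return w, errors

# BPSK symbols through a 3-tap multipath channel plus noise
rng = np.random.default_rng(0)
sym = rng.choice([-1.0, 1.0], size=4000)
channel = np.array([1.0, 0.4, 0.2])
rx = np.convolve(sym, channel)[:len(sym)] + 0.05 * rng.normal(size=len(sym))

w, errors = lms_equalize(rx, sym)
print("mean squared error, first vs last 500 symbols:",
      round(np.mean(errors[:500]), 3), round(np.mean(errors[-500:]), 3))
```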

Keywords: AMC, CSI, CMA, OFDM, OFDMA, WiMAX

Procedia PDF Downloads 381
2275 A Photoredox (C)sp³-(C)sp² Coupling Method Comparison Study

Authors: Shasline Gedeon, Tiffany W. Ardley, Ying Wang, Nathan J. Gesmundo, Katarina A. Sarris, Ana L. Aguirre

Abstract:

Drug discovery and delivery involve drug targeting, an approach that helps find a drug against a chosen target through high-throughput screening and other methods, identifying the physical properties of the potential lead compound along the way. The physical properties of potential drug candidates have been an imperative focus since the unveiling of Lipinski's Rule of 5 for oral drugs. Throughout a compound's journey from discovery, through clinical phase trials, to becoming a drug on the market, the desirable properties are optimized while minimizing or eliminating toxicity and undesirable properties. In the pharmaceutical industry, the ability to generate molecules in parallel with maximum efficiency is a substantial factor, achieved through sp²-sp² carbon coupling reactions such as Suzuki couplings. These reactions allow aromatic fragments to be added to a compound. More recent literature has found benefits in decreasing aromaticity, calling instead for more sp³-sp² carbon coupling reactions. The objective of this project is to compare various sp³-sp² carbon coupling methods and reaction conditions, collecting data on production of the desired product. Four different coupling methods were tested across three cores and 4-5 installation groups per method; each method was run under three distinct reaction conditions. The tested methods include Photoredox Decarboxylative Coupling, Photoredox Potassium Alkyl Trifluoroborate (BF3K) Coupling, Photoredox Cross-Electrophile (PCE) Coupling, and Weix Cross-Electrophile (WCE) Coupling. The results showed that the Decarboxylative method rarely yielded product despite the several literature conditions chosen. The BF3K and PCE methods produced competitive results. Of the two cross-electrophile coupling methods, the Photoredox method surpassed the Weix method on numerous accounts. The results will be used to build future libraries.

Keywords: drug discovery, high throughput chemistry, photoredox chemistry, sp³-sp² carbon coupling methods

Procedia PDF Downloads 125
2274 Use of Giant Magneto Resistance Sensors to Detect Micron to Submicron Biologic Objects

Authors: Manon Giraud, Francois-Damien Delapierre, Guenaelle Jasmin-Lebras, Cecile Feraudet-Tarisse, Stephanie Simon, Claude Fermon

Abstract:

Early diagnosis, or the detection of harmful substances at low levels, is a growing field of high interest. The ideal test should be cheap, easy to use, quick, reliable, specific, and have a very low detection limit. By combining the high specificity of antibody-functionalized magnetic beads used to immuno-capture biologic objects with the high sensitivity of GMR-based sensors, it is possible to detect these biologic objects one by one, be it a cancerous cell, a bacterium or a disease biomarker. The simplicity of the detection process makes its use possible even for untrained staff. Giant magnetoresistance (GMR) is a relatively recently discovered effect consisting of a change in the electrical resistance of certain conductive layers when exposed to a magnetic field. This effect allows the detection of very small variations of magnetic field (typically a few tens of nanotesla). Magnetic nanobeads coated with antibodies targeting the analytes are mixed with a biological sample (blood, saliva) and incubated for 45 min. The mixture is then injected into a very simple microfluidic chip and circulates above a GMR sensor that detects changes in the surrounding magnetic field. Isolated magnetic particles do not create a field sufficient to be detected. Therefore, only the biologic objects surrounded by several antibody-functionalized magnetic beads (those captured via the complementary antigens) are detected when they move above the sensor. Proof of concept has been carried out on NS1 mouse cancerous cells diluted in PBS and bound to 200 nm magnetic particles. Signals were detected in cell-containing samples, while none were recorded for negative controls. A binary response was hence assessed for this first biological model. The precise quantification of the analytes and their detection in highly diluted solutions are the steps now in progress.

Keywords: early diagnosis, giant magnetoresistance, lab-on-a-chip, submicron particle

Procedia PDF Downloads 235
2273 Empirical Decomposition of Time Series of Power Consumption

Authors: Noura Al Akkari, Aurélie Foucquier, Sylvain Lespinats

Abstract:

Load monitoring is a management process for energy consumption aimed at energy savings and energy efficiency. Non-Intrusive Load Monitoring (NILM) is one load monitoring method used for disaggregation purposes. NILM is a technique for identifying individual appliances based on the analysis of whole-residence data retrieved from the main power meter of the house. Our NILM framework starts with data acquisition, followed by data preprocessing, event detection and feature extraction, with general appliance modeling and identification at the final stage. The event detection stage is a core component of the NILM process, since event detection techniques lead to the extraction of appliance features, and appliance features are required for the accurate identification of household devices. In this research work, we aim at developing a new event detection methodology with accurate load disaggregation to extract appliance features. The extracted time-domain features are used to tune general appliance models in the appliance identification and classification steps. We use unsupervised algorithms such as Dynamic Time Warping (DTW). The proposed method relies on detecting the areas of operation of each residential appliance based on power demand, and then detecting the times at which each selected appliance changes state. In order to fit with the capabilities of existing smart meters, we work on low-sampling-rate data with a frequency of 1/60 Hz. The data is simulated with the Load Profile Generator (LPG) software, which had not previously been considered for NILM purposes in the literature. LPG is numerical software that simulates the behaviour of people inside a house to generate residential energy consumption data. The proposed event detection method targets low-consumption loads that are difficult to detect, and it facilitates the extraction of the specific features used for general appliance modeling. In addition, the identification process includes unsupervised techniques such as DTW. To the best of our knowledge, few unsupervised techniques have been employed with low-sampling-rate data, in comparison to the many supervised techniques used for such cases. We extract a power interval within which the operation of the selected appliance falls, along with a time vector of the values delimiting the appliance's state transitions. Appliance signatures are then formed from the extracted power, geometrical and statistical features, and those signatures are used to tune general model types for appliance identification using unsupervised algorithms. The method is evaluated using both data simulated with LPG and the real-world Reference Energy Disaggregation Dataset (REDD). For this, we compute confusion-matrix-based performance metrics: accuracy, precision, recall and error rate. The performance of our methodology is then compared with other detection techniques previously reported in the literature, such as techniques based on statistical variations and abrupt changes (variance sliding window and cumulative sum).
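
The DTW matcher at the heart of the identification step is short enough to sketch in full; the Python toy below compares a measured 1/60 Hz power transient against two stored signatures (illustrative watt values), tolerating the time stretching that defeats plain point-by-point matching.

```python
import numpy as np

def dtw_distance(a, b):
    """Classic dynamic-time-warping distance between two 1-D sequences;
    used here to match a measured power transient against stored appliance
    signatures despite time shifts and stretching."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

# toy 1/60 Hz power transients (watts): same fridge cycle, slightly stretched
fridge_sig = [0, 120, 118, 117, 116, 0]
measured = [0, 0, 121, 119, 117, 117, 115, 0]
kettle_sig = [0, 1800, 1795, 0]
print(dtw_distance(measured, fridge_sig) < dtw_distance(measured, kettle_sig))
```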

Keywords: general appliance model, non-intrusive load monitoring, event detection, unsupervised techniques

Procedia PDF Downloads 60
2272 Problems of Boolean Reasoning Based Biclustering Parallelization

Authors: Marcin Michalak

Abstract:

Biclustering is a method of two-dimensional data analysis. For several years it has been possible to express this task in terms of Boolean reasoning, for processing continuous, discrete and binary data. The mathematical background of this approach, namely the proven ability to induce exact and inclusion-maximal biclusters fulfilling assumed criteria, is a strong advantage of the method. Unfortunately, the core of the method has quite high computational complexity. In this paper the basics of the Boolean reasoning approach to biclustering are presented, and in this context the problems of parallelizing the computation are raised.

Keywords: Boolean reasoning, biclustering, parallelization, prime implicant

Procedia PDF Downloads 110