Search results for: Principal Component Analysis (PCA)
29967 Emotion Recognition with Occlusions Based on Facial Expression Reconstruction and Weber Local Descriptor
Authors: Jadisha Cornejo, Helio Pedrini
Abstract:
Recognition of emotions based on facial expressions has received increasing attention from the scientific community in recent years. Several fields of application can benefit from facial emotion recognition, such as behavior prediction, interpersonal relations, human-computer interaction, and recommendation systems. In this work, we develop and analyze an emotion recognition framework based on facial expressions that is robust to occlusions through the Weber Local Descriptor (WLD). Initially, the occluded facial expressions are reconstructed using an extension of Robust Principal Component Analysis (RPCA). Then, WLD features are extracted from the facial expression representation, as well as Local Binary Patterns (LBP) and Histograms of Oriented Gradients (HOG). The feature vector space is reduced using Principal Component Analysis (PCA) and Linear Discriminant Analysis (LDA). Finally, K-Nearest Neighbor (K-NN) and Support Vector Machine (SVM) classifiers are used to recognize the expressions. Experimental results on three public datasets demonstrated that the WLD representation achieved competitive accuracy rates for occluded and non-occluded facial expressions compared to other approaches available in the literature.
Keywords: emotion recognition, facial expression, occlusion, fiducial landmarks
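The dimensionality-reduction and classification stage described above can be sketched as follows; this is a minimal illustration assuming pre-extracted WLD/LBP/HOG feature vectors, with placeholder data and illustrative parameter values rather than the authors' actual configuration.

```python
# Minimal sketch of the PCA + LDA reduction and K-NN/SVM classification stage.
# Feature extraction (WLD/LBP/HOG) is assumed to have been done already; X is an
# (n_samples, n_features) array and y holds the emotion labels.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.svm import SVC
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 512))      # placeholder feature vectors
y = rng.integers(0, 7, size=300)     # placeholder labels for 7 basic emotions

# PCA removes redundancy; LDA then projects onto class-discriminative axes.
svm_clf = make_pipeline(PCA(n_components=50),
                        LinearDiscriminantAnalysis(n_components=6),
                        SVC(kernel="rbf"))
knn_clf = make_pipeline(PCA(n_components=50),
                        LinearDiscriminantAnalysis(n_components=6),
                        KNeighborsClassifier(n_neighbors=5))

print("SVM  CV accuracy:", cross_val_score(svm_clf, X, y, cv=5).mean())
print("K-NN CV accuracy:", cross_val_score(knn_clf, X, y, cv=5).mean())
```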
Procedia PDF Downloads 183
29966 Site Analysis’ Importance as a Valid Factor in Building Design
Authors: Mekwa Eme, Anya Chukwuma
Abstract:
The act of evaluating a particular site physically and socially in order to create a good design solution that addresses the physical and interior environment of the location is known as architectural site analysis. This essay describes site analysis as a valid design factor. According to the introduction and supporting research, site evaluation and analysis are crucial to good design in terms of topography, orientation, site size, accessibility, rainfall, wind direction, and times of sunrise and sunset. Methodology: Both quantitative and qualitative analyses are used in this paper, drawing on primary and secondary data collection. The information was gathered via the case study approach, published literature, journals, the internet, a local poll, oral interviews, inquiries, and in-person interviews. The purpose is to clarify the benefits of site analysis for the design process and its implications for the working or building stage. Results: Each site's criteria are unique in terms of factors such as soil, plants, trees, accessibility, topography, and security. This makes it easier for the architect and environmentalist to decide on the idea, shape, and supporting structures of the design. It is crucial because, before any design work is done, the nature of the target location is determined through site visits and research. The location, contours, site features, and accessibility are just a few of the topics included in this site study. In order for students and working architects to understand the nature of the site they will be working on, site analysis is a key component of architectural education. The building's orientation, the site's circulation, and the sustainability of the site may all be determined with thorough research of the site's features.
Keywords: analysis, climate, statistics, design
Procedia PDF Downloads 250
29965 Supervised-Component-Based Generalised Linear Regression with Multiple Explanatory Blocks: THEME-SCGLR
Authors: Bry X., Trottier C., Mortier F., Cornu G., Verron T.
Abstract:
We address component-based regularization of a Multivariate Generalized Linear Model (MGLM). A set of random responses Y is assumed to depend, through a GLM, on a set X of explanatory variables, as well as on a set T of additional covariates. X is partitioned into R conceptually homogeneous blocks X1, ..., XR, viewed as explanatory themes. Variables within each Xr are assumed to be numerous and redundant. Thus, Generalised Linear Regression (GLR) demands regularization with respect to each Xr. By contrast, variables in T are assumed to have been selected so as to demand no regularization. Regularization is performed by searching each Xr for an appropriate number of orthogonal components that both contribute to modeling Y and capture relevant structural information in Xr. We propose a very general criterion to measure the structural relevance (SR) of a component in a block, and show how to take SR into account within a Fisher-scoring-type algorithm in order to estimate the model. We show how to deal with mixed-type explanatory variables. The method, named THEME-SCGLR, is tested on simulated data.
Keywords: Component-Model, Fisher Scoring Algorithm, GLM, PLS Regression, SCGLR, SEER, THEME
Procedia PDF Downloads 397
29964 A Stable Method for Determination of the Number of Independent Components
Authors: Yuyan Yi, Jingyi Zheng, Nedret Billor
Abstract:
Independent component analysis (ICA) is one of the most commonly used blind source separation (BSS) techniques for signal pre-processing, such as noise reduction and feature extraction. The main parameter in the ICA method is the number of independent components (ICs). Although there have been several methods for determining the number of ICs, this important parameter has not been given sufficient attention. In this study, we review the most used methods for determining the number of ICs and discuss their advantages and disadvantages. Further, we propose an improved version of the column-wise ICAByBlock method for determining the number of ICs. To assess the performance of the proposed method, we compare the column-wise ICAByBlock with several existing methods through different ICA methods by using simulated and real signal data. Results show that the proposed column-wise ICAByBlock is an effective and stable method for determining the optimal number of components in ICA. This method is simple, and results can be demonstrated intuitively with good visualizations.
Keywords: independent component analysis, optimal number, column-wise, correlation coefficient, cross-validation, ICAByBlock
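The ICAByBlock procedure itself is not detailed in the abstract; the sketch below only illustrates the general idea (an assumption on our part) of judging a candidate number of components by how consistently components extracted from two column-wise blocks of the data correlate with each other, using scikit-learn's FastICA on synthetic signals.

```python
# Illustrative block-correlation check for choosing the number of ICs. This is not
# the authors' exact ICAByBlock algorithm; it only shows the stability idea.
import numpy as np
from sklearn.decomposition import FastICA

def stability_score(X, n_components, seed=0):
    """Mean best-match |correlation| between ICs of two column-wise blocks."""
    half = X.shape[1] // 2
    Sa = FastICA(n_components=n_components, random_state=seed,
                 max_iter=1000).fit_transform(X[:, :half])
    Sb = FastICA(n_components=n_components, random_state=seed,
                 max_iter=1000).fit_transform(X[:, half:])
    C = np.abs(np.corrcoef(Sa.T, Sb.T)[:n_components, n_components:])
    return C.max(axis=1).mean()

# Two true sources mixed into 10 observed channels plus noise.
rng = np.random.default_rng(1)
sources = np.c_[np.sin(np.linspace(0, 30, 2000)),
                np.sign(np.sin(np.linspace(0, 17, 2000)))]
X = sources @ rng.normal(size=(2, 10)) + 0.05 * rng.normal(size=(2000, 10))

for k in range(1, 6):
    # The score should stay high up to the true number of sources and drop after.
    print(k, round(stability_score(X, k), 3))
```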
Procedia PDF Downloads 99
29963 Revealing the Structural and Dynamic Properties of Betaine Aldehyde Dehydrogenase 2 from Rice (Oryza sativa): Simulation Studies
Authors: Apisaraporn Baicharoen, Prapasiri Pongprayoon
Abstract:
Betaine aldehyde dehydrogenase 2 (BADH2) is an enzyme that inhibits the accumulation of 2-acetyl-1-pyrroline (2AP), a potent flavor compound in rice fragrance. BADH2 contains three domains (NAD-binding, substrate-binding, and oligomerization domains). It catalyzes the oxidation of amino aldehydes. The lack of BADH2 results in the formation of 2AP and, consequently, an increase in rice fragrance. To date, inadequate data on BADH2 structure and function are available. An insight into the nature of BADH2 can serve as one of the key starting points for the production of high-quality fragrant rice. In this study, we therefore constructed a homology model of BADH2 and employed 500-ns molecular dynamics (MD) simulations to primarily understand the structural and dynamic properties of BADH2. Initially, the Ramachandran plot confirms the good quality of the modeled protein structure. Principal Component Analysis (PCA) was also calculated to capture the protein dynamics. Among the three domains, the results show that the NAD-binding site is more flexible. Moreover, interactions from key amino acids (N162, E260, C294, and Y419) that are crucial for function are investigated.
Keywords: betaine aldehyde dehydrogenase 2, fragrant rice, homology modeling, molecular dynamics simulations
Procedia PDF Downloads 216
29962 Assessment of Soil Quality Indicators in Rice Soil of Tamil Nadu
Authors: Kaleeswari R. K., Seevagan L.
Abstract:
Soil quality in an agroecosystem is influenced by the cropping system, water and soil fertility management. A valid soil quality index would help to assess the soil and crop management practices for desired productivity and soil health. Soil quality indices also provide an early indication of soil degradation and of needed remedial and rehabilitation measures. Imbalanced fertilization and inadequate organic carbon dynamics deteriorate soil quality in an intensive cropping system. The rice soil ecosystem is different from other arable systems since rice is grown under submergence, which requires a different set of key soil attributes for enhancing soil quality and productivity. Assessment of the soil quality index involves indicator selection, indicator scoring and combination into one comprehensive index. The most appropriate indicators to evaluate soil quality can be selected by establishing the minimum data set, which can be screened by linear and multiple regression, factor analysis and score functions. This investigation was carried out in intensive rice cultivating regions (having >1.0 lakh hectares) of Tamil Nadu, viz., Thanjavur, Thiruvarur, Nagapattinam, Villupuram, Thiruvannamalai, Cuddalore and Ramanathapuram districts. In each district, an intensive rice growing block was identified. In each block, two sampling grids (10 x 10 sq. km) were used with a sampling depth of 10-15 cm. Using GIS coordinates, soil sampling was carried out at various locations in the study area. The numbers of soil sampling points were 41, 28, 28, 32, 37, 29 and 29 in Thanjavur, Thiruvarur, Nagapattinam, Cuddalore, Villupuram, Thiruvannamalai and Ramanathapuram districts, respectively. Principal Component Analysis is a data reduction tool used to select some of the potential indicators. A Principal Component is a linear combination of different variables that represents the maximum variance of the dataset. Principal Components with eigenvalues equal to or higher than 1.0 were taken as the minimum data set. Principal Component Analysis was used to select the representative soil quality indicators in rice soils based on factor loading values and contribution percent values. Variables having significant differences within the production system were used for the preparation of the minimum data set. Each Principal Component explained a certain amount of variation (%) in the total dataset. This percentage provided the weight for the variables. The final Principal Component Analysis based soil quality equation is SQI = ∑ᵢ (Wᵢ × Sᵢ), where Sᵢ is the score for the subscripted variable and Wᵢ is the weighting factor derived from PCA. Higher index scores mean better soil quality. Soil respiration, soil available nitrogen and potentially mineralizable nitrogen were assessed as soil quality indicators in rice soils of the Cauvery Delta zone covering Thanjavur, Thiruvarur and Nagapattinam districts. Soil available phosphorus could be used as a soil quality indicator of rice soils in the Cuddalore district. In rain-fed rice ecosystems of coastal sandy soil, DTPA-Zn could be used as an effective soil quality indicator. Among the soil parameters selected from Principal Component Analysis, Microbial Biomass Nitrogen could be used as a quality indicator for rice soils of the Villupuram district. The Cauvery Delta zone has a better SQI compared with the other intensive rice growing zones of Tamil Nadu.
Keywords: soil quality index, soil attributes, soil mapping, rice soil
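The indicator-selection and index-calculation workflow outlined above can be sketched roughly as follows; the exact scoring functions are not given in the abstract, so a simple linear "more is better" score is assumed, and the data are synthetic.

```python
# Sketch of a PCA-based soil quality index (SQI): keep PCs with eigenvalue >= 1,
# weight them by explained variance, pick one representative indicator per PC,
# and combine indicator scores as SQI = sum_i W_i * S_i. Scoring is assumed.
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
X = rng.normal(size=(60, 8))                    # 60 samples x 8 candidate soil indicators

Z = StandardScaler().fit_transform(X)
pca = PCA().fit(Z)

keep = pca.explained_variance_ >= 1.0           # Kaiser rule: eigenvalue >= 1
weights = pca.explained_variance_ratio_[keep]   # weight W_i = variance explained by each PC

# For each retained PC, take the indicator with the highest absolute loading
# as the representative member of the minimum data set.
chosen = [int(np.abs(load).argmax()) for load in pca.components_[keep]]

# Linear "more is better" scoring S_i scaled to [0, 1] (assumed, for illustration).
lo, hi = X[:, chosen].min(axis=0), X[:, chosen].max(axis=0)
S = (X[:, chosen] - lo) / (hi - lo)

SQI = S @ weights                               # SQI = sum_i W_i * S_i
print("Indicators selected for the minimum data set:", chosen)
print("Mean SQI over samples:", round(float(SQI.mean()), 3))
```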
Procedia PDF Downloads 87
29961 Bi-Component Particle Segregation Studies in a Spiral Concentrator Using Experimental and CFD Techniques
Authors: Prudhvinath Reddy Ankireddy, Narasimha Mangadoddy
Abstract:
Spiral concentrators are commonly used in various industries, including mineral and coal processing, to efficiently separate materials based on their density and size. In these concentrators, a mixture of solid particles and fluid (usually water) is introduced as feed at the top of a spiral channel. As the mixture flows down the spiral, centrifugal and gravitational forces act on the particles, causing them to stratify based on their density and size. Spiral flows exhibit complex fluid dynamics, and the interactions involve multiple phases and components in the process. Understanding the behavior of these phases within the spiral concentrator is crucial for achieving efficient separation. An experimental bi-component particle interaction study is conducted in this work utilizing magnetite (higher density) and silica (lower density) in different proportions processed in the spiral concentrator. The observed separation reveals that denser particles accumulate towards the inner region of the spiral trough, while a significant concentration of lighter particles is found close to the outer edge. The 5th turn of the spiral trough is partitioned into five zones to achieve a comprehensive distribution analysis of bi-component particle segregation. Samples are then gathered from these individual streams using an in-house sample collector, and subsequent analysis is conducted to assess component segregation. Along the trough, there was a decline in the concentration of coarser particles, accompanied by an increase in the concentration of lighter particles. The segregation pattern indicates that the heavier coarse component accumulates in the inner zone, whereas the lighter fine component collects in the outer zone. The middle zone primarily consists of heavier fine particles and lighter coarse particles. The zone-wise results reveal that a significant fraction of segregation occurs in the inner and middle zones. Finer magnetite and silica particles predominantly accumulate in the outer zones with the smallest fraction of segregation. Additionally, numerical simulations are carried out using a computational fluid dynamics (CFD) model based on the volume of fluid (VOF) approach incorporating the RSM turbulence model. The discrete phase model (DPM) is employed for particle tracking, thereby capturing the particle segregation of magnetite and silica along the spiral trough.
Keywords: spiral concentrator, bi-component particle segregation, computational fluid dynamics, discrete phase model
Procedia PDF Downloads 68
29960 Parametric Appraisal of Robotic Arc Welding of Mild Steel Material by Principal Component Analysis-Fuzzy with Taguchi Technique
Authors: Amruta Rout, Golak Bihari Mahanta, Gunji Bala Murali, Bibhuti Bhusan Biswal, B. B. V. L. Deepak
Abstract:
The use of industrial robots for performing welding operations is one of the chief signs of contemporary welding these days. Modeling of weld joint parameters and weld process parameters is one of the most crucial aspects of robotic welding. As weld process parameters affect the weld joint parameters differently, a multi-objective optimization technique has to be utilized to obtain an optimal setting of the weld process parameters. In this paper, a hybrid optimization technique, i.e., Principal Component Analysis (PCA) combined with fuzzy logic, has been proposed to obtain an optimal setting of weld process parameters such as wire feed rate, welding current, gas flow rate, welding speed and nozzle tip to plate distance. The weld joint parameters considered for optimization are the depth of penetration, yield strength, and ultimate strength. PCA is a very efficient multi-objective technique for converting correlated and dependent parameters, such as the weld joint parameters, into uncorrelated and independent variables. Also, in this approach, there is no need to check the correlation among responses, as no individual weights have been assigned to the responses. A fuzzy inference engine can efficiently incorporate these aspects into its internal hierarchy, thereby overcoming various limitations of existing optimization approaches. Finally, the Taguchi method is used to obtain the optimal setting of weld process parameters. It has therefore been concluded that the hybrid technique has its own advantages and can be used for quality improvement in industrial applications.
Keywords: robotic arc welding, weld process parameters, weld joint parameters, principal component analysis, fuzzy logic, Taguchi method
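The PCA step of this hybrid approach converts the correlated weld joint responses into uncorrelated component scores that can be combined into a single index for Taguchi analysis; the sketch below shows only that step, with made-up response values, and does not reproduce the authors' fuzzy inference or Taguchi stages.

```python
# Sketch of the PCA step: correlated weld joint responses (penetration depth,
# yield strength, ultimate strength) are converted into uncorrelated principal
# component scores and combined into one composite index per run. Data are
# illustrative, not the study's measurements.
import numpy as np
from sklearn.preprocessing import MinMaxScaler
from sklearn.decomposition import PCA

# Rows = experimental runs (e.g., an L9 orthogonal array), columns = responses.
responses = np.array([
    [3.1, 352.0, 478.0],
    [3.4, 360.0, 486.0],
    [2.9, 345.0, 470.0],
    [3.8, 371.0, 495.0],
    [3.5, 365.0, 488.0],
    [3.2, 355.0, 480.0],
    [4.0, 375.0, 501.0],
    [3.6, 368.0, 490.0],
    [3.3, 358.0, 483.0],
])

norm = MinMaxScaler().fit_transform(responses)   # larger-the-better normalisation
pca = PCA().fit(norm)
scores = pca.transform(norm)                     # uncorrelated PC scores per run

# Weight the scores by explained variance to obtain a single composite index
# that can subsequently be analysed with the Taguchi method.
composite = scores @ pca.explained_variance_ratio_
print("Explained variance ratios:", pca.explained_variance_ratio_.round(3))
print("Composite index per run:", composite.round(3))
```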
Procedia PDF Downloads 180
29959 Determining a Suitable Maintenance Measure for Gentelligent Components Using Case-Based Reasoning
Authors: Maximilian Winkens, Peter Nyhuis
Abstract:
Components with sensory properties, such as the gentelligent components developed at the Collaborative Research Center 653, offer a new angle on the full utilization of the remaining service life in the case of preventive maintenance. The developed methodology of component-status-driven maintenance analyses the stress data obtained during the component's useful life and, on the basis of this knowledge, assesses the type of maintenance called for. The procedure is derived from the case-based reasoning method and is elucidated in detail. The method's functionality is demonstrated with real-life data obtained during test runs of a racing car prototype.
Keywords: gentelligent component, preventive maintenance, case-based reasoning, sensory
Procedia PDF Downloads 364
29958 Analysis of Building Response from Vertical Ground Motions
Authors: George C. Yao, Chao-Yu Tu, Wei-Chung Chen, Fung-Wen Kuo, Yu-Shan Chang
Abstract:
Building structures are subjected to both horizontal and vertical ground motions during earthquakes, but only the horizontal ground motion has been extensively studied and considered in design. Most of the prevailing seismic codes assume the vertical component to be 1/2 to 2/3 of the horizontal one. In order to understand building responses to vertical ground motions, many earthquake records are studied in this paper. System identification methods (ARX model) are used to analyze the strong motions and to find the characteristics of the vertical amplification factors and the natural frequencies of buildings. Analysis results show that the vertical amplification factors for high-rise buildings and low-rise buildings are 1.78 and 2.52, respectively, and the average vertical amplification factor over all buildings is about 2. The relationship between the vertical natural frequency and building height was regressed to a suggested formula in this study. The result points out an important message: the taller the building, the greater the chance of resonance of vertical vibration in the building.
Keywords: vertical ground motion, vertical amplification factor, natural frequency, component
Procedia PDF Downloads 315
29957 Data Management System for Environmental Remediation
Authors: Elizaveta Petelina, Anton Sizo
Abstract:
Environmental remediation projects deal with a wide spectrum of data, including data collected during site assessment, execution of remediation activities, and environmental monitoring. Therefore, appropriate data management is required as a key factor for well-grounded decision making. The Environmental Data Management System (EDMS) was developed to address all necessary data management aspects, including efficient data handling and data interoperability, access to historical and current data, spatial and temporal analysis, 2D and 3D data visualization, mapping, and data sharing. The system focuses on supporting well-grounded decision making in relation to required mitigation measures and assessment of remediation success. The EDMS is a combination of enterprise and desktop level data management and Geographic Information System (GIS) tools assembled to assist environmental remediation, project planning and evaluation, and environmental monitoring of mine sites. The EDMS consists of seven main components: a Geodatabase that contains a spatial database to store and query spatially distributed data; a GIS and Web GIS component that combines desktop and server-based GIS solutions; a Field Data Collection component that contains tools for field work; a Quality Assurance (QA)/Quality Control (QC) component that combines operational procedures for QA and measures for QC; a Data Import and Export component that includes tools and templates to support project data flow; a Lab Data component that provides a connection between the EDMS and laboratory information management systems; and a Reporting component that includes server-based services for real-time report generation. The EDMS has been successfully implemented for Project CLEANS (Clean-up of Abandoned Northern Mines). Project CLEANS is a multi-year, multimillion-dollar project aimed at assessing and reclaiming 37 uranium mine sites in northern Saskatchewan, Canada. The EDMS has effectively facilitated integrated decision-making for CLEANS project managers and transparency amongst stakeholders.
Keywords: data management, environmental remediation, geographic information system, GIS, decision making
Procedia PDF Downloads 163
29956 Image Segmentation of Visual Markers in Robotic Tracking System Based on Differential Evolution Algorithm with Connected-Component Labeling
Authors: Shu-Yu Hsu, Chen-Chien Hsu, Wei-Yen Wang
Abstract:
Color segmentation is a basic and simple way of recognizing the visual markers in a robotic tracking system. In this paper, we propose a new method for color segmentation that incorporates a differential evolution algorithm and connected component labeling to autonomously preset the HSV thresholds of visual markers. To evaluate the effectiveness of the proposed algorithm, a ROBOTIS OP2 humanoid robot is used to conduct the experiment, where the five colors most commonly used in visual markers, namely red, purple, blue, yellow, and green, are given for comparison.
Keywords: color segmentation, differential evolution, connected component labeling, humanoid robot
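A rough sketch of the idea is shown below: differential evolution searches over lower/upper HSV bounds, and connected-component labeling of the resulting binary mask is used inside the fitness function. The fitness criterion used by the authors is not stated, so agreement (IoU) with a reference marker mask is assumed here, and the image is synthetic.

```python
# Differential evolution tuning of HSV thresholds, scored via connected-component
# labeling against a reference mask (assumed fitness; illustrative only).
import cv2
import numpy as np
from scipy.optimize import differential_evolution

def segment(hsv, params):
    lo = np.array(params[:3], dtype=np.uint8)
    hi = np.array(params[3:], dtype=np.uint8)
    mask = cv2.inRange(hsv, lo, hi)
    # Keep only the largest connected component to suppress speckle noise.
    n, labels = cv2.connectedComponents(mask)
    if n > 1:
        largest = 1 + int(np.argmax([(labels == i).sum() for i in range(1, n)]))
        mask = np.where(labels == largest, 255, 0).astype(np.uint8)
    return mask

def fitness(params, hsv, reference):
    mask = segment(hsv, params) > 0
    inter = np.logical_and(mask, reference).sum()
    union = np.logical_or(mask, reference).sum() + 1e-9
    return -inter / union                        # DE minimises, so negate the IoU

# Synthetic training frame: a red circular marker on a dark background.
img = np.zeros((120, 160, 3), dtype=np.uint8)
cv2.circle(img, (80, 60), 25, (0, 0, 255), -1)   # red in BGR
reference = img[:, :, 2] > 0                      # ground-truth marker mask
hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)

# Search space: lower and upper bounds for H, S and V.
bounds = [(0, 179), (0, 255), (0, 255), (0, 179), (0, 255), (0, 255)]
result = differential_evolution(fitness, bounds, args=(hsv, reference),
                                maxiter=40, popsize=15, seed=0)
print("Best HSV thresholds (H_lo, S_lo, V_lo, H_hi, S_hi, V_hi):",
      np.round(result.x).astype(int))
```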
Procedia PDF Downloads 605
29955 Exponential Spline Solution for Singularly Perturbed Boundary Value Problems with an Uncertain-But-Bounded Parameter
Authors: Waheed Zahra, Mohamed El-Beltagy, Ashraf El Mhlawy, Reda Elkhadrawy
Abstract:
In this paper, we consider singularly perturbed reaction-diffusion boundary value problems which contain a small uncertain perturbation parameter. To solve these problems, we propose a numerical method based on an exponential spline and a Shishkin mesh discretization. While the interval analysis principle is used to deal with the uncertain parameter, sensitivity analysis has been conducted using different methods. Numerical results are provided to show the applicability and efficiency of our method, which exhibits ε-uniform convergence of almost second order.
Keywords: singular perturbation problem, Shishkin mesh, two small parameters, exponential spline, interval analysis, sensitivity analysis
Procedia PDF Downloads 275
29954 Evaluation of Real-Time Background Subtraction Technique for Moving Object Detection Using Fast-Independent Component Analysis
Authors: Naoum Abderrahmane, Boumehed Meriem, Alshaqaqi Belal
Abstract:
Background subtraction is a widely used technique for detecting moving objects in video surveillance by extracting the foreground objects from a reference background image. There are many challenges in designing a good background subtraction algorithm, such as changes in illumination, dynamic backgrounds (e.g., swinging leaves, rain, snow), and changes in the background itself, for example, the moving and stopping of vehicles. In this paper, we propose an efficient and accurate background subtraction method for moving object detection in video surveillance. The main idea is to use a developed fast independent component analysis (ICA) algorithm to separate background, noise, and foreground masks from an image sequence in practical environments. The fast-ICA algorithm is adapted and adjusted with a matrix calculation and a search for an optimal non-quadratic function so as to be faster and more robust. Moreover, in order to estimate the de-mixing matrix and the de-noising de-mixing matrix parameters, we propose to convert all images to the YCrCb color space, where the luma component Y (brightness of the color) gives suitable results. The proposed technique has been verified on the publicly available datasets CDnet 2012 and CDnet 2014, and experimental results show that our algorithm can detect moving objects competently and accurately under challenging conditions compared to other methods in the literature, in terms of quantitative and qualitative evaluations, at a real-time frame rate.
Keywords: background subtraction, moving object detection, fast-ICA, de-mixing matrix
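The modified fast-ICA used by the authors is not reproduced here; the sketch below only illustrates, on synthetic luma (Y-channel) data, the general idea of treating a reference background image and the current frame as two mixed observations and letting ICA separate out a foreground-like component (scikit-learn's FastICA is used as a stand-in).

```python
# Rough sketch of ICA-based background subtraction on the Y channel. Frames are
# synthetic; thresholding and source selection are simple illustrative heuristics.
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(0)
h, w = 60, 80
background = rng.uniform(80, 120, size=(h, w))   # static background (Y channel)
frame = background.copy()
frame[20:35, 30:50] += 90.0                       # a bright moving object

# Treat the background image and the current frame as two mixed observations.
X = np.vstack([background.ravel(), frame.ravel()])
ica = FastICA(n_components=2, random_state=0, max_iter=1000)
S = ica.fit_transform(X.T).T                      # two separated sources

# One source tracks the background, the other concentrates the foreground;
# pick the one least correlated with the background image.
corr = [abs(np.corrcoef(s, background.ravel())[0, 1]) for s in S]
foreground = S[int(np.argmin(corr))].reshape(h, w)
mask = np.abs(foreground) > 3 * np.abs(foreground).std()
print("Foreground pixels detected:", int(mask.sum()))
```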
Procedia PDF Downloads 97
29953 Improved Pattern Matching Applied to Surface Mounting Devices Components Localization on Automated Optical Inspection
Authors: Pedro M. A. Vitoriano, Tito. G. Amaral
Abstract:
Automated Optical Inspection (AOI) systems are commonly used in Printed Circuit Board (PCB) manufacturing. The use of this technology has proven highly efficient for process improvements and quality achievements. The correct extraction of the component for subsequent analysis is a critical step of the AOI process. Nowadays, the pattern matching algorithm is commonly used, although this algorithm requires extensive calculations and is time-consuming. This paper presents an improved algorithm for the component localization process, with the capability of implementation in a parallel execution system.
Keywords: AOI, automated optical inspection, SMD, surface mounting devices, pattern matching, parallel execution
Procedia PDF Downloads 300
29952 Concept, Design and Implementation of Power System Component Simulator Based on Thyristor Controlled Transformer and Power Converter
Authors: B. Kędra, R. Małkowski
Abstract:
This paper presents information on the Power System Component Simulator, a device designed for the LINTE^2 laboratory owned by Gdansk University of Technology in Poland. We first provide introductory information on the Power System Component Simulator and its capabilities. Then, the concept of the unit is presented. Requirements for the unit are described, and the proposed and introduced functions are listed. Implementation details are given. The hardware structure is presented and described. Information about the communication interface used, the data maintenance and storage solution, and the Simulink Real-Time features employed is presented. A list and description of all measurements is provided. The potential for laboratory setup modifications is evaluated. Lastly, the results of experiments performed using the Power System Component Simulator are presented. These include simulation of under-frequency load shedding, frequency- and voltage-dependent characteristics of groups of load units, and time characteristics of groups of different load units in a chosen area.
Keywords: power converter, Simulink Real-Time, Matlab, load, tap controller
Procedia PDF Downloads 242
29951 Evidence-Based Investigation of the Phonology of Nigerian Instant Messaging
Authors: Emmanuel Uba, Lily Chimuanya, Maryam Tar
Abstract:
Orthographic engineering is no longer the preserve of the Short Messaging Service (SMS), which is characterised by limited space. Such stylistic creativity or deviation is fast creeping into real-time messaging, popularly known as Instant Messaging (IM), despite the large number of characters allowed. This occurs at various linguistic levels: phonology, morphology, syntax, etc. Nigerians are not immune to this linguistic stylisation. This study investigates the phonological and meta-phonological conventions of messages sent and received via WhatsApp by Nigerian graduates. This is an ontological study of 250 instant messages collected from 98 graduates from different ethnic groups in Nigeria. The selection and analysis of the messages are based on the figure and ground principle. The results reveal the use of accent stylisation, phoneme substitution, blending, consonantisation (a specialised form of deletion targeting vowels), numerophony (using a figure/number, usually 1-10, to represent a word or syllable that has the same sound) and phonetic respelling in the IMs sent by Nigerians. The study confirms the existence of linguistic creativity.
Keywords: figure and ground principle, instant messaging, linguistic stylisation, meta-phonology
Procedia PDF Downloads 397
29950 Vibration Analysis of Power Lines with Moving Dampers
Authors: Mohammad Bukhari, Oumar Barry
Abstract:
In order to reduce the Aeolian vibration of overhead transmission lines, a Stockbridge damper is usually attached. The efficiency of the Stockbridge damper depends on its location on the conductor and its resonant frequencies. When the Stockbridge damper is located on a vibration node, it becomes inefficient. Hence, the static damper should be replaced by a dynamic one. In the present study, a proposed dynamic absorber for transmission lines is studied. Hamilton's principle is used to derive the governing equations, and then the system of ordinary differential equations is solved numerically. Parametric studies are conducted to determine how certain parameters affect the performance of the absorber. The results demonstrate that replacing the static absorber by a dynamic one enhances the absorber performance over a wider range of frequencies. The results also indicate that the maximum displacement decreases as the absorber speed and the forcing frequency increase. However, this reduction in maximum displacement is accompanied by an increase in the steady-state vibration displacement. It is also indicated that the energy dissipation in the moving absorber covers a higher range of frequencies.
Keywords: absorber performance, Aeolian vibration, Hamilton's principle, Stockbridge damper
Procedia PDF Downloads 270
29949 Combining Chiller and Variable Frequency Drives
Authors: Nasir Khalid, S. Thirumalaichelvam
Abstract:
In most buildings, according to the US Department of Energy Data Book, the electrical consumption attributable to the centralized heating, ventilation and air-conditioning (HVAC) component can be as high as 40-60% of the total electricity consumption for an entire building. To provide efficient energy management for the market today, researchers are finding new ways to develop systems that can reduce the electrical consumption of buildings even further. In this concept paper, a system known as the Intelligent Chiller Energy Efficiency (iCEE) System is being developed that is capable of saving up to 25% of the chiller's existing electrical energy consumption. For variable frequency drives (VFDs), research has found significant savings of up to 30% of electrical energy consumption. Together with VFDs at specific Air Handling Units (AHUs) of the HVAC component, this system will save even more electrical energy. The iCEE System is compatible with any make, model or age of centrifugal, rotary or reciprocating chiller air-conditioning systems which are electrically driven. The iCEE system uses engineering principles of efficiency analysis, enthalpy analysis, heat transfer, mathematical prediction, a modified genetic algorithm, psychrometric analysis, and optimization formulation to achieve true and tangible energy savings for consumers.
Keywords: variable frequency drives, adjustable speed drives, ac drives, chiller energy system
Procedia PDF Downloads 558
29948 A Physical Theory of Information vs. a Mathematical Theory of Communication
Authors: Manouchehr Amiri
Abstract:
This article introduces a general notion of physical bit information that is compatible with the basics of quantum mechanics and incorporates the Shannon entropy as a special case. This notion of physical information leads to the Binary Data Matrix model (BDM), which predicts the basic results of quantum mechanics, general relativity, and black hole thermodynamics. The compatibility of the model with the holographic, information conservation, and Landauer's principles is investigated. After deriving the "Bit Information principle" as a consequence of the BDM, the fundamental equations of Planck, De Broglie, Bekenstein, and mass-energy equivalence are derived.
Keywords: physical theory of information, binary data matrix model, Shannon information theory, bit information principle
Procedia PDF Downloads 174
29947 A Rationale to Describe Ambident Reactivity
Authors: David Ryan, Martin Breugst, Turlough Downes, Peter A. Byrne, Gerard P. McGlacken
Abstract:
An ambident nucleophile is a nucleophile that possesses two or more distinct nucleophilic sites that are linked through resonance and are effectively “in competition” for reaction with an electrophile. Examples include enolates, pyridone anions, and nitrite anions, among many others. Reactions of ambident nucleophiles and electrophiles are extremely prevalent at all levels of organic synthesis. The principle of hard and soft acids and bases (the “HSAB principle”) is most commonly cited in the explanation of selectivities in such reactions. Although this rationale is pervasive in any discussion on ambident reactivity, the HSAB principle has received considerable criticism. As a result, the principle’s supplantation has become an area of active interest in recent years. This project focuses on developing a model for rationalizing ambident reactivity. Presented here is an approach that incorporates computational calculations and experimental kinetic data to construct Gibbs energy profile diagrams. The preferred site of alkylation of nitrite anion with a range of ‘hard’ and ‘soft’ alkylating agents was established by ¹H NMR spectroscopy. Pseudo-first-order rate constants were measured directly by ¹H NMR reaction monitoring, and the corresponding second-order constants and Gibbs energies of activation were derived. These, in combination with computationally derived standard Gibbs energies of reaction, were sufficient to construct Gibbs energy wells. By representing the ambident system as a series of overlapping Gibbs energy wells, a more intuitive picture of ambident reactivity emerges. Here, previously unexplained switches in reactivity in reactions involving closely related electrophiles are elucidated.
Keywords: ambident, Gibbs, nucleophile, rates
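For context, the conventional route from a measured rate constant to a Gibbs energy of activation is the Eyring relation; the abstract does not state the exact treatment used, so the following is shown only as the standard textbook form, not as the authors' specific procedure.

```latex
% Conventional Eyring relation (assumed, not stated explicitly in the abstract):
% a measured rate constant k at temperature T gives the Gibbs energy of activation.
k \;=\; \kappa\,\frac{k_{\mathrm{B}}T}{h}\,
        \exp\!\left(-\frac{\Delta G^{\ddagger}}{RT}\right)
\quad\Longrightarrow\quad
\Delta G^{\ddagger} \;=\; RT\,\ln\!\left(\frac{\kappa\,k_{\mathrm{B}}T}{h\,k}\right),
\qquad \kappa \approx 1 .
```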
Procedia PDF Downloads 86
29946 Simulation and Experimental Study on Dual Dense Medium Fluidization Features of Air Dense Medium Fluidized Bed
Authors: Cheng Sheng, Yuemin Zhao, Chenlong Duan
Abstract:
The air dense medium fluidized bed is a typical application of fluidization techniques for coal particle separation in arid areas, where it is costly to implement wet coal preparation technologies. In the last three decades, the air dense medium fluidized bed, as an efficient dry coal separation technique, has been studied in many aspects, including energy and mass transfer, hydrodynamics, bubbling behaviors, etc. Although numerous studies have been published, the fluidization features, especially dual dense medium fluidization features, have rarely been reported. In dual dense medium fluidized beds, different combinations of dense mediums play a significant role in fluidization quality variation, thus influencing coal separation efficiency. Moreover, to what extent different dense mediums mix, and to what extent the two-component particulate mixture affects the fluidization performance and quality, have remained open questions. The proposed work attempts to reveal the underlying mechanisms of generation and evolution of the two-component particulate mixture in the fluidization process. Based on computational fluid dynamics methods and discrete particle modelling, the movement and evolution of dual dense mediums in an air dense medium fluidized bed have been simulated. Dual dense medium fluidization experiments have been conducted. Electrical capacitance tomography was employed to investigate the distribution of the two-component mixture in the experiments. The underlying mechanisms involving two-component particulate fluidization are expected to be demonstrated through the analysis and comparison of simulation and experimental results.
Keywords: air dense medium fluidized bed, particle separation, computational fluid dynamics, discrete particle modelling
Procedia PDF Downloads 383
29945 Principal Component Analysis Combined Machine Learning Techniques on Pharmaceutical Samples by Laser Induced Breakdown Spectroscopy
Authors: Kemal Efe Eseller, Göktuğ Yazici
Abstract:
Laser-induced breakdown spectroscopy (LIBS) is a rapid optical atomic emission spectroscopy technique used for material identification and analysis, with the advantages of in-situ analysis, elimination of intensive sample preparation, and micro-destructive properties for the material to be tested. LIBS delivers short laser pulses onto the material in order to create a plasma by exciting the material above a certain threshold. The plasma characteristics, which consist of wavelength values and intensity amplitudes, depend on the material and the experiment's environment. In the present work, the spectrum profiles of medicine samples were obtained via LIBS. The medicine datasets include two different concentrations for both paracetamol-based medicines, namely Aferin and Parafon. The spectrum data of the samples were preprocessed by filling outliers based on quartiles, smoothing the spectra to eliminate noise, and normalizing both the wavelength and intensity axes. Statistical information was obtained, and principal component analysis (PCA) was applied to both the preprocessed and raw datasets. The machine learning models were set up with two different train-test splits: 70% training - 30% test and 80% training - 20% test. Cross-validation was preferred to protect the models against overfitting, since the sample amount is small. The machine learning results on the preprocessed and raw datasets were compared for both splits. This is the first time that all supervised machine learning classification algorithms (Decision Trees, Discriminant Analysis, naïve Bayes, Support Vector Machines (SVM), k-NN (k-Nearest Neighbor), Ensemble Learning and Neural Network algorithms) were applied to LIBS data of paracetamol-based pharmaceutical samples and their different concentrations, on preprocessed and raw datasets, in order to observe the effect of preprocessing.
Keywords: machine learning, laser-induced breakdown spectroscopy, medicines, principal component analysis, preprocessing
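A minimal sketch of such a preprocessing + PCA + classifier-comparison pipeline is given below, on synthetic "spectra" (the real LIBS data are not available here); the classifier choices and parameter values are illustrative stand-ins, not the exact models used in the study.

```python
# Illustrative preprocessing (outlier clipping to quartile fences, smoothing,
# intensity normalisation), PCA reduction, and cross-validated classifier comparison.
import numpy as np
from scipy.signal import savgol_filter
from sklearn.decomposition import PCA
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.svm import SVC
from sklearn.neighbors import KNeighborsClassifier
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
n_samples, n_wavelengths = 80, 600
y = rng.integers(0, 2, n_samples)                       # two medicine classes
X = rng.normal(0, 1, (n_samples, n_wavelengths))
X[y == 1, 200:210] += 3.0                               # class-specific emission line

# Preprocessing: clip outliers to the quartile fences, smooth, normalise intensity.
q1, q3 = np.percentile(X, [25, 75], axis=1, keepdims=True)
X = np.clip(X, q1 - 1.5 * (q3 - q1), q3 + 1.5 * (q3 - q1))
X = savgol_filter(X, window_length=11, polyorder=3, axis=1)
X = (X - X.min(1, keepdims=True)) / (X.max(1, keepdims=True) - X.min(1, keepdims=True))

models = {
    "Decision tree": DecisionTreeClassifier(random_state=0),
    "Naive Bayes":   GaussianNB(),
    "SVM":           SVC(),
    "k-NN":          KNeighborsClassifier(n_neighbors=3),
    "Ensemble (RF)": RandomForestClassifier(random_state=0),
}
for name, clf in models.items():
    pipe = make_pipeline(StandardScaler(), PCA(n_components=10), clf)
    acc = cross_val_score(pipe, X, y, cv=5).mean()
    print(f"{name:15s} CV accuracy: {acc:.2f}")
```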
Procedia PDF Downloads 88
29944 Semigroups of Linear Transformations with Fixed Subspaces: Green’s Relations and Ideals
Authors: Yanisa Chaiya, Jintana Sanwong
Abstract:
Let V be a vector space over a field and W a subspace of V. Let Fix(V,W) denote the set of all linear transformations on V that fix all elements of W. In this paper, we show that Fix(V,W) is a semigroup under the composition of maps and describe Green’s relations on this semigroup in terms of images, kernels and the dimensions of subspaces of the quotient space V/W, where V/W = {v+W : v is an element of V} with v+W = {v+w : w is an element of W}. Let dim(U) denote the dimension of a vector space U and Vα = {vα : v is an element of V}, where vα is the image of v under a linear transformation α. For any cardinal number a, let a' = min{b : b > a}. We also show that the ideals of Fix(V,W) are precisely the sets Fix(r) = {α ∊ Fix(V,W) : dim(Vα/W) < r}, where 1 ≤ r ≤ a' and a = dim(V/W). Moreover, we prove that if V is a finite-dimensional vector space, then every ideal of Fix(V,W) is principal.
Keywords: Green’s relations, ideals, linear transformation semigroups, principal ideals
Procedia PDF Downloads 293
29943 Reduction of Process of Evidence in Specific Forms of Criminal Proceeding: Problems and Risks
Authors: Filip Ščerba, Veronika Pochylá
Abstract:
Performing the acts within criminal proceedings usually takes too long, and this phenomenon can be regarded as one of the most burning problems which has plagued criminal justice not only in the Czech Republic but all over Europe for the last few decades. This problem obviously has to be dealt with, and thus the need to tackle this issue has resulted in a trend which is sometimes called Criminal Justice Rationalization, i.e., introducing and enforcing methods supporting an increase in the efficiency of criminal justice in order to make criminal proceedings shorter and administrative procedures easier. This has resulted in the introduction of institutes such as diversions in criminal proceedings or other forms of shortened pre-trial proceedings, which may be used primarily for dealing with less serious crimes. But the institute originally associated with the criminal law of countries belonging to the Anglo-Saxon legal order, where it is frequently called plea bargaining, has also been introduced into the criminal law of many European countries, and it may be applied also in cases of serious crimes. All these special and shortened forms of criminal proceedings are connected with a limited extent of the process of evidence; in fact, some of these specific forms of criminal proceedings are designed for the purpose of simplifying the process of evidence. That is also the reason why some of these procedures are conditioned on the defendant's confession. Main hypothesis: The limited process of evidence also represents a potential conflict with certain fundamental principles upon which criminal proceedings in the Continental legal system are based. (A conflict with the principle of material truth may be considered the most important problem. This principle states that the bodies in criminal proceedings must clarify the facts of the case beyond reasonable doubt to such an extent that a decision can be made; the defendant's confession does not mean that these bodies are freed from the duty to review all the circumstances and facts of the case. Such a principle is typical of criminal law in the Central European region.) Basic methodologies: The paper analyzes this problem of the weakening of the principle of material truth in modern criminal law. The analysis is provided primarily on the basis of Czech criminal law, but other legal regulations are also taken into consideration, and its results may have some relevance for all legal regulations belonging to the Continental legal system, so the paper also offers a comparison with the legal systems of other Central European countries.
Keywords: burden of proof, Central European countries, criminal justice rationalization, criminal proceeding, Czech legislation, Czech Republic, defendant, diversions, evidence, fundamental principles, plea bargaining, pre-trial proceedings, principle of material truth, process of evidence
Procedia PDF Downloads 287
29942 Comparison of Various Policies under Different Maintenance Strategies on a Multi-Component System
Authors: Demet Ozgur-Unluakin, Busenur Turkali, Ayse Karacaorenli
Abstract:
Maintenance strategies can be classified into two types, reactive and proactive, with respect to the timing of failure and maintenance. If the maintenance activity is done after a breakdown, it is called reactive maintenance. On the other hand, proactive maintenance, which is further divided into preventive and predictive maintenance, focuses on maintaining components before a failure occurs in order to prevent expensive halts. Recently, the number of interacting components in a system has increased rapidly, and therefore the structure of systems has become more complex. This situation has made it difficult to make the right maintenance decisions. Hence, determining effective decisions has come to play a significant role. In multi-component systems, many methodologies and strategies can be applied when a component or a system has already broken down, or when it is desired to proactively identify and avoid defects that could lead to future failure. This study focuses on the comparison of various maintenance strategies on a multi-component dynamic system. Components in the system are hidden, although partial observability is available to the decision maker, and they deteriorate over time. Several predefined policies under corrective, preventive and predictive maintenance strategies are considered to minimize the total maintenance cost over a planning horizon. The policies are simulated via Dynamic Bayesian Networks on a multi-component system with different policy parameters and cost scenarios, and their performances are evaluated. Results show that when the difference between the corrective and proactive maintenance costs is low, none of the proactive maintenance policies is significantly better than corrective maintenance. However, when the difference is increased, at least one policy parameter for each proactive maintenance strategy gives a significantly lower cost than corrective maintenance.
Keywords: decision making, dynamic Bayesian networks, maintenance, multi-component systems, reliability
Procedia PDF Downloads 131
29941 Thermal Hydraulic Analysis of Sub-Channels of Pressurized Water Reactors with Hexagonal Array: A Numerical Approach
Authors: Md. Asif Ullah, M. A. R. Sarkar
Abstract:
This paper illustrates 2-D and 3-D simulations of the sub-channels of a Pressurized Water Reactor (PWR) having a hexagonal array of fuel rods. At steady state, the temperature of the outer surface of the fuel rod cladding is kept at about 1200°C. The temperature of this isothermal surface is taken as the boundary condition for the simulation. Water at a temperature of 290°C is given as the coolant inlet to the primary water circuit, which is pressurized up to 157 bar. Turbulent flow of pressurized water is used for heat removal. In the 2-D model, temperature, velocity, pressure and Nusselt number distributions are simulated in a vertical sectional plane through the sub-channels of a hexagonal fuel rod assembly. Temperature, Nusselt number and the Y-component of convective heat flux along a line in this plane near the end of the fuel rods are plotted for different Reynolds numbers. A comparison between the X-component and Y-component of convective heat flux in this vertical plane is analyzed. The hexagonal fuel rod assembly has three types of sub-channels according to geometrical shape, whose boundary conditions also differ. In the 3-D model, temperature, velocity, pressure, Nusselt number and total heat flux magnitude distributions for all three sub-channels are studied for a suitable Reynolds number. A horizontal sectional plane is taken from each of the three sub-channels to study the temperature, velocity, pressure, Nusselt number and convective heat flux distributions in it. Greater values of temperature, Nusselt number and Y-component of convective heat flux are found for greater Reynolds numbers. The X-component of convective heat flux is found to be non-zero near the bottom of the fuel rod and zero near the end of the fuel rod. This indicates that the convective heat transfer occurs entirely along the direction of flow near the outlet. As the length-to-radius ratio of the sub-channels is very high, simulations for a short length of the sub-channels are done for graphical interface advantage. For the simulations, the Turbulent Flow (k-ε) module and Heat Transfer in Fluids (ht) module of COMSOL MULTIPHYSICS 5.0 are used.
Keywords: sub-channels, Reynolds number, Nusselt number, convective heat transfer
Procedia PDF Downloads 361
29940 Predictive Analysis for Big Data: Extension of Classification and Regression Trees Algorithm
Authors: Ameur Abdelkader, Abed Bouarfa Hafida
Abstract:
Since its inception, predictive analysis has revolutionized the IT industry through its robustness and decision-making facilities. It involves the application of a set of data processing techniques and algorithms in order to create predictive models. Its principle is based on finding relationships between the explanatory variables and the predicted variables. Past occurrences are exploited to predict and to derive the unknown outcome. With the advent of big data, many studies have suggested the use of predictive analytics to process and analyze big data. Nevertheless, they have been curbed by the limits of classical methods of predictive analysis in the case of large amounts of data. In fact, because of their volume, their nature (semi-structured or unstructured) and their variety, it is impossible to analyze big data efficiently via classical methods of predictive analysis. The authors attribute this weakness to the fact that predictive analysis algorithms do not allow the parallelization and distribution of computation. In this paper, we propose to extend the predictive analysis algorithm Classification And Regression Trees (CART) in order to adapt it for big data analysis. The major changes to this algorithm are presented, and a version of the extended algorithm is then defined in order to make it applicable to a huge quantity of data.
Keywords: predictive analysis, big data, predictive analysis algorithms, CART algorithm
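The paper's exact extension is not reproduced here, but the part of CART that lends itself most naturally to parallelization is the search for the best split, which can be evaluated independently per feature (and, in a distributed setting, per data partition). The sketch below illustrates that idea only, with synthetic data and an illustrative Gini-impurity criterion.

```python
# Parallel evaluation of candidate CART splits across features, as a toy
# illustration of how the split search can be distributed.
import numpy as np
from concurrent.futures import ProcessPoolExecutor

def gini(y):
    _, counts = np.unique(y, return_counts=True)
    p = counts / counts.sum()
    return 1.0 - np.sum(p ** 2)

def best_split_for_feature(args):
    """Return (weighted impurity, feature index, threshold) for one feature."""
    x, y, j = args
    best = (np.inf, j, None)
    for t in np.unique(x)[:-1]:
        left, right = y[x <= t], y[x > t]
        score = (len(left) * gini(left) + len(right) * gini(right)) / len(y)
        if score < best[0]:
            best = (score, j, t)
    return best

def parallel_best_split(X, y, workers=4):
    tasks = [(X[:, j], y, j) for j in range(X.shape[1])]
    with ProcessPoolExecutor(max_workers=workers) as pool:
        results = list(pool.map(best_split_for_feature, tasks))
    return min(results)        # lowest weighted Gini impurity across all features

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = rng.normal(size=(1000, 6))
    y = (X[:, 2] > 0.3).astype(int)          # the "true" split is on feature 2
    print(parallel_best_split(X, y))
```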
Procedia PDF Downloads 142
29939 Rethinking: Training Needs of Secondary School Teachers in Pakistan
Authors: Sidra Rizwan
Abstract:
The article focuses on the training needs of secondary school teachers related to the knowledge component of instructional planning and strategies, as stated in the National Professional Standards for Teachers in Pakistan. The study aimed to determine the training needs of secondary school teachers regarding different aspects of the knowledge and understanding component of instructional planning and strategies. The target population of the study was secondary school teachers across Pakistan. For this purpose, a sample of 400 secondary school teachers was selected through multistage sampling from all four provinces and the federal capital area. A survey method was adopted to assess the training needs by using a self-reporting tool. The tool helped to gauge the training needs through indirect inventory questions as well as a ranking list in which the respondents themselves prioritized their training areas. The results showed variation between the direct and indirect reporting of the teachers, on the basis of which it was concluded that secondary school teachers need awareness of the knowledge component of instructional planning and strategies in order to redefine their actual training needs. The researcher further identified the training needs of secondary school teachers within each province and the Islamabad Capital Territory, including an analysis of variations between strata. As teachers are considered agents of change, their training according to the professional standards should provide a solid base for "rethinking education".
Keywords: training needs, secondary school teachers, instructional planning & strategies, knowledge & understanding
Procedia PDF Downloads 91
29938 Network Analysis and Sex Prediction Based on a Full Human Brain Connectome
Authors: Oleg Vlasovets, Fabian Schaipp, Christian L. Mueller
Abstract:
We conduct a network analysis and predict the sex of 1000 participants based on the "connectome", i.e., pairwise Pearson's correlations across 436 brain parcels. We solve the non-smooth convex optimization problem known as the Graphical Lasso, where the solution includes a low-rank component. With this solution and a machine learning model for sex prediction, we explain the connectivity patterns between brain parcels and sex.
Keywords: network analysis, neuroscience, machine learning, optimization
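A minimal sketch of such a pipeline on synthetic data is shown below: a sparse inverse-covariance graph over parcels is estimated with the Graphical Lasso, and vectorised pairwise correlations are used as features for sex classification. The low-rank (latent-variable) component mentioned above is not included; scikit-learn's plain GraphicalLasso and a logistic-regression classifier are used as stand-ins, and all sizes are illustrative.

```python
# Graphical Lasso over brain parcels plus connectome-based sex classification,
# on synthetic time series (sizes and coupling are illustrative).
import numpy as np
from sklearn.covariance import GraphicalLasso
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_subjects, n_parcels, n_timepoints = 200, 30, 120

features, labels, all_ts = [], [], []
for s in range(n_subjects):
    sex = s % 2
    ts = rng.normal(size=(n_timepoints, n_parcels))
    ts[:, 1] += (0.8 if sex else 0.2) * ts[:, 0]               # sex-dependent coupling
    corr = np.corrcoef(ts.T)
    features.append(corr[np.triu_indices(n_parcels, k=1)])     # vectorised connectome
    labels.append(sex)
    all_ts.append(ts)

X, y = np.asarray(features), np.asarray(labels)

# (1) Sparse conditional-dependence graph over parcels via the Graphical Lasso.
graph = GraphicalLasso(alpha=0.3).fit(np.vstack(all_ts))
n_edges = int((np.abs(np.triu(graph.precision_, k=1)) > 1e-6).sum())
print("Edges retained in the sparse graph:", n_edges)

# (2) Sex prediction from connectome features with cross-validation.
clf = LogisticRegression(max_iter=5000)
print("Sex-prediction CV accuracy:", cross_val_score(clf, X, y, cv=5).mean().round(2))
```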
Procedia PDF Downloads 149