Search results for: Protein Structure Data.
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 9858

8568 Digital Filters for Hot-Mix Asphalt Complex Modulus Test Data Using Genetic Algorithm Strategies

Authors: Madhav V. Chitturi, Anshu Manik, Kasthurirangan Gopalakrishnan

Abstract:

The dynamic or complex modulus test is considered a mechanistically based laboratory test for reliably characterizing the strength and load resistance of Hot-Mix Asphalt (HMA) mixes used in road construction. In practice, the data collected from these tests are often noisy and somewhat non-sinusoidal, which hampers accurate analysis of the data to obtain engineering insight. The goal of the work presented in this paper is to develop and compare automated evolutionary computational techniques for filtering test noise in the data collected for the HMA complex modulus test. The results showed that the Covariance Matrix Adaptation Evolution Strategy (CMA-ES) approach is computationally efficient for filtering data obtained from the HMA complex modulus test.
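
The paper's CMA-ES filter is not reproduced in the abstract; the sketch below only illustrates the underlying idea — evolving the parameters of a clean sinusoid to fit noisy modulus data — using a plain (mu, lambda) evolution strategy rather than full CMA-ES. The synthetic signal and all parameter choices are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "complex modulus test" signal: noisy and slightly non-sinusoidal.
t = np.linspace(0.0, 1.0, 500)
signal = (10.0 * np.sin(2 * np.pi * 5.0 * t + 0.3) + 2.0
          + rng.normal(0.0, 1.5, t.size)
          + 0.5 * np.sign(np.sin(2 * np.pi * 15.0 * t)))

def loss(p):
    amp, freq, phase, offset = p
    fit = amp * np.sin(2 * np.pi * freq * t + phase) + offset
    return np.mean((signal - fit) ** 2)

# Simple (mu, lambda) evolution strategy; CMA-ES additionally adapts a full
# covariance matrix for the mutation distribution.
pop, parents = 40, 8
sigma = np.array([2.0, 1.0, 0.5, 1.0])        # per-parameter step sizes
mean = np.array([5.0, 4.0, 0.0, 0.0])         # initial guess
for gen in range(200):
    cand = mean + sigma * rng.normal(size=(pop, 4))
    cand = cand[np.argsort([loss(c) for c in cand])[:parents]]
    mean = cand.mean(axis=0)
    sigma *= 0.98                             # crude step-size decay

print("fitted (amp, freq, phase, offset):", np.round(mean, 3))
```

The fitted sinusoid then serves as the "filtered" version of the noisy test record.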

Keywords: HMA, dynamic modulus, GA, evolutionary computation.

8567 Monitoring the Drying and Grinding Process during Production of Celitement through a NIR-Spectroscopy Based Approach

Authors: Carolin Lutz, Jörg Matthes, Patrick Waibel, Ulrich Precht, Krassimir Garbev, Günter Beuchle, Uwe Schweike, Peter Stemmermann, Hubert B. Keller

Abstract:

Online measurement of product quality is a challenging task in cement production, especially in the production of Celitement, a novel environmentally friendly hydraulic binder. The mineralogy and chemical composition of clinker in ordinary Portland cement production are measured by X-ray diffraction (XRD) and X-ray fluorescence (XRF), where only crystalline constituents can be detected. But only a small part of the Celitement components can be measured via XRD, because most constituents have an amorphous structure. This paper describes the development of algorithms suitable for online monitoring of the final processing step of Celitement based on NIR data. For calibration, intermediate products were dried at different temperatures and ground for variable durations. The products were analyzed using XRD and thermogravimetric analysis together with NIR spectroscopy to investigate the dependency between the drying and milling processes on one side and the NIR signal on the other. As a result, different characteristic parameters have been defined. A short overview of the Celitement process and the challenging tasks of online measurement and evaluation of product quality is presented. Subsequently, methods for the systematic development of near-infrared calibration models and the determination of the final calibration model are introduced. The application of the model to experimental data illustrates that NIR spectroscopy allows a quick and sufficiently exact determination of crucial process parameters.
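
The abstract does not name the regression form of the calibration model. Partial least squares (PLS) is the customary choice for NIR calibration, so a hedged sketch along those lines is shown below, with synthetic spectra and a hypothetical moisture target standing in for the real measurements.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)

# Stand-ins for measured data: 60 samples x 700 NIR wavelengths, plus a
# process parameter to calibrate against (e.g., residual moisture).
spectra = rng.normal(size=(60, 700)).cumsum(axis=1)   # smooth, spectrum-like curves
moisture = spectra[:, 350] * 0.01 + rng.normal(0, 0.05, 60)

# Choose the number of latent variables by cross-validation, then fit
# the final calibration model on all calibration samples.
scores = {n: cross_val_score(PLSRegression(n_components=n), spectra, moisture,
                             scoring="neg_mean_squared_error", cv=5).mean()
          for n in range(1, 11)}
best_n = max(scores, key=scores.get)
model = PLSRegression(n_components=best_n).fit(spectra, moisture)
print(f"latent variables: {best_n}, training R^2: {model.score(spectra, moisture):.3f}")
```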

Keywords: Calibration model, Celitement, cementitious material, NIR spectroscopy.

8566 A Survey on Quasi-Likelihood Estimation Approaches for Longitudinal Set-ups

Authors: Naushad Mamode Khan

Abstract:

The Com-Poisson (CMP) model is one of the most popular discrete generalized linear models (GLMs), handling equi-, over-, and under-dispersed data. In the longitudinal context, an integer-valued autoregressive (INAR(1)) process that incorporates covariate specification has been developed to model longitudinal CMP counts. However, the joint CMP likelihood function is difficult to specify and thus restricts likelihood-based estimation. The joint generalized quasi-likelihood approach (GQL-I) was considered instead, but it is rather computationally intensive and may even fail to estimate the regression effects due to a complex and frequently ill-conditioned covariance structure. This paper proposes a new GQL approach for estimating the regression parameters (GQL-III) that is based on a single score vector representation. The performance of GQL-III is compared with GQL-I and separate marginal GQLs (GQL-II) through simulation experiments; it is shown to yield estimates as efficient as GQL-I while being far more computationally stable.
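
GQL-III itself is defined in the paper; for orientation, the generic GQL/GEE-style scoring iteration that the family builds on can be sketched for the simpler case of Poisson marginals with an AR(1) working correlation. All data below are simulated and the correlation parameter is held fixed; the CMP-specific score of GQL-III is not reproduced.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy longitudinal counts: K subjects, T repeated measures, intercept + one covariate.
K, T = 200, 4
X = np.stack([np.ones((K, T)), rng.normal(size=(K, T))], axis=-1)   # (K, T, 2)
beta_true = np.array([0.2, 0.5])
y = rng.poisson(np.exp(X @ beta_true))

def ar1_corr(T, rho):
    idx = np.arange(T)
    return rho ** np.abs(idx[:, None] - idx[None, :])

beta, rho = np.zeros(2), 0.3          # rho would be moment-estimated in practice
for _ in range(25):
    R = ar1_corr(T, rho)
    num, den = np.zeros(2), np.zeros((2, 2))
    for i in range(K):
        mu = np.exp(X[i] @ beta)
        D = mu[:, None] * X[i]                    # derivative of mu w.r.t. beta
        A = np.diag(np.sqrt(mu))
        Sinv = np.linalg.inv(A @ R @ A)           # inverse working covariance
        num += D.T @ Sinv @ (y[i] - mu)
        den += D.T @ Sinv @ D
    beta = beta + np.linalg.solve(den, num)       # scoring update

print("GQL estimate:", np.round(beta, 3), "true:", beta_true)
```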

Keywords: Longitudinal, Com-Poisson, ill-conditioned, INAR(1), GLMs, GQL.

8565 The Feasibility of Augmenting an Augmented Reality Image Card on a Quick Response Code

Authors: Alfred Chen, Shr Yu Lu, Cong Seng Hong, Yur-June Wang

Abstract:

This research studies the feasibility of augmenting an augmented reality (AR) image card on a Quick Response (QR) code. The authors have developed a new visual tag, which contains a QR code and an augmented AR image card. The new visual tag allows reading both the revealed data of the QR code and the instant data from the AR image card. Furthermore, a handheld communicating device is used to read and decode the new visual tag, and the concealed data of the tag can then be revealed and read through its visual display. In general, the QR code is designed to store the corresponding data or, as a key, to access the corresponding data from a server through the Internet. The data revealed from the QR code are represented as text. The AR image card is normally designed to store the corresponding data in 3D, animation, or video form. By exploiting the QR code's high fault tolerance, the new visual tag can provide access to both types of data through a handheld communicating device. The new visual tag thus has the advantage of carrying much more data than a standalone QR code or AR image card. The major findings of this research are: 1) the most efficient area for the designed AR card augmented on the QR code is a coverage of 9% of the total new visual tag's area, and 2) the best location for the augmented AR image card on the QR code is the bottom-right corner of the new visual tag.
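
The 9% finding rests on QR error correction: level H tolerates roughly 30% codeword damage, leaving room to paste an image over part of the symbol. A sketch of composing such a tag with the `qrcode` and Pillow libraries follows; the payload, file name, and sizes are illustrative assumptions, not the authors' design.

```python
import qrcode
from PIL import Image

# Level-H error correction tolerates up to ~30% damaged codewords, which is
# what allows an image to be pasted over part of the symbol.
qr = qrcode.QRCode(error_correction=qrcode.constants.ERROR_CORRECT_H,
                   box_size=10, border=4)
qr.add_data("https://example.org/product/12345")   # illustrative payload
qr.make(fit=True)
tag = qr.make_image(fill_color="black", back_color="white").convert("RGB")

# Scale the AR marker card to ~9% of the tag's area (side = sqrt(0.09) = 0.3)
# and paste it in the bottom-right corner, per the paper's findings.
card = Image.open("ar_card.png").convert("RGB")    # illustrative file name
side = int(0.3 * tag.width)
card = card.resize((side, side))
tag.paste(card, (tag.width - side, tag.height - side))
tag.save("visual_tag.png")
```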

Keywords: Augmented reality, QR code, visual tag, handheld communicating device.

8564 Clustering of Variables Based on a Probabilistic Approach Defined on the Hypersphere

Authors: Paulo Gomes, Adelaide Figueiredo

Abstract:

We consider n individuals described by p standardized variables, represented by points on the surface of the unit hypersphere S^(n-1). For a given choice of n individuals, we suppose that the set of observed variables comes from a mixture of bipolar Watson distributions defined on the hypersphere. The EM and Dynamic Clusters algorithms are used for the identification of such a mixture. We obtain estimates of the parameters of each Watson component and then a partition of the set of variables into homogeneous groups. Additionally, we present a factor analysis model in which the unobservable factors are just the maximum likelihood estimators of the Watson directional parameters, namely the first principal component of the data matrix associated with each previously identified group. Such an alternative model yields directly interpretable solutions (simple structure), avoiding factor rotations.
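
A minimal EM sketch for a two-component bipolar Watson mixture is given below. For brevity the concentration parameter is shared and held fixed (its exact update involves a ratio of Kummer functions), which also lets the normalizing constant cancel out of the responsibilities; the data are synthetic stand-ins for standardized variables on the hypersphere.

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy data in the paper's setting: variables are unit vectors on S^(n-1).
# Here, 2 x 40 "variables" scatter around two random axes (illustrative).
n = 50                                                  # number of individuals
axes = rng.normal(size=(2, n))
axes /= np.linalg.norm(axes, axis=1, keepdims=True)
X = np.concatenate([a + 0.4 * rng.normal(size=(40, n)) for a in axes])
X /= np.linalg.norm(X, axis=1, keepdims=True)

kappa = 8.0                                             # shared, fixed concentration
weights = np.array([0.5, 0.5])
mu = X[rng.choice(len(X), 2, replace=False)]            # initial directions
for _ in range(50):
    # E-step: bipolar Watson responsibilities r_ij ~ pi_j * exp(kappa*(mu_j'x_i)^2)
    logr = np.log(weights) + kappa * (X @ mu.T) ** 2
    r = np.exp(logr - logr.max(axis=1, keepdims=True))
    r /= r.sum(axis=1, keepdims=True)
    # M-step: mu_j is the dominant eigenvector of the weighted scatter matrix,
    # mirroring the link between factors and first principal components.
    for j in range(2):
        scatter = (X * r[:, [j]]).T @ X
        mu[j] = np.linalg.eigh(scatter)[1][:, -1]
    weights = r.mean(axis=0)

print("group sizes:", np.bincount(r.argmax(axis=1)), "weights:", np.round(weights, 2))
```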

Keywords: Dynamic Clusters algorithm, EM algorithm, Factor analysis model, Hierarchical Clustering, Watson distribution.

8563 A Competitive Replica Placement Methodology for Ad Hoc Networks

Authors: Samee Ullah Khan, C. Ardil

Abstract:

In this paper, a mathematical model for data object replication in ad hoc networks is formulated. The derived model is general, flexible and adaptable to various applications in ad hoc networks. We propose a game-theoretical technique in which players (mobile hosts) continuously compete in a non-cooperative environment to improve data accessibility by replicating data objects. The technique incorporates the access frequency from mobile hosts to each data object, the status of network connectivity, and communication costs. The proposed technique is extensively evaluated against four well-known ad hoc network replica allocation methods. The experimental results reveal that the proposed approach outperforms the four techniques in both execution time and solution quality.
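
The authors' game-theoretic mechanism is not spelled out in the abstract; the toy best-response loop below is only a sketch in its spirit, where each host greedily replicates whichever object most lowers its own frequency-weighted access cost under a storage budget. All structures and numbers are illustrative assumptions, not the paper's model.

```python
import numpy as np

rng = np.random.default_rng(4)

H, O = 6, 12                                   # mobile hosts, data objects
freq = rng.random((H, O))                      # access frequency host -> object
dist = rng.integers(1, 6, (H, H))              # hop-count style communication cost
np.fill_diagonal(dist, 0)
capacity = 3                                   # replicas each host can store
replicas = {o: {int(rng.integers(H))} for o in range(O)}   # primary copies

def access_cost(h):
    # Expected cost for host h: frequency times distance to the nearest replica.
    return sum(freq[h, o] * min(dist[h, r] for r in replicas[o]) for o in range(O))

# Non-cooperative best-response loop: each player improves only its own cost.
changed = True
while changed:
    changed = False
    for h in range(H):
        held = [o for o in range(O) if h in replicas[o]]
        if len(held) >= capacity:
            continue
        gains = {o: freq[h, o] * min(dist[h, r] for r in replicas[o])
                 for o in range(O) if h not in replicas[o]}
        best = max(gains, key=gains.get)
        if gains[best] > 0:
            replicas[best].add(h)              # replicate locally: its cost drops to 0
            changed = True

print("total cost at equilibrium:", round(sum(access_cost(h) for h in range(H)), 2))
```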

Keywords: Data replication, auctions, static allocation.

8562 Multidimensional Data Mining by Means of Randomly Travelling Hyper-Ellipsoids

Authors: Pavel Y. Tabakov, Kevin Duffy

Abstract:

This study presents a new approach to automatic data clustering and classification in large and complex databases that, at the same time, derives explicit rules describing each cluster. The method works well in both sparse and dense multidimensional data spaces. The members of the data space can be of the same nature or represent different classes. A number of N-dimensional ellipsoids are used to enclose the data clouds. Due to the geometry of an ellipsoid and its free rotation in space, the detection of clusters becomes very efficient. The method is based on genetic algorithms, which are used to optimize the location, orientation and geometric characteristics of the hyper-ellipsoids. The proposed approach can serve as a basis for the development of general knowledge systems for discovering hidden knowledge and unexpected patterns and rules in various large databases.
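
The geometric core of the method — testing whether points fall inside a freely rotated hyper-ellipsoid — can be sketched as below. The GA itself (selection, crossover, mutation over center, rotation, and semi-axes) is standard and omitted, and the fitness shown is an illustrative guess at the enclosure-versus-volume trade-off, not the authors' definition.

```python
import numpy as np

def inside(points, center, rotation, semi_axes):
    """True where sum_k ((R(x - c))_k / a_k)^2 <= 1."""
    z = (points - center) @ rotation.T / semi_axes
    return (z ** 2).sum(axis=1) <= 1.0

def fitness(points, center, rotation, semi_axes):
    # Reward enclosing many points with a small ellipsoid; the volume is
    # proportional to the product of the semi-axes (illustrative trade-off).
    enclosed = inside(points, center, rotation, semi_axes).sum()
    return enclosed / (1.0 + np.prod(semi_axes))

# A GA chromosome would concatenate: center (N values), rotation angles
# (N*(N-1)/2 values, mapped to an orthogonal matrix), and semi-axes (N values).
rng = np.random.default_rng(5)
pts = rng.normal(size=(300, 3)) * np.array([3.0, 1.0, 0.5])
theta = 0.4
R = np.array([[np.cos(theta), -np.sin(theta), 0.0],
              [np.sin(theta),  np.cos(theta), 0.0],
              [0.0, 0.0, 1.0]])
print("fitness:", round(fitness(pts, np.zeros(3), R, np.array([4.0, 2.0, 1.0])), 3))
```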

Keywords: Classification, clustering, data mining, genetic algorithms.

8561 Predictions Using Data Mining and Case-based Reasoning: A Case Study for Retinopathy

Authors: Vimala Balakrishnan, Mohammad R. Shakouri, Hooman Hoodeh, Huck-Soo Loo

Abstract:

Diabetes is one of the most prevalent diseases worldwide, with an increasing number of complications, retinopathy being one of the most common. This paper describes how data mining and case-based reasoning were integrated to predict retinopathy prevalence among diabetes patients in Malaysia. The required knowledge base was built from literature reviews and interviews with medical experts. Data from a total of 140 diabetes patients were used to train the prediction system. A voting mechanism selects the best prediction results from the two techniques used. It has been successfully shown that both data mining and case-based reasoning can be used for retinopathy prediction, with an improved accuracy of 85%.

Keywords: Case-Based Reasoning, Data Mining, Prediction, Retinopathy.

8560 Zero Truncated Strict Arcsine Model

Authors: Y. N. Phang, E. F. Loh

Abstract:

Zero-truncated models are usually used to model count data without zeros; they are the opposite of zero-inflated models. Zero-truncated Poisson and zero-truncated negative binomial models have been discussed and used by researchers in analyzing the abundance of rare species and lengths of hospital stay. Zero-truncated models also serve as the base in developing hurdle models. In this study, we develop a new model, the zero-truncated strict arcsine model, which can be used as an alternative for modeling count data without zeros and with extra variation. Two simulated data sets and one real-life data set are fitted with the developed model. The results show that the model provides a good fit to the data. The maximum likelihood method is used for parameter estimation.
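
The strict arcsine pmf is not given in the abstract, but the zero-truncation and maximum likelihood machinery it plugs into is generic. The sketch below shows that machinery for the zero-truncated Poisson case; the strict arcsine pmf would be substituted at the indicated line to obtain the paper's model.

```python
import numpy as np
from scipy.optimize import minimize_scalar
from scipy.stats import poisson

rng = np.random.default_rng(6)

# Positive-count sample: draw Poisson values and discard zeros (illustrative data).
raw = rng.poisson(1.8, size=2000)
y = raw[raw > 0]

def neg_loglik(lam):
    # Zero-truncated pmf: f(y) / (1 - f(0)); swap in the strict arcsine pmf
    # here to obtain the model developed in the paper.
    logf = poisson.logpmf(y, lam)
    return -(logf - np.log1p(-poisson.pmf(0, lam))).sum()

res = minimize_scalar(neg_loglik, bounds=(1e-6, 10.0), method="bounded")
print(f"MLE of lambda under zero truncation: {res.x:.3f}")
```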

Keywords: Hurdle models, maximum likelihood estimation method, positive count data.

8559 Molecular Dynamics and Circular Dichroism Studies on Aurein 1.2 and Retro Analog

Authors: Safyeh Soufian, Hoosein Naderi-Manesh, Abdoali Alizadeh, Mohammad Nabi Sarbolouki

Abstract:

Aurein 1.2 is a 13-residue amphipathic peptide with antibacterial and anticancer activity. Aurein 1.2 and its retro analog were synthesized to study the activity of the peptides in relation to their structure. The antibacterial tests showed that the retro analog is inactive. Secondary structural analysis by CD spectra indicated that both peptides adopt an alpha-helical conformation in TFE/water. MD simulations were performed on aurein 1.2 and the retro analog in water and TFE in order to analyze the factors involved in the activity difference between the retro and native peptides. The simulation results are discussed and validated in the light of the experimental data from the CD experiments. Both peptides showed relatively similar patterns of hydrophobicity, hydrophilicity, solvent-accessible surface, and solvent-accessible hydrophobic surface. However, they differed in the direction of their dipole moments. Our results further indicate that reversing the amino acid sequence affects flexibility, and the data showed that factors causing structural rigidity may decrease activity. Consequently, our findings suggest that, in a sequence-reversed peptide strategy, one has to pay attention to the role of amino acid order in conferring flexibility and to the role of dipole moment direction in peptide activity.
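
One quantity behind the "similar hydrophobicity pattern" observation is the Eisenberg mean hydrophobic moment, which is provably identical in magnitude for a sequence and its reversal. The sketch below computes it; the aurein 1.2 sequence and the approximate consensus hydrophobicity values are taken from the general literature, not from this paper.

```python
import numpy as np

# Eisenberg-style mean hydrophobic moment for an alpha-helix (100 deg/residue).
# Sequence and (approximate) consensus scale values are literature assumptions.
SEQ = "GLFDIIKKIAESF"                 # aurein 1.2; reverse it for the retro analog
H = {"G": 0.48, "L": 1.06, "F": 1.19, "D": -0.90, "I": 1.38,
     "K": -1.50, "A": 0.62, "E": -0.74, "S": -0.18}

def hydrophobic_moment(seq, delta_deg=100.0):
    angles = np.deg2rad(delta_deg) * np.arange(len(seq))
    h = np.array([H[a] for a in seq])
    return np.hypot((h * np.cos(angles)).sum(), (h * np.sin(angles)).sum()) / len(seq)

print("aurein 1.2   muH:", round(hydrophobic_moment(SEQ), 3))
print("retro analog muH:", round(hydrophobic_moment(SEQ[::-1]), 3))
# The magnitude is the same for the reversed sequence, consistent with the
# abstract's point that flexibility and dipole direction, not amphipathicity,
# account for the activity difference.
```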

Keywords: Antimicrobial peptides, retro, molecular dynamic, circular dichroism.

8558 Li-Fi Technology: Data Transmission through Visible Light

Authors: Shahzad Hassan, Kamran Saeed

Abstract:

People are always in search of Wi-Fi hotspots because Internet access is a major demand nowadays. But like all other technologies, there is still room for improvement in Wi-Fi technology with regard to the speed and quality of connectivity. To address these aspects, Harald Haas, a professor at the University of Edinburgh, proposed what is now known as Li-Fi (Light Fidelity). Li-Fi is a new technology in the field of wireless communication that provides connectivity within a network environment. It is a two-way mode of wireless communication using light. Basically, the data are transmitted through light-emitting diodes, whose intensity can be varied very fast, even faster than the blink of an eye. From the research and experiments conducted so far, it can be said that Li-Fi can increase the speed and reliability of data transfer. This paper pays particular attention to the assessment of the performance of this technology. In other words, it is a 5G technology which uses LEDs as the medium of data transfer. For coverage within buildings, Wi-Fi is good, but Li-Fi can be considered favorable in situations where large amounts of data are to be transferred in areas with electromagnetic interference. It brings data-related qualities such as efficiency, security and large throughput to the table of wireless communication. All in all, Li-Fi is set to be a future phenomenon where the presence of light will mean access to the Internet as well as speedy data transfer.
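
At its simplest, the LED modulation described here is on-off keying: the light is switched faster than the eye can follow and a photodiode thresholds the received intensity. A toy simulation of that round trip, with illustrative channel numbers, is sketched below.

```python
import numpy as np

rng = np.random.default_rng(7)

# On-off keying: each bit drives the LED on or off for `sps` samples.
bits = rng.integers(0, 2, 1000)
sps = 8                                         # samples per bit (oversampling)
tx = np.repeat(bits, sps).astype(float)         # LED drive signal

# Channel: optical gain, ambient-light offset, and receiver noise (illustrative).
rx = 0.6 * tx + 0.2 + rng.normal(0.0, 0.08, tx.size)

# Receiver: average each bit period, then threshold midway between the
# expected "off" (0.2) and "on" (0.8) levels.
energy = rx.reshape(-1, sps).mean(axis=1)
decoded = (energy > 0.2 + 0.6 / 2).astype(int)
print("bit error rate:", np.mean(decoded != bits))
```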

Keywords: Communication, LED, Li-Fi, Wi-Fi.

8557 Authorization of Commercial Communication Satellite Grounds for Promoting Turkish Data Relay System

Authors: Celal Dudak, Aslı Utku, Burak Yağlioğlu

Abstract:

Uninterrupted and continuous satellite communication throughout the whole orbit is becoming more indispensable every day. Data relay systems are developed and built for various high/low data rate information exchanges, such as the TDRSS of the USA and the EDRSS of Europe. In these missions, a couple of task-dedicated communication satellites exist. In this regard, a data relay system for Turkey is defined, exchanging low-data-rate information (i.e., TTC) with Earth-observing LEO satellites through commercial GEO communication satellites all over the world. First, the justification of this approach is given, demonstrating duration enhancements in the link. The preference for RF communication over laser communication is also discussed. Then, the preferred communication GEOs – including TURKSAT4A, which already belongs to Turkey – are given, together with the coverage enhancements obtained through STK simulations and the corresponding link budget. A block diagram of the communication system on the LEO satellite is also given.
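
The arithmetic behind a basic RF link budget (all in dB) can be sketched as follows; every figure below is a placeholder, not a value from the paper's STK study.

```python
import math

def fspl_db(distance_km, freq_ghz):
    """Free-space path loss: 92.45 + 20*log10(d_km) + 20*log10(f_GHz)."""
    return 92.45 + 20 * math.log10(distance_km) + 20 * math.log10(freq_ghz)

# Illustrative LEO -> GEO TTC relay link (all numbers are placeholders).
eirp_dbw = 10.0                 # LEO transmit EIRP
gain_rx_dbi = 38.0              # GEO receive antenna gain
distance_km = 40000.0           # near worst-case LEO-to-GEO slant range
freq_ghz = 14.0                 # Ku-band uplink
losses_db = 2.0                 # pointing, polarization, implementation

c_dbw = eirp_dbw + gain_rx_dbi - fspl_db(distance_km, freq_ghz) - losses_db
print(f"received carrier power: {c_dbw:.1f} dBW")
```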

Keywords: Communication, satellite, data relay system, coverage.

8556 Sex Differences in Thyroid Gland Structure of Rabbits

Authors: Parchami A., Fatahian Dehkordi RF.

Abstract:

The aim of the present investigation was to compare sex differences in the thyroid gland structure of rabbits. Five adult male and five adult female (3.1-3.5 kg body weight) New Zealand white rabbits were used in the experiment. Results showed that at the light microscopic level, there was no sex difference in the microscopic appearance of the thyroid glands. At the electron microscopic level, however, the mitochondria and the microvilli of the follicular cells are more numerous and the Golgi complex is more extensive in male rabbits in comparison to females. Micrometric measurements showed that the volume density of the follicles is higher in males than in females, but the difference is not statistically significant. The volume density of the epithelium and the height of the follicular cells are significantly greater in males than in females, and the reverse is true for the volume density of the interstitium (p<0.05). The volume density of the colloid is also greater in females (66±6) than in males (60±7), but the difference is not statistically significant. It was concluded that sex has limited effects on the histomorphometric properties of the thyroid gland in rabbits.

Keywords: Rabbit, thyroid gland, sex difference, electron microscope.

8555 The Effect of Laser Surface Melting on the Microstructure and Mechanical Properties of Low Carbon Steel

Authors: Suleiman M. Elhamali, K. M. Etmimi, A. Usha

Abstract:

The paper presents the results of microhardness and microstructure studies of low carbon steel surface-melted using a carbon dioxide laser with a wavelength of 10.6 μm and a maximum output power of 2000 W. The processing parameters investigated in this study were the laser power and the scanning rate. After surface melting, two distinct regions formed, corresponding to the melted zone (MZ) and the heat-affected zone (HAZ). The laser-melted region displayed a fine cellular structure, while the HAZ displayed a martensitic or bainitic structure. At the different processing parameters, the original microstructure of this steel (ferrite + pearlite) was transformed into new martensitic and bainitic phases. The fine structure and the high microhardness are evidence of the high cooling rates which follow laser melting. The melt pool and the transformed microstructure in the laser surface-melted region of the carbon steel showed a clear dependence on laser power and scanning rate.

Keywords: Carbon steel, laser surface melting, microstructure, microhardness.

8554 Anomaly Detection in a Data Center with a Reconstruction Method Using a Multi-Autoencoders Model

Authors: Victor Breux, Jérôme Boutet, Alain Goret, Viviane Cattin

Abstract:

Early detection of anomalies in data centers is important to reduce downtimes and the costs of periodic maintenance. However, there is little research on this topic and even less on the fusion of sensor data for the detection of abnormal events. The goal of this paper is to propose a method for anomaly detection in data centers by combining sensor data (temperature, humidity, power) and deep learning models. The model described in the paper uses one autoencoder per sensor to reconstruct the inputs. The autoencoders contain Long Short-Term Memory (LSTM) layers and are trained using the normal samples of the relevant sensors selected by correlation analysis. The difference signal between the input and its reconstruction is then used to classify the samples using feature extraction and a random forest classifier. The data measured by the sensors of a data center between January 2019 and May 2020 are used to train the model, while the data between June 2020 and May 2021 are used to assess it. The performance of the model is assessed a posteriori through the F1-score by comparing detected anomalies with the data center's history. The proposed model outperforms the state-of-the-art reconstruction method, which uses only one autoencoder taking multivariate sequences and detects an anomaly with a threshold on the reconstruction error, with an F1-score of 83.60% compared to 24.16%.
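
A minimal sketch of one per-sensor reconstruction branch — an LSTM autoencoder trained on normal windows, whose reconstruction error then feeds the random forest stage — is shown below with Keras; the window length, layer sizes, and training data are illustrative, not the paper's configuration.

```python
import numpy as np
from tensorflow.keras import layers, models

T = 64                                           # window length (illustrative)

def make_autoencoder():
    inp = layers.Input(shape=(T, 1))             # one autoencoder per sensor
    z = layers.LSTM(32)(inp)                     # encode the window to a vector
    z = layers.RepeatVector(T)(z)
    z = layers.LSTM(32, return_sequences=True)(z)
    out = layers.TimeDistributed(layers.Dense(1))(z)
    model = models.Model(inp, out)
    model.compile(optimizer="adam", loss="mse")
    return model

# Train on normal data only, so anomalous windows reconstruct poorly.
normal = np.sin(np.linspace(0, 60, 4096)).astype("float32").reshape(-1, T, 1)
ae = make_autoencoder()
ae.fit(normal, normal, epochs=5, batch_size=16, verbose=0)

# The per-window reconstruction error is the signal handed to feature
# extraction and the random forest classifier in the paper's pipeline.
recon = ae.predict(normal, verbose=0)
errors = np.mean((normal - recon) ** 2, axis=(1, 2))
print("mean reconstruction error on normal data:", errors.mean())
```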

Keywords: Anomaly detection, autoencoder, data centers, deep learning.

8553 Effect of Enzyme and Heat Pretreatment on Sunflower Oil Recovery Using Aqueous and Hexane Extractions

Authors: E. Danso-Boateng

Abstract:

The effects of enzyme action and heat pretreatment on oil extraction yield from sunflower kernels were analysed using hexane extraction with Soxhlet, and aqueous extraction with an incubator shaker. Ground raw and heat-treated kernels, each with and without Viscozyme treatment, were used. Microscopic images of the kernels were taken to analyse the visible effects of each treatment on the cotyledon cell structure of the kernels. Heat pretreatment before both extraction processes produced enhanced oil extraction yields compared with the control, with steam explosion the most efficient. In hexane extraction, applying a combination of steam explosion and Viscozyme treatment to the kernels before extraction gave the maximum oil extractable in 1 hour, while for aqueous extraction, raw kernels treated with Viscozyme gave the highest oil extraction yield. Remarkable cotyledon cell disruption was evident in kernels treated with Viscozyme, whereas steam explosion and conventional heat-treated kernels showed similar effects.

Keywords: Enzyme-assisted aqueous and hexane extraction, heat pretreatment, sunflower cotyledon structure, sunflower oil extraction.

8552 AnQL: A Query Language for Annotation Documents

Authors: Neerja Bhatnagar, Ben A. Juliano, Renee S. Renner

Abstract:

This paper presents data annotation models at five levels of granularity (database, relation, column, tuple, and cell) of relational data to address the unsuitability of most relational databases for expressing annotations. These models do not require any structural or schematic changes to the underlying database. They are also flexible, extensible, customizable, database-neutral, and platform-independent. This paper also presents an SQL-like query language, named Annotation Query Language (AnQL), for querying annotation documents. AnQL is simple to understand and exploits the wide existing knowledge and skill set of SQL.
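
AnQL and the annotation models are defined in the paper; the granularity idea can be illustrated with a toy store that keys annotations by decreasing scope and falls back from cell level to database level on lookup. The key scheme below is an illustrative guess, not the paper's data model.

```python
from typing import Optional

# Annotation keys of decreasing scope: (database, relation, column, row).
annotations: dict[tuple, str] = {}

def annotate(value: str, db: str, rel: Optional[str] = None,
             col: Optional[str] = None, row: Optional[int] = None):
    annotations[(db, rel, col, row)] = value

def lookup(db: str, rel: str, col: str, row: int) -> Optional[str]:
    # Most specific annotation wins: cell, tuple, column, relation, database.
    for key in [(db, rel, col, row), (db, rel, None, row),
                (db, rel, col, None), (db, rel, None, None),
                (db, None, None, None)]:
        if key in annotations:
            return annotations[key]
    return None

annotate("lab measurements, v2", "biodb")                       # database level
annotate("units: mg/L", "biodb", "samples", "concentration")    # column level
annotate("re-measured 2024-01-10", "biodb", "samples", "concentration", 17)  # cell

print(lookup("biodb", "samples", "concentration", 17))   # cell-level annotation
print(lookup("biodb", "samples", "concentration", 3))    # falls back to column
print(lookup("biodb", "samples", "ph", 3))               # falls back to database
```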

Keywords: Annotation query language, data annotations, data annotation models, semantic data annotations.

8551 Machine Learning-Enabled Classification of Climbing Using Small Data

Authors: Nicholas Milburn, Yu Liang, Dalei Wu

Abstract:

Athlete performance scoring within the climbing domain presents interesting challenges, as the sport does not have an objective way to assign skill. Assessing skill levels within any sport is valuable, as it can be used to mark progress while training and can help an athlete choose appropriate climbs to attempt. Machine learning-based methods are popular for complex problems like this. The available dataset was composed of dynamic force data recorded during climbing; however, it came with challenges such as data scarcity, class imbalance, and temporal heterogeneity. The solutions investigated for these challenges include data augmentation, temporal normalization, conversion of the time series to the spectral domain, and cross-validation strategies. The solutions investigated for the classification problem include the lightweight classifiers KNN and SVM as well as deep learning with a CNN. The best-performing model achieved 80% accuracy. In conclusion, there seems to be enough information within climbing force data to accurately categorize climbers by skill.
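
A sketch of the spectral-domain branch of such a pipeline — converting force time series to FFT magnitude features and cross-validating a lightweight SVM — is given below; the data, labels, and feature choices are synthetic placeholders rather than the study's dataset.

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(8)

# Stand-in for climbing force recordings: 120 attempts x 256 samples, with
# skill classes differing in how "smooth" the force profile is.
n, T = 120, 256
labels = rng.integers(0, 3, n)
noise_scale = np.array([0.3, 0.6, 1.0])[labels][:, None]
series = np.sin(np.linspace(0, 8 * np.pi, T)) + noise_scale * rng.normal(size=(n, T))

# Spectral features: magnitude of the low-frequency FFT bins.
features = np.abs(np.fft.rfft(series, axis=1))[:, :32]

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
scores = cross_val_score(clf, features, labels, cv=5)   # small-data friendly CV
print(f"accuracy: {scores.mean():.2f} +/- {scores.std():.2f}")
```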

Keywords: Classification, climbing, data imbalance, data scarcity, machine learning, time sequence.

8550 An Improved STBC Structure and Transmission Scheme for High Rate and Reliability in OFDMA Cooperative Communication

Authors: Hyoung-Muk Lim, Won-Jun Choi, Jae-Seon Yoon, Hyoung-Kyu Song

Abstract:

Space-time block coding (STBC) has been studied to achieve full diversity and full rate in multiple-input multiple-output (MIMO) systems. Achieving full rate is difficult in cooperative communications, because each user consumes time slots transmitting information in the cooperation phase. Combining MIMO systems with cooperative communications has therefore been researched for full diversity and full rate. In an orthogonal frequency division multiple access (OFDMA) system, an alternative is for each user to share their allocated subchannels, instead of using a MIMO system, to improve the transmission rate. In this paper, a decode-and-forward (DF) based cooperative communication scheme is proposed. The proposed scheme improves transmission rate and reliability in the multipath fading channel of the OFDMA uplink through a modified STBC structure and subchannel sharing.
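
The canonical full-rate, full-diversity STBC for two transmit branches is the Alamouti code, whose encode-and-combine step over a flat-fading channel is sketched below; schemes like the proposed one modify this building block for the cooperative OFDMA setting. The symbols and channel values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(9)

# Two QPSK symbols sent over two time slots from two (virtual) antennas.
s = (rng.choice([-1, 1], 2) + 1j * rng.choice([-1, 1], 2)) / np.sqrt(2)
h = (rng.normal(size=2) + 1j * rng.normal(size=2)) / np.sqrt(2)   # flat fading
n = 0.05 * (rng.normal(size=2) + 1j * rng.normal(size=2))

# Alamouti encoding: slot 1 sends (s1, s2); slot 2 sends (-s2*, s1*).
r1 = h[0] * s[0] + h[1] * s[1] + n[0]
r2 = -h[0] * np.conj(s[1]) + h[1] * np.conj(s[0]) + n[1]

# Linear combining recovers each symbol with diversity gain |h1|^2 + |h2|^2.
g = np.abs(h[0]) ** 2 + np.abs(h[1]) ** 2
s1_hat = (np.conj(h[0]) * r1 + h[1] * np.conj(r2)) / g
s2_hat = (np.conj(h[1]) * r1 - h[0] * np.conj(r2)) / g
print(np.round(s, 3), "->", np.round(np.array([s1_hat, s2_hat]), 3))
```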

Keywords: Cooperation, improved rate, OFDMA, STBC.

8549 FCNN-MR: A Parallel Instance Selection Method Based on Fast Condensed Nearest Neighbor Rule

Authors: Lu Si, Jie Yu, Shasha Li, Jun Ma, Lei Luo, Qingbo Wu, Yongqi Ma, Zhengji Liu

Abstract:

Instance selection (IS) techniques are used to reduce data size and thereby improve the performance of data mining methods. Recently, to process very large data sets, several proposed methods divide the training set into disjoint subsets and apply IS algorithms independently to each subset. In this paper, we analyze the limitations of these methods and give our viewpoint on how to divide and conquer in the IS procedure. Then, based on the fast condensed nearest neighbor (FCNN) rule, we propose an instance selection method for large data sets built on the MapReduce framework. Besides ensuring prediction accuracy and reduction rate, it has two desirable properties: first, it reduces the workload of the aggregation node; second, and most important, it produces the same result as the sequential version, which other parallel methods cannot achieve. We evaluate the performance of FCNN-MR on one small data set and two large data sets. The experimental results show that it is effective and practical.
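
FCNN is a fast variant of Hart's condensed nearest neighbor rule; the basic rule it accelerates is sketched below. In the paper's MapReduce setting such a reduction would run per split, with the condensed subsets then merged — a detail not reproduced here.

```python
import numpy as np

def condensed_nn(X, y, rng):
    """Hart-style condensed nearest neighbor reduction (illustrative)."""
    keep = [int(rng.integers(len(X)))]            # seed with one random instance
    changed = True
    while changed:
        changed = False
        for i in rng.permutation(len(X)):
            d = np.linalg.norm(X[keep] - X[i], axis=1)
            if y[keep][np.argmin(d)] != y[i]:     # misclassified by current subset
                keep.append(int(i))               # so add it to the subset
                changed = True
    return np.array(keep)

rng = np.random.default_rng(10)
X = np.concatenate([rng.normal(0, 1, (300, 2)), rng.normal(4, 1, (300, 2))])
y = np.repeat([0, 1], 300)
kept = condensed_nn(X, y, rng)
print(f"reduction: {len(X)} -> {len(kept)} instances")
```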

Keywords: Instance selection, data reduction, MapReduce, kNN.

8548 Mapping of Solar Radiation Anomalies Based on Climate Change

Authors: Elison Eduardo Jardim Bierhals, Claudineia Brazil, Francisco Pereira, Elton Rossini

Abstract:

The use of alternative energy sources to meet energy demand reduces environmental damage. To diversify the energy matrix and to minimize global warming, solar energy is gaining ground as an important source of renewable energy, whose potential depends on the climatic conditions of the region. Brazil presents great solar potential for the generation of electric energy, so knowledge of solar radiation and its characteristics is fundamental for the study of energy use. For these reasons, this article aims to verify the climatic variability corresponding to variations in solar radiation anomalies under climate change scenarios. The data used in this research are part of the Coupled Model Intercomparison Project, Phase 5 (CMIP5), which contributed to the preparation of the fifth IPCC assessment report (AR5). The solar radiation data were extracted from the Australian Community Climate and Earth System Simulator (ACCESS) model using the RCP 4.5 and RCP 8.5 scenarios, which represent an intermediate and a pessimistic framework respectively, the latter being the most worrisome in all cases. In order to use solar radiation as an energy source in a given location and/or region, it is important first to determine its availability, thus justifying the importance of this study. The results for the 75-year period (2026-2100) under the pessimistic scenario indicate a drop in solar radiation of approximately 12% in the eastern region of Rio Grande do Sul. The factors that influence the pessimistic prospects of this scenario should be closely observed by the responsible authorities, since they can affect the potential for producing electricity from solar radiation.
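
Once the model output is in hand, the anomaly computation itself is simple arithmetic: average the scenario years, average a baseline period, and express the difference relative to the baseline. A sketch with random numbers standing in for the ACCESS radiation fields follows; the periods and variable name are illustrative.

```python
import numpy as np

rng = np.random.default_rng(11)

# Stand-in for gridded annual-mean surface solar radiation (year, lat, lon).
years = np.arange(2006, 2101)
rsds = 180.0 + rng.normal(0.0, 5.0, (years.size, 20, 30))   # W/m^2, illustrative

baseline = rsds[(years >= 2006) & (years <= 2025)].mean(axis=0)
future = rsds[(years >= 2026) & (years <= 2100)].mean(axis=0)

# Relative anomaly per grid cell, in percent (negative = drop in radiation).
anomaly_pct = 100.0 * (future - baseline) / baseline
print("min/max anomaly over the grid: "
      f"{anomaly_pct.min():.1f}% / {anomaly_pct.max():.1f}%")
```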

Keywords: Climate change, solar radiation, energy utilization.

8547 Optimization of Real Time Measured Data Transmission, Given the Amount of Data Transmitted

Authors: Michal Kopcek, Tomas Skulavik, Michal Kebisek, Gabriela Krizanova

Abstract:

The operation of nuclear power plants involves continuous monitoring of the environment in their area. This monitoring is performed using a complex data acquisition system, which collects status information about the system itself and the values of many important physical variables, e.g., temperature, humidity and dose rate. This paper describes a proposal for, and optimization of, the communication that takes place in a teledosimetric system between the central control server, responsible for data processing and storage, and the decentralized measuring stations, which measure the physical variables. Analyses of the ongoing communication were performed, and consequently the system architecture and communication were optimized.

Keywords: Communication protocol, transmission optimization, data acquisition.

8546 The Effect of Geometrical Ratio and Nanoparticle Reinforcement on the Properties of Al-Based Nanocomposite Hollow Sphere Structures

Authors: M. Amirjan

Abstract:

In the present study, the properties of Al-Al2O3 nanocomposite hollow sphere structures were investigated. To this end, Al-based nanocomposite hollow spheres with different amounts of nano-alumina reinforcement (0-10 wt.%) and different ratios of thickness to diameter (t/D: 0.06-0.3) were prepared via a powder metallurgy method. The effect of these parameters on the physical and quasi-static mechanical properties of the prepared (open/closed cell) structures, such as density, hardness, strength, and energy absorption, was then studied. It was found that as the t/D ratio increases, the relative density, compressive strength and energy absorption increase. The highest values of strength and energy absorption, 22.88 MPa and 13.24 MJ/m3 respectively, were obtained for the specimen with 5 wt.% nanoparticle reinforcement and t/D of 0.3 (t=1 mm, D=400μm). The moderate specific strength of the composites prepared in the present study showed good consistency with the properties of other low carbon steel composites with a similar structure.

Keywords: Hollow sphere structure foam, nanocomposite, t/D ratio (thickness/diameter), powder metallurgy.

8545 Behavior of the Masonry Infill in Structures Subjected to the Horizontal Loads

Authors: Nawel Mezigheche, Abdelhacine Gouasmia, Allaeddine Athmani, Mouloud Merzoud

Abstract:

Masonry infill walls are inevitable in self-supporting structures, but their contribution to the resistance to earthquake loads is generally neglected in structural analyses. The principal aim of this work, through a numerical study of the behavior of masonry infill walls in structures subjected to horizontal loads, is to propose, by finite element numerical modeling, a more reliable approach, faster and closer to reality. In this study, a 3D finite element analysis was developed to study the behavior of masonry infill walls in structures subjected to horizontal loads, using ABAQUS as the finite element software. It is observed that the more rigid the masonry infill, the more rigid the structure; we can thus conclude that the infill brings an additional rigidity to the structure that should not be neglected. It is also observed that when the frame is subjected to horizontal loads, it separates from the infill at the tensioned diagonal.

Keywords: Finite element, masonry infill walls, rigidity of the masonry, tensioned diagonal.

8544 Development of Mechanisms of Value Creation and Risk Management Organization in the Conditions of Transformation of the Economy of Russia

Authors: Mikhail V. Khachaturyan, Inga A. Koryagina, Eugenia V. Klicheva

Abstract:

In modern conditions, the scientific analysis of problems in developing mechanisms of value creation and risk management acquires special relevance. The formation of economic knowledge has resulted in the constant analysis of consumer behavior by all players in national and world markets. Developing effective mechanisms for demand analysis, which is crucial for defining the consumer characteristics of future production, and for managing the risks connected with the development of this production, is the main objective of control systems in modern conditions. The modern period of economic development is characterized by a high level of globalization of business and rigid competition. At the same time, a considerable share of the cost of new products and services is of a non-material, intellectual nature. In Russia, small innovative firms are currently developing most successfully. Such firms, through their unique technologies and new approaches to process management, which form the basis of their intellectual capital, can show flexibility and succeed in the market. As a rule, such enterprises should have a highly flexible structure, free of rigid schemes of subordination, and offering essentially new incentives for including personnel in innovative activity. Such structures, as well as a new approach to management, can be constructed based on value-oriented management, which is directed toward a gradual change in the consciousness of personnel and the formation of groups of adherents engaged in solving shared innovative tasks. At the same time, value changes can gradually capture not only the staff of an innovative firm, but also the structure of its corporate partners. The introduction of new technologies is a significant factor contributing to the development of new value imperatives and to accelerating change in the value system of the organization. This is due to the fact that new technologies change the internal environment of the organization in a way that makes the old system of values inefficient in the new conditions. The introduction of new technologies often demands changes in the structure of employee interaction and training in new principles of work. During the introduction of new technologies and the accompanying change in the value system, the structure of the management of the organization's values changes as well. This is due to the need to attract more staff to justify and consolidate the new value system and to bring their views into the motivational potential of the new value system of the organization.

Keywords: Value, risk, creation, problems, organization.

8543 Empirical Process Monitoring Via Chemometric Analysis of Partially Unbalanced Data

Authors: Hyun-Woo Cho

Abstract:

Real-time or in-line process monitoring frameworks are designed to give early warnings of a fault along with meaningful identification of its assignable causes. In the pattern recognition areas of artificial intelligence and machine learning, various promising approaches have been proposed, such as kernel-based nonlinear machine learning techniques. This work presents a kernel-based empirical monitoring scheme for batch-type production processes with the small-sample-size problem of partially unbalanced data. Measurement data from normal operations are easy to collect, whilst data on special events or faults are difficult to collect. In such situations, noise filtering techniques can be helpful in enhancing process monitoring performance. Furthermore, preprocessing of the raw process data is used to remove unwanted variation in the data. The performance of the monitoring scheme was demonstrated using three-dimensional batch data. The results showed that the monitoring performance was improved significantly in terms of the detection success rate for process faults.
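
One concrete instance of such a kernel-based monitor is kernel PCA fitted to normal data, with the reconstruction error of new samples as the fault statistic. The sketch below uses scikit-learn and synthetic stand-ins for the batch measurements; it illustrates the general idea, not the paper's exact scheme.

```python
import numpy as np
from sklearn.decomposition import KernelPCA

rng = np.random.default_rng(12)

# Normal operating data are plentiful; fault data are scarce (stand-ins).
normal = rng.normal(0.0, 1.0, (200, 10))
faulty = rng.normal(0.0, 1.0, (10, 10)) + np.array([3.0] + [0.0] * 9)

kpca = KernelPCA(n_components=5, kernel="rbf", gamma=0.05,
                 fit_inverse_transform=True).fit(normal)

def spe(X):
    # Squared prediction error (reconstruction error) as the fault statistic.
    recon = kpca.inverse_transform(kpca.transform(X))
    return ((X - recon) ** 2).sum(axis=1)

limit = np.quantile(spe(normal), 0.99)          # empirical control limit
print("fault detection rate:", np.mean(spe(faulty) > limit))
```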

Keywords: Process monitoring, kernel methods, multivariate filtering, data-driven techniques, quality improvement.

8542 A Critical Approach to Modern Conception in the Context of Objectivity and Quantitative Method

Authors: Sergun Kurtoglu

Abstract:

The struggle between modern and postmodern understanding is also displayed in the claimed superiority of quantitative and qualitative methods over each other, as evaluated within the scope of these understandings. By assuming that quantitative research (modern) is able to account for structure while qualitative research (postmodern) explains process, these methods are turned into vehicles for worldviews specific to a period. In fact, process does not function independently of structure. In addition, the ability of quantitative methods to provide scientific knowledge is controversial so long as they exclude the dialectical method. For this reason, the critiques charged against modernism in terms of quantitative methods are, in a sense, legitimate. Nevertheless, the main issue is within which parameters the postmodernist critique tries to legitimize its claims, and whether these parameters represent a point of view enabling democratic solutions. In this respect, the scientific knowledge covered in the Turkish media, as a means through which ordinary people have access to scientific knowledge, will be evaluated by means of content analysis within a new conception of objectivity.

Keywords: Knowledge and objectivity, dialectical method, qualitative and quantitative methods, modernism/postmodernism.

8541 Requirement Engineering and Software Product Line Scoping Paradigm

Authors: Ahmed Mateen, Zhu Qingsheng, Faisal Shahzad

Abstract:

Requirements engineering (RE) is the part of the software development lifecycle in which the requirements for the software to be built are established. Software product line development is a newer topic area within the domain of software engineering. RE also plays an important role in decision making and is ultimately helpful for productive software development in a growing business environment. Decisions are central to engineering processes, and they hold them together. It is argued that better decisions will lead to better engineering; achieving better decisions requires that they be understood in detail. To address these issues, companies are moving towards software product line engineering (SPLE), which helps provide large varieties of products with minimum development effort and cost. This paper proposes a new framework for software product lines and compares it with other models. The results can help in understanding the needs of SPL testing by identifying points that still require additional investigation. As a future scenario, we will combine this model in a controlled environment with industrial SPL projects, which will be a new horizon for SPL process management testing strategies.

Keywords: Requirements engineering, software product lines, scoping, process structure, domain specific language.

8540 Evaluation of Risk Attributes Driven by Periodically Changing System Functionality

Authors: Dariusz Dymek, Leszek Kotulski

Abstract:

Modeling of distributed systems allows us to represent their whole functionality. A working system instance, however, rarely fulfils the whole functionality represented by the model; usually some parts of this functionality need be accessible only periodically. A reporting system based on the Data Warehouse concept seems to be an intuitive example of a system in which some functionality is required only from time to time. Analyzing the enterprise risk associated with periodical changes in system functionality, we should consider not only the inaccessibility of components (objects) but also of their functions (methods), and the impact of such a situation on the system functionality from the business point of view. In this paper we suggest that the risk attributes should be estimated from the risk attributes specified at the requirements level (Use Cases in the UML model) on the basis of information about the structure of the model (presented at other levels of the UML model). We argue that it is desirable to consider the influence of periodical changes in requirements on the enterprise risk estimation. Finally, a proposition of such a solution based on the UML system model is presented.

Keywords: Risk assessing, software maintenance, UML, graph grammars.

8539 A Comparison of Image Data Representations for Local Stereo Matching

Authors: André Smith, Amr Abdel-Dayem

Abstract:

The stereo matching problem, while having been studied for several decades, continues to be an active area of research. The goal of this research is to find correspondences between elements found in a set of stereoscopic images. With these pairings, it is possible to infer the distance of objects within a scene relative to the observer. Advancements in this field have led to experimentation with various techniques, from graph-cut energy minimization to artificial neural networks. At the basis of these techniques is a cost function, which is used to evaluate the likelihood of a particular match between points in each image. While at its core the cost is based on comparing image pixel data, there is a general lack of consistency as to which image data representation to use. This paper presents an experimental analysis comparing the effectiveness of the more common image data representations. The goal is to determine how effectively these data representations reduce the cost of the correct correspondence relative to other possible matches.
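
The cost function at the heart of local matching can be made concrete with a sum-of-absolute-differences (SAD) block cost over a disparity range; swapping the input arrays between grayscale, color, or gradient representations is exactly the comparison the paper performs. The toy data below are illustrative.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def sad_disparity(left, right, max_disp=16, window=5):
    """Winner-takes-all disparity from a SAD cost volume (grayscale inputs)."""
    h, w = left.shape
    cost = np.full((max_disp, h, w), np.inf)
    for d in range(max_disp):
        diff = np.abs(left[:, d:] - right[:, :w - d])       # shift right image by d
        cost[d, :, d:] = uniform_filter(diff, size=window)  # block (SAD) aggregation
    return cost.argmin(axis=0)                              # lowest-cost disparity

# Toy pair: a random texture shifted by 6 pixels between the views.
rng = np.random.default_rng(13)
left = rng.random((60, 80))
right = np.roll(left, -6, axis=1)          # true disparity of 6 pixels
disp = sad_disparity(left, right)
print("median disparity estimate:", np.median(disp[:, 10:70]))
```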

Keywords: Colour data, local stereo matching, stereo correspondence, disparity map.
