Search results for: matrix fraction description

1563 A Study on the Different Components of a Typical Back-Scattered Chipless RFID Tag Reflection

Authors: Fatemeh Babaeian, Nemai Chandra Karmakar

Abstract:

Chipless RFID is a wireless tracking and identification system which uses passive tags for encoding data. The advantage of using a chipless RFID tag is having a planar tag which is printable on different low-cost materials like paper and plastic. The printed tag can be attached to different items at the labelling level. Since the price of a chipless RFID tag can be as low as a fraction of a cent, this technology has the potential to compete with conventional optical barcode labels. However, due to the passive structure of the tag, processing of the reflected signal is a crucial challenge. The captured signal reflected from a tag attached to an item consists of different components: the reflection from the reader antenna, the reflection from the item, the structural mode RCS component of the tag, and the antenna mode RCS of the tag. All these components are summed in both the time and frequency domains. The reflection from the item and the structural mode RCS component can distort or saturate the frequency domain signal and make it difficult to extract the desired component, which is the antenna mode RCS. Therefore, the reflection of the tag must be studied in both time and frequency domains to better understand the nature of the captured chipless RFID signal. Further benefits of this study are to find an optimised encoding technique at the tag design level and the best processing algorithm for the chipless RFID signal at the decoding level. In this paper, the reflection from a typical backscattered chipless RFID tag with six resonances is analysed, and the different components of the signal are separated in both time and frequency domains. Moreover, the time domain signal corresponding to each resonator of the tag is studied. The data for this processing were captured from simulation in CST Microwave Studio 2017. The outcome of this study is an understanding of the different components of a measured signal in a chipless RFID system and the identification of a research gap: the need for an optimum detection algorithm for tag ID extraction.
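
Because the structural-mode return arrives early and decays quickly while the antenna-mode return rings at the tag resonances, the two components are commonly separated by time-gating the captured signal before moving to the frequency domain. Below is a minimal sketch of that idea; the synthetic signal, sampling rate, gate position and resonance frequencies are illustrative assumptions, not values from the paper.

```python
# Hedged sketch: time-gating a backscattered chipless-RFID signal to separate
# the early-time structural-mode return from the late-time antenna-mode
# (resonant) return. All signal parameters below are illustrative assumptions.
import numpy as np

fs = 100e9                       # sampling rate, Hz (assumed)
t = np.arange(0, 20e-9, 1 / fs)  # 20 ns observation window

# Toy received signal: a short structural-mode pulse plus a decaying
# sum of six resonant (antenna-mode) ringdowns.
structural = np.exp(-((t - 1e-9) / 0.2e-9) ** 2)
res_freqs = np.linspace(3e9, 8e9, 6)             # six resonances (assumed)
antenna = sum(np.exp(-(t - 2e-9).clip(0) / 5e-9) *
              np.sin(2 * np.pi * f * t) for f in res_freqs)
rx = structural + 0.3 * antenna

# Time gate: keep only the late-time portion where the antenna mode dominates.
gate = t > 2.5e-9
antenna_mode = np.where(gate, rx, 0.0)

# Frequency-domain view of the gated signal: peaks near the six resonances
# encode the tag ID.
spectrum = np.abs(np.fft.rfft(antenna_mode))
freqs = np.fft.rfftfreq(len(t), 1 / fs)
peak = freqs[np.argmax(spectrum)]
print(f"strongest gated resonance ~ {peak / 1e9:.2f} GHz")
```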

Keywords: antenna mode RCS, chipless RFID tag, resonance, structural mode RCS

Procedia PDF Downloads 200
1562 Damage in Cementitious Materials Exposed to Sodium Chloride Solution and Thermal Cycling: The Effect of Using Supplementary Cementitious Materials

Authors: Fadi Althoey, Yaghoob Farnam

Abstract:

Sodium chloride (NaCl) can interact with the tricalcium aluminate (C3A) and its hydrates in the concrete matrix. This interaction can result in the formation of a harmful chemical phase as the temperature changes. This chemical phase is thought to be implicated in premature concrete deterioration in cold regions. This work examines the potential formation of the harmful chemical phase in various pastes prepared using different types of ordinary portland cement (OPC) and supplementary cementitious materials (SCMs). The chemical phase was quantified using low-temperature differential scanning calorimetry. The results showed that formation of the chemical phase can be reduced by using Type V cement (low C3A content). The SCMs showed different effects on the formation of the chemical phase. Slag and Class F fly ash can reduce the chemical phase by dilution of the cement, whereas silica fume can reduce the amount of the chemical phase by dilution and pozzolanic activity. Interestingly, the use of Class C fly ash has a negative effect on concrete exposed to NaCl, increasing the formation of the chemical phase.

Keywords: concrete, damage, chemical phase, NaCl, SCMs

Procedia PDF Downloads 144
1561 Clinical Evaluation of Neutrophil to Lymphocytes Ratio and Platelets to Lymphocytes Ratio in Immune Thrombocytopenic Purpura

Authors: Aisha Arshad, Samina Naz Mukry, Tahir Shamsi

Abstract:

Background: Immune thrombocytopenia (ITP) is an autoimmune disorder. Besides platelet counts, the immature platelet fraction (IPF) can be used as a tool to predict megakaryocytic activity in ITP patients. Clinical biomarkers like the neutrophil-to-lymphocyte ratio (NLR) and platelet-to-lymphocyte ratio (PLR) indicate inflammation and can be used as prognostic markers. The present study was planned to assess these ratios in ITP and their utility in predicting prognosis after treatment. Methods: A total of 111 patients with ITP and the same number of healthy individuals were included in this case-control study during the period January 2015 to December 2017. All ITP patients were grouped according to the guidelines of the International Working Group on ITP. A 3 cc blood sample was collected in an EDTA tube, and blood parameters were evaluated using a Sysmex 1000 analyzer. The ratios were calculated using the absolute counts of neutrophils, lymphocytes and platelets. Significant (p ≤ 0.05) differences between ITP patients and the healthy control group were determined by the Kruskal-Wallis test and Dunn's test, and Spearman's correlation test was done using SPSS version 23. Results: Significantly raised total leucocyte counts (TLC) and IPF, along with low platelet counts, were observed in ITP patients compared to healthy controls. Among the ITP groups, a very low platelet count, median (IQR) 2 (3.8) ×10⁹/L, with the highest IPF, mean (IQR) 25.4 (19.8)%, was observed in the newly diagnosed ITP group. The NLR was higher with disease progression, with elevated levels observed in P-ITP. The PLR was significantly lower in ND-ITP, P-ITP, C-ITP and R-ITP compared to controls (p < 0.001), as platelets were fewer in number in all ITP patients. Conclusion: The IPF can be used in the evaluation of the bone marrow response in ITP. The simple, reliably calculated NLR and PLR ratios can be used to predict prognosis and response to treatment in ITP and, to some extent, the severity of the disease.
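
The two ratios are plain quotients of the absolute counts, as defined in the abstract. A minimal sketch with illustrative counts (not patient data):

```python
# Hedged sketch: computing NLR and PLR from absolute counts as defined in the
# abstract. The example counts are illustrative, not data from the study.
def nlr(neutrophils_abs: float, lymphocytes_abs: float) -> float:
    """Neutrophil-to-lymphocyte ratio."""
    return neutrophils_abs / lymphocytes_abs

def plr(platelets_abs: float, lymphocytes_abs: float) -> float:
    """Platelet-to-lymphocyte ratio."""
    return platelets_abs / lymphocytes_abs

# Example counts in 10^9 cells/L (assumed values)
print(nlr(4.2, 1.5))   # ~2.8
print(plr(150, 1.5))   # 100.0
```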

Keywords: neutrophils, platelets, lymphocytes, infection

Procedia PDF Downloads 96
1560 Evaluation of Model-Based Code Generation for Embedded Systems–Mature Approach for Development in Evolution

Authors: Nikolay P. Brayanov, Anna V. Stoynova

Abstract:

The model-based development approach is gaining support and acceptance. Its higher abstraction level simplifies system description, allowing domain experts to do their best work without particular knowledge of programming. The different levels of simulation support rapid prototyping, verifying and validating the product even before it exists physically. Nowadays the model-based approach is beneficial for modelling complex embedded systems as well as for generating code for many different hardware platforms. Moreover, it can be applied in safety-relevant industries like automotive, which brings extra automation to the expensive device certification process, especially in software qualification. Using it, some companies report cost savings and quality improvements, but others claim no major changes or even cost increases. This publication examines the level of maturity and autonomy of the model-based approach for code generation. It is based on a real-life automotive seat heater (ASH) module, developed using The MathWorks, Inc. tools. The model, created with Simulink, Stateflow and Matlab, is used for automatic generation of C code with Embedded Coder. To prove the maturity of the process, the Code Generation Advisor is used for automatic configuration. All additional configuration parameters are set to auto, when applicable, leaving the generation process to function autonomously. As a result of the investigation, the publication compares the quality of the automatically generated embedded code with that of a manually developed one. The measurements show that, in general, the code generated by the automatic approach is no worse than the manual one. A deeper analysis of the technical parameters enumerates the disadvantages, some of which are identified as topics for our future work.

Keywords: embedded code generation, embedded C code quality, embedded systems, model-based development

Procedia PDF Downloads 244
1559 Design and Creation of a BCI Videogame for Training and Measure of Sustained Attention in Children with ADHD

Authors: John E. Muñoz, Jose F. Lopez, David S. Lopez

Abstract:

Attention Deficit Hyperactivity Disorder (ADHD) is a disorder that affects 1 out of 5 Colombian children, making it a real public health problem in the country. Conventional treatments such as medication and neuropsychological therapy have proved insufficient to decrease the high incidence of ADHD in the principal Colombian cities. This work presents the design and development of a videogame that uses a brain-computer interface not only as an input device but also as a tool to monitor neurophysiological signals. The videogame, named 'The Harvest Challenge', is set in the cultural context of a Colombian coffee grower, where the player uses his/her avatar in three mini-games created to reinforce four fundamental abilities: i) waiting, ii) planning, iii) following instructions and iv) achieving objectives. The details of the collaborative design process of this multimedia tool, guided by specific clinical needs, and the proposed interactions based on the mental states of attention and relaxation are presented. The final videogame is presented as a tool for sustained attention training in children with ADHD, using as its action mechanism the neuromodulation of beta and theta waves through an electrode located in the central part of the frontal lobe. The electroencephalographic signal is processed automatically inside the videogame, generating a report of the evolution of the theta/beta ratio, a biological marker which has been shown to be a sufficient measure to discriminate between children with and without attention deficit.
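
The theta/beta ratio named in the abstract is typically computed as the ratio of EEG band powers in the theta (roughly 4-8 Hz) and beta (roughly 13-30 Hz) bands. A minimal sketch using Welch power spectral density on a synthetic signal; the band edges, sampling rate and signal are assumptions, not details from the videogame's pipeline:

```python
# Hedged sketch: estimating the theta/beta ratio from a single frontal EEG
# channel via Welch band power. Band edges and the synthetic signal are
# common textbook choices, not taken from the paper.
import numpy as np
from scipy.signal import welch

fs = 256                                   # EEG sampling rate, Hz (assumed)
t = np.arange(0, 10, 1 / fs)
eeg = (np.sin(2 * np.pi * 6 * t)           # theta component (4-8 Hz)
       + 0.5 * np.sin(2 * np.pi * 20 * t)  # beta component (13-30 Hz)
       + 0.1 * np.random.randn(t.size))

f, psd = welch(eeg, fs=fs, nperseg=fs * 2)

def band_power(f, psd, lo, hi):
    mask = (f >= lo) & (f < hi)
    return np.trapz(psd[mask], f[mask])    # integrate PSD over the band

theta = band_power(f, psd, 4, 8)
beta = band_power(f, psd, 13, 30)
print(f"theta/beta ratio: {theta / beta:.2f}")
```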

Keywords: BCI, neuromodulation, ADHD, videogame, neurofeedback, theta/beta ratio

Procedia PDF Downloads 372
1558 Polymer-Nanographite Nanocomposites for Biosensor Applications

Authors: Payal Mazumdar, Sunita Rattan, Monalisa Mukherjee

Abstract:

Polymer nanocomposites are a special class of materials having unique properties and wide application in diverse areas such as EMI shielding, sensors, photovoltaic cells, membrane separation properties, drug delivery etc. Recently the nanocomposites are being investigated for their use in biomedical fields as biosensors. Though nanocomposites with carbon nanoparticles have received worldwide attention in the past few years, comparatively less work has been done on nanographite although it has in-plane electrical, thermal and mechanical properties comparable to that of carbon nanotubes. The main challenge in the fabrication of these nanocomposites lies in the establishment of homogeneous dispersion of nanographite in polymer matrix. In the present work, attempts have been made to synthesize the nanocomposites of polystyrene and nanographite using click chemistry. The polymer and the nanographite are functionalized prior to the formation of nanocomposites. The polymer, polystyrene, was functionalized with alkyne moeity and nanographite with azide moiety. The fabricating of the nanocomposites was accomplished through click chemistry using Cu (I)-catalyzed Huisgen dipolar cycloaddition. The functionalization of filler and polymer was confirmed by NMR and FTIR. The nanocomposites formed by the click chemistry exhibit better electrical properties and the sensors are evaluated for their application as biosensors.

Keywords: nanocomposites, click chemistry, nanographite, biosensor

Procedia PDF Downloads 307
1557 Decision Support System for the Management of the Shandong Peninsula, China

Authors: Natacha Fery, Guilherme L. Dalledonne, Xiangyang Zheng, Cheng Tang, Roberto Mayerle

Abstract:

A Decision Support System (DSS) for supporting decision makers in the management of the Shandong Peninsula has been developed. Emphasis has been given to coastal protection, coastal cage aquaculture and harbors. The investigations were done in the framework of a joint research project funded by the German Ministry of Education and Research (BMBF) and the Chinese Academy of Sciences (CAS). In this paper, a description of the DSS, the development of its components, and results of its application are presented. The system integrates in-situ measurements, process-based models, and a database management system. Numerical models for the simulation of flow, waves, sediment transport and morphodynamics covering the entire Bohai Sea are set up based on the Delft3D modelling suite (Deltares). Calibration and validation of the models were realized based on the measurements of moored Acoustic Doppler Current Profilers (ADCP) and High Frequency (HF) radars. In order to enable cost-effective and scalable applications, a database management system was developed. It enhances information processing, data evaluation, and supports the generation of data products. Results of the application of the DSS to the management of coastal protection, coastal cage aquaculture and harbors are presented here. Model simulations covering the most severe storms observed during the last decades were carried out leading to an improved understanding of hydrodynamics and morphodynamics. Results helped in the identification of coastal stretches subjected to higher levels of energy and improved support for coastal protection measures.

Keywords: coastal protection, decision support system, in-situ measurements, numerical modelling

Procedia PDF Downloads 195
1556 Uniqueness of Fingerprint Biometrics to Human Dynasty: A Review

Authors: Siddharatha Sharma

Abstract:

With the advent of technology and machines, biometrics is taking an important place in society for secure living. Security issues are a major concern in today's world and continue to grow in intensity and complexity. Biometrics-based recognition, which involves precise measurement of the characteristics of living beings, is not a new method. Fingerprints have been used for many years by law enforcement and forensic agencies to identify culprits and apprehend them. Biometrics is based on four basic principles: (i) uniqueness, (ii) accuracy, (iii) permanency and (iv) peculiarity. In today's world, fingerprints are the most popular and unique biometric method, claiming social benefit in government-sponsored programs. A remarkable example is UIDAI (Unique Identification Authority of India) in India. In the case of fingerprint biometrics, the matching accuracy is very high. It has been observed empirically that even identical twins do not have similar prints. With the passage of time there has been immense progress in sensing techniques, computational speed, operating environments and storage capabilities, and the method has become more convenient for users. Only a small fraction of the population may be unsuitable for automatic identification because of genetic factors, aging, or environmental or occupational reasons, for example workers who have cuts and bruises on their hands which keep fingerprints changing. Fingerprints are limited to human beings because of the presence of volar skin with corrugated ridges which are unique to this species. Fingerprint biometrics has proved to be a high-level authentication system for the identification of human beings. It does have limitations: for example, if the ridges of the finger(s) or palm are moist, authentication becomes difficult and the method may be inefficient and ineffective. This paper focuses on the uniqueness of fingerprints to human beings in comparison to other living beings and reviews the advancement of emerging technologies and their limitations.

Keywords: fingerprinting, biometrics, human beings, authentication

Procedia PDF Downloads 325
1555 Application of Constructivist-Based (5E’s) Instructional Approach on Pupils’ Retention: A Case Study in Primary Mathematics in Enugu State

Authors: Ezeamagu M.U, Madu B.C

Abstract:

This study was designed to investigate the efficacy of the 5Es constructivist-based instructional model on pupils' retention in primary Mathematics. 5Es stands for Engagement, Exploration, Explanation, Elaboration and Evaluation. The study adopted the pre-test post-test non-equivalent control group quasi-experimental research design. The sample for the study was one hundred and thirty-four (134) pupils, seventy-six (76) male and fifty-eight (58) female, from two primary schools in Nsukka education zone. Two intact classes in each of the sampled schools, comprising all the primary four pupils, were used. Each school was randomly assigned to either the experimental or the control group. The experimental group was taught using the 5Es model while the control group was taught using the conventional method. Two research questions were formulated to guide the study and three hypotheses were tested at p ≤ 0.05. A Fraction Achievement Test (FAT) of ten (10) questions was used to obtain data on pupils' retention. Research questions were answered using mean and standard deviation, while hypotheses were tested using analysis of covariance (ANCOVA), as sketched below. The results revealed that the 5Es model was more effective than the conventional method of teaching in enhancing pupils' performance and retention in mathematics; secondly, there was no significant difference in the mean retention scores of male and female pupils taught using the 5Es instructional model. Based on the findings, it was recommended, among other things, that the 5Es instructional model should be adopted in the teaching of mathematics at the primary level of the educational system. Seminars, workshops and conferences on the use of the 5Es model should be mounted by professional bodies and the federal and state ministries of education. This will enable mathematics educators, serving teachers and students alike to benefit from the approach.
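
For readers unfamiliar with ANCOVA in this setting, the sketch below analyses a post-test score with the pre-test score as covariate and teaching method as factor, using statsmodels; the data frame is synthetic and the effect size is an assumption, not the study's data.

```python
# Hedged sketch: ANCOVA of post-test retention scores with pre-test score as
# covariate and teaching method as factor, mirroring the analysis named in
# the abstract. The data are synthetic.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm

rng = np.random.default_rng(42)
n = 67
df = pd.DataFrame({
    "pre": rng.normal(50, 10, 2 * n),
    "group": ["5Es"] * n + ["conventional"] * n,
})
effect = np.where(df["group"] == "5Es", 8.0, 0.0)   # assumed treatment effect
df["post"] = df["pre"] * 0.6 + effect + rng.normal(0, 5, 2 * n)

model = smf.ols("post ~ pre + C(group)", data=df).fit()
print(anova_lm(model, typ=2))   # F-test of the group effect, adjusted for pre
```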

Keywords: constructivist, education, mathematics, primary, retention

Procedia PDF Downloads 451
1554 Analysis, Evaluation and Optimization of Food Management: Minimization of Food Losses and Food Wastage along the Food Value Chain

Authors: G. Hafner

Abstract:

A method developed at the University of Stuttgart will be presented: 'Analysis, Evaluation and Optimization of Food Management'. A major focus is the quantification of food losses and food waste as well as their classification and evaluation with regard to system optimization through waste prevention. For the quantification and accounting of food, food losses and food waste along the food chain, a clear definition of core terms is required at the outset. This includes their methodological classification and demarcation within sectors of the food value chain. The food chain is divided into agriculture, industry and crafts, trade and consumption (at home and out of home). To agree on core terms, the authors have cooperated with relevant stakeholders in Germany with the goal of holistic and agreed definitions for the whole food chain. This includes modeling of sub-systems within the food value chain, definition of terms, differentiation between food losses and food wastage, and methodological approaches. 'Food losses' and 'food wastes' are assigned to individual sectors of the food chain, including a description of the respective methods. The method for the analysis, evaluation and optimization of food management systems consists of the following parts: Part I: Terms and Definitions. Part II: System Modeling. Part III: Procedure for Data Collection and Accounting. Part IV: Methodological Approaches for Classification and Evaluation of Results. Part V: Evaluation Parameters and Benchmarks. Part VI: Measures for Optimization. Part VII: Monitoring of Success. The method will be demonstrated using the example of an investigation of food losses and food wastage in the Federal State of Bavaria, including an extrapolation of the respective results to quantify food wastage in Germany.

Keywords: food losses, food waste, resource management, waste management, system analysis, waste minimization, resource efficiency

Procedia PDF Downloads 406
1553 Effect of Shading in Evaporatively Cooled Greenhouses in the Mediterranean Region

Authors: Nikolaos Katsoulas, Sofia Faliagka, Athanasios Sapounas

Abstract:

Greenhouse ventilation is an effective way to remove extra heat from the greenhouse through air exchange between inside and outside when the outside air temperature is lower. However, in Mediterranean areas during summer, for most of the day the outside air temperature reaches values above 25 °C, and natural ventilation cannot remove the excess heat from the greenhouse. Shade screens and whitewash are the major existing measures used to reduce the greenhouse air temperature during summer by reducing the solar radiation entering the greenhouse. However, the greenhouse air temperature is reduced at the cost of reduced radiation. In addition, due to the high air temperature outside the greenhouse, these systems are generally not sufficient for extracting the excess energy during sunny summer days, and therefore other cooling methods, such as forced ventilation combined with evaporative cooling, are needed. Evaporative cooling by means of pad-and-fan or fog systems is a common technique to reduce the sensible heat load by increasing the latent heat fraction of the dissipated energy. In most cases, when all the above systems are available, greenhouse growers apply both shading and evaporative cooling. If a movable screen is available, the screen is usually activated when a certain radiation level is reached. It is not clear whether shading screens should be used over the whole growth cycle or only during the most sensitive stages, when the crop has a low leaf area and canopy transpiration cannot contribute significantly to greenhouse cooling. Furthermore, it is not clear what the optimum radiation level is at which the screen should be activated. This work presents the microclimate and the cucumber crop physiological response and yield observed in two greenhouse compartments equipped with a pad-and-fan evaporative cooling system and a thermal/shading screen that is activated at different radiation levels: when the outside solar radiation reaches 700 or 900 W/m². The greenhouse is located in Velestino, in Central Greece, and the measurements were performed during the spring-summer period, with the outside air temperature during summer reaching values up to 42 °C.
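
The screen-activation rule under comparison reduces to a radiation threshold test; a minimal control sketch with the trial's two set points (700 and 900 W/m²) and an assumed hysteresis band to prevent the screen from cycling rapidly around the threshold:

```python
# Hedged sketch: radiation-threshold rule for deploying the shading screen.
# The 700/900 W/m2 set points come from the abstract; the hysteresis band is
# an assumption added so the screen does not chatter near the threshold.
def screen_should_deploy(radiation_wm2: float, deployed: bool,
                         threshold_wm2: float = 700.0,
                         hysteresis_wm2: float = 50.0) -> bool:
    if deployed:
        # Stay deployed until radiation drops below threshold - hysteresis.
        return radiation_wm2 > threshold_wm2 - hysteresis_wm2
    return radiation_wm2 > threshold_wm2

state = False
for rad in (500, 720, 680, 640, 910):
    state = screen_should_deploy(rad, state, threshold_wm2=700)
    print(rad, "->", "deployed" if state else "retracted")
```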

Keywords: microclimate, shading, screen, pad and fan, cooling

Procedia PDF Downloads 84
1552 Effect of Vesicular Arbuscular Mycorrhiza on Phytoremedial Potential and Physiological Changes in Solanum melongena Plants Grown under Heavy Metal Stress

Authors: Ritu Chaturvedi, Mayank Varun, M. S. Paul

Abstract:

Heavy metal contamination of soil is a growing area of concern, since soil is the matrix that supports flora and impacts humans directly. Phytoremediation of contaminated sites is gaining popularity due to its cost-effectiveness and solar-driven nature. Some hyperaccumulators have been identified for their potential. Metal-accumulating plants have various mechanisms to cope with stress, one of which is increasing antioxidative capacity. The aim of this research is to assess the effect of vesicular arbuscular mycorrhiza (VAM) application on the phytoremedial potential of Solanum melongena (eggplant) and on the levels of photosynthetic pigments and antioxidative enzymes. Results showed that VAM application increased shoot length and the root proliferation pattern of the plants. The levels of photosynthetic pigments, proline, SOD, CAT and APX altered significantly in response to heavy metal treatment. In conclusion, VAM increased the uptake of heavy metals, which led to the activation of the defense system in plants for scavenging free radicals.

Keywords: heavy metal, phytoextraction, phytostabilization, reactive oxygen species

Procedia PDF Downloads 276
1551 Increased Reaction and Movement Times When Text Messaging during Simulated Driving

Authors: Adriana M. Duquette, Derek P. Bornath

Abstract:

Reaction Time (RT) and Movement Time (MT) are important components of everyday life that affect the way in which we move about our environment. These measures become even more crucial when an event can be caused (or avoided) in a fraction of a second, such as the RT and MT required while driving. The purpose of this study was to develop a simpler method of testing RT and MT during simulated driving, with or without text messaging, in a university-aged population (n = 170). In the control condition, a randomly delayed red light stimulus flashed on a computer interface after the participant began pressing the 'gas' pedal on a foot switch mat. Simple RT was defined as the time between the presentation of the light stimulus and the initiation of lifting the foot from the switch mat 'gas' pedal, while MT was defined as the time from the initiation of lifting the foot to the initiation of depressing the switch mat 'brake' pedal. In the texting condition, upon pressing the 'gas' pedal, a 'text message' appeared on the computer interface in a dialog box, which the participant typed on their cell phone while waiting for the light stimulus to turn red. In both conditions, the sequence was repeated 10 times, and an average RT (seconds) and average MT (seconds) were recorded. Condition significantly (p < .001) impacted overall RTs, as the texting condition (0.47 s) took longer than the no-texting (control) condition (0.34 s). Longer MTs were also recorded during the texting condition (0.28 s) than in the control condition (0.23 s), p = .001. The overall increase in Response Time (RT + MT) of 189 ms during the texting condition would equate to an additional 4.2 meters travelled (before reacting to the stimulus and beginning to brake) if the participant had been driving an automobile at 80 km per hour. In conclusion, increasing task complexity through the dual-task demand of text messaging during simulated driving caused significant increases in RT (41%), MT (23%) and Response Time (34%), further strengthening the mounting evidence against text messaging while driving.
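
The 4.2 m figure follows directly from the 189 ms response-time penalty at 80 km/h; a one-function sketch of the arithmetic:

```python
# Hedged sketch: the distance arithmetic behind the abstract's 4.2 m figure --
# a 189 ms increase in response time at 80 km/h.
def extra_distance_m(delta_response_s: float, speed_kmh: float) -> float:
    speed_ms = speed_kmh / 3.6          # km/h -> m/s
    return delta_response_s * speed_ms

print(extra_distance_m(0.189, 80))      # ~4.2 m
```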

Keywords: simulated driving, text messaging, reaction time, movement time

Procedia PDF Downloads 525
1550 Microwave Transmission through Metamaterial Based on Permalloy Flakes under Magnetic Resonance and Antiresonance Conditions

Authors: Anatoly B. Rinkevich, Eugeny A. Kuznetsov, Yuri I. Ryabkov

Abstract:

Transmission of electromagnetic waves through a plate of metamaterial based on permalloy flakes, and reflection from the plate, are investigated. The metamaterial is prepared from permalloy flakes sized from a few to 50 μm placed in an epoxy-amine matrix. Two series of metamaterial samples are under study, with permalloy volume fractions of 15% and 30%. There is no direct electrical contact between the permalloy particles. Microwave measurements have been carried out at frequencies of 12 to 30 GHz in magnetic fields up to 12 kOe. A sharp decrease of the transmitted wave, caused by absorption, is observed under the ferromagnetic resonance condition. Under the magnetic antiresonance condition, in contrast, a maximum of the reflection coefficient is observed at frequencies exceeding 30 GHz. For example, for the metamaterial sample with a permalloy volume fraction of 30%, the variation of the reflection coefficient in a magnetic field reaches 300%. These large variations are of interest for developing magnetic-field-driven microwave devices. Magnetic field variations of the refractive index are also estimated.

Keywords: ferromagnetic resonance, magnetic antiresonance, microwave metamaterials, permalloy flakes, transmission and reflection coefficients

Procedia PDF Downloads 140
1549 Elastohydrodynamic Lubrication Study Using Discontinuous Finite Volume Method

Authors: Prawal Sinha, Peeyush Singh, Pravir Dutt

Abstract:

Problems in elastohydrodynamic lubrication have attracted a lot of attention in the last few decades. Solving a two-dimensional problem has always been a big challenge. In this paper, a new discontinuous finite volume method (DVM) for the two-dimensional point contact Elastohydrodynamic Lubrication (EHL) problem has been developed and analyzed. A complete algorithm is presented for solving such a problem. The method is robust and easily parallelized in an MPI architecture. The GMRES technique is implemented to solve the linear system obtained from the formulation, as sketched below. A new approach is followed in which discontinuous piecewise polynomials are used for the trial functions. It is natural to expect that the advantages of using discontinuous functions in finite element methods should also apply to finite volume methods. The nature of the discontinuity of the trial function is such that the elements in the corresponding dual partition have the smallest support compared with classical finite volume methods. The film thickness calculation is done using a singular quadrature approach. Results obtained are presented graphically and discussed. This method is well suited for solving the EHL point contact problem and could potentially be developed into commercial software.
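
As a minimal illustration of the solver stage, the sketch below applies preconditioned GMRES from SciPy to a sparse system; the 1D Poisson-like matrix and ILU preconditioner are stand-ins, not the paper's EHL system:

```python
# Hedged sketch: solving a sparse linear system with GMRES, the iterative
# solver the abstract names. The matrix here is a simple 1D Poisson-like
# stand-in for the actual discontinuous finite-volume EHL system.
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import gmres, spilu, LinearOperator

n = 1000
A = sp.diags([-1, 2, -1], [-1, 0, 1], shape=(n, n), format='csc')
b = np.ones(n)

# ILU preconditioner -- a common pairing with GMRES for such systems.
ilu = spilu(A)
M = LinearOperator((n, n), matvec=ilu.solve)

x, info = gmres(A, b, M=M, restart=50)
print("converged" if info == 0 else f"gmres info = {info}")
print("residual norm:", np.linalg.norm(b - A @ x))
```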

Keywords: elastohydrodynamic, lubrication, discontinuous finite volume method, GMRES technique

Procedia PDF Downloads 258
1548 EnumTree: An Enumerative Biclustering Algorithm for DNA Microarray Data

Authors: Haifa Ben Saber, Mourad Elloumi

Abstract:

In a number of domains, such as DNA microarray data analysis, we need to cluster the rows (genes) and columns (conditions) of a data matrix simultaneously to identify groups of constant rows within a group of columns. This kind of clustering is called biclustering. Biclustering algorithms are extensively used in DNA microarray data analysis, and more effective biclustering algorithms are highly desirable. We introduce a new algorithm, Enumerative Tree (EnumTree), for the biclustering of binary microarray data, which adopts the approach of enumerating biclusters. This algorithm extracts all consistent biclusters of good quality. The main idea of EnumTree is the construction of a new tree structure to adequately represent the different biclusters discovered during the enumeration process. The algorithm adopts the strategy of discovering all biclusters at a time. The performance of the proposed algorithm is assessed using both synthetic and real DNA microarray data; our algorithm outperforms other biclustering algorithms for binary microarray data, finding biclusters with different numbers of rows. Moreover, we test the biological significance using a gene annotation web tool to show that our proposed method is able to produce biologically relevant biclusters.
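
A drastically simplified stand-in for the enumeration idea: exhaustively enumerate column subsets of a binary matrix and keep the all-ones biclusters. The real EnumTree organizes this search in a tree to avoid redundancy; the matrix below is a toy example.

```python
# Hedged sketch: naive enumeration of all-ones biclusters in a binary matrix.
# This brute-force version only illustrates what is being enumerated; it is
# not the paper's tree-based algorithm.
from itertools import combinations
import numpy as np

data = np.array([[1, 1, 0, 1],
                 [1, 1, 0, 1],
                 [0, 1, 1, 0],
                 [1, 1, 0, 1]])

biclusters = []
n_cols = data.shape[1]
for k in range(2, n_cols + 1):                    # column subsets of size >= 2
    for cols in combinations(range(n_cols), k):
        rows = np.where(data[:, list(cols)].all(axis=1))[0]
        if len(rows) >= 2:                        # keep biclusters with >= 2 rows
            biclusters.append((tuple(int(r) for r in rows), cols))

for rows, cols in biclusters:
    print("rows", rows, "cols", cols)
```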

Keywords: DNA microarray, biclustering, gene expression data, tree, data mining

Procedia PDF Downloads 372
1547 Numerical Study for Compressive Strength of Basalt Composite Sandwich Infill Panel

Authors: Viriyavudh Sim, Jung Kyu Choi, Yong Ju Kwak, Oh Hyeon Jeon, Woo Young Jung

Abstract:

In this study, we investigated the buckling performance of basalt fiber reinforced polymer (BFRP) sandwich infill panels. Fiber Reinforced Polymer (FRP) is a major development for energy dissipation when used as the infill material of a frame structure; a basic Polymer Matrix Composite (PMC) infill wall system consists of two FRP laminates surrounding a foam core. This type of component is intended for retrofitting and strengthening frame structures to withstand seismic disasters. In-plane compression was considered in the numerical analysis, carried out on the ABAQUS platform, to determine the buckling failure load of the BFRP infill panel system. The present results show that the sandwich BFRP infill panel system has a higher resistance to buckling failure than the glass fiber reinforced polymer (GFRP) infill panel system, i.e. a 16% increase in buckling resistance capacity.

Keywords: Basalt Fiber Reinforced Polymer (BFRP), buckling performance, FEM analysis, sandwich infill panel

Procedia PDF Downloads 441
1546 Communication Infrastructure Required for a Driver Behaviour Monitoring System, ‘SiaMOTO’ IT Platform

Authors: Dogaru-Ulieru Valentin, Sălișteanu Ioan Corneliu, Ardeleanu Mihăiță Nicolae, Broscăreanu Ștefan, Sălișteanu Bogdan, Mihai Mihail

Abstract:

The SiaMOTO system is a communications and data processing platform for vehicle traffic. The human factor is the most important factor in the generation of this data, as the driver is the one who dictates the trajectory of the vehicle. Like any trajectory, its specific parameters refer to position, speed and acceleration. Constant knowledge of these parameters allows complex analyses. Roadways allow many vehicles to travel through their confined space, and the overlapping trajectories of several vehicles increase the likelihood of collision events, known as road accidents. Any such event has causes that lead to its occurrence, so the conditions for its occurrence are known. The human factor is predominant in deciding the trajectory parameters of the vehicle on the road, so monitoring it through the events reported by the DiaMOTO device over time will generate a guide to target any potentially high-risk driving behavior and reward those who control the driving phenomenon well. In this paper, we have focused on detailing the communication infrastructure between the DiaMOTO device and the traffic data collection server, the infrastructure through which the database to be used for complex AI/DLM analysis is built. The central element of this description is the data string in CODEC-8 format sent by the DiaMOTO device to the SiaMOTO collection server database. The data presented are specific to a functional infrastructure implemented at the experimental model stage, by installing DiaMOTO devices, each with a unique code and integrating ADAS and GPS functions, on 50 vehicles, through which vehicle trajectories can be monitored 24 hours a day.
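
As an illustration of what parsing such a data string involves, the sketch below unpacks the GPS element of one AVL record assuming the publicly documented Teltonika Codec 8 layout (big-endian fields; longitude/latitude as signed integers in 1e-7 degrees). The field offsets are assumptions based on that public description, not on SiaMOTO documentation:

```python
# Hedged sketch: unpacking the GPS element of one Codec-8 AVL record, under
# the assumed standard Codec 8 layout. Not SiaMOTO's actual parser.
import struct

def parse_gps_element(record: bytes) -> dict:
    # An AVL record starts with timestamp (8 B, ms since epoch) and
    # priority (1 B); the GPS element is assumed to follow at offset 9.
    timestamp_ms, priority = struct.unpack_from(">QB", record, 0)
    lon, lat, alt, angle, sats, speed = struct.unpack_from(">iihhBH", record, 9)
    return {
        "timestamp_ms": timestamp_ms,
        "priority": priority,
        "longitude_deg": lon / 1e7,
        "latitude_deg": lat / 1e7,
        "altitude_m": alt,
        "heading_deg": angle,
        "satellites": sats,
        "speed_kmh": speed,
    }

# Toy record: fields packed the same way they are parsed (illustrative values).
record = struct.pack(">QBiihhBH", 1700000000000, 1,
                     int(25.2797 * 1e7), int(54.6872 * 1e7), 112, 90, 9, 50)
print(parse_gps_element(record))
```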

Keywords: DiaMOTO, Codec-8, ADAS, GPS, driver monitoring

Procedia PDF Downloads 80
1545 Local Radial Basis Functions for Helmholtz Equation in Seismic Inversion

Authors: Hebert Montegranario, Mauricio Londoño

Abstract:

Solutions of the Helmholtz equation are essential in seismic imaging methods like full wave inversion, which needs to solve the wave equation many times. Traditional methods like the Finite Element Method (FEM) or Finite Differences (FD) produce sparse matrices but may suffer from the so-called pollution effect in numerical solutions of the Helmholtz equation for large values of the wavenumber. On the other hand, global radial basis functions have better accuracy but produce full matrices that become unstable. In this research we combine the virtues of both approaches to find numerical solutions of the Helmholtz equation, by applying a meshless method that produces sparse matrices through local radial basis functions. We solve the equation with absorbing boundary conditions of the Clayton-Engquist and PML (Perfectly Matched Layer) types and compare with results in the standard literature, showing promising performance in tackling both the pollution effect and matrix instability.
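
A minimal sketch of how local radial basis functions yield sparse matrices: Gaussian RBF-FD weights on 3-node stencils for the 1D Helmholtz operator u'' + k²u, assembled and solved against the known closed-form solution. The wavenumber, shape parameter, stencil size and Dirichlet data are illustrative choices, not the paper's setup.

```python
# Hedged sketch: local RBF (RBF-FD) stencil weights for the 1D Helmholtz
# operator, assembled into a sparse matrix -- a toy version of "local RBFs
# give sparse matrices". All parameters are assumptions.
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import spsolve

n = 201
x = np.linspace(0.0, 1.0, n)
h = x[1] - x[0]
k = 20.0                       # wavenumber (assumed)
eps = 0.5 / h                  # Gaussian shape parameter tied to the spacing

phi = lambda r: np.exp(-(eps * r) ** 2)
# second derivative of the Gaussian RBF with respect to the evaluation point
phi_xx = lambda r: (4 * eps**4 * r**2 - 2 * eps**2) * np.exp(-(eps * r) ** 2)

A = sp.lil_matrix((n, n))
for i in range(1, n - 1):
    idx = [i - 1, i, i + 1]                        # local 3-node stencil
    xs = x[idx]
    G = phi(xs[:, None] - xs[None, :])             # local interpolation matrix
    d = phi_xx(x[i] - xs) + k**2 * phi(x[i] - xs)  # Helmholtz operator at x_i
    for j, w in zip(idx, np.linalg.solve(G, d)):
        A[i, j] = w                                # sparse row: 3 nonzeros
A[0, 0] = A[n - 1, n - 1] = 1.0                    # Dirichlet boundary rows

b = np.zeros(n)
b[n - 1] = 1.0                                     # u(0) = 0, u(1) = 1
u = spsolve(A.tocsr(), b)
print("u(0.5) ~", u[n // 2], " exact:", np.sin(k * 0.5) / np.sin(k))
```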

Keywords: Helmholtz equation, meshless methods, seismic imaging, wavefield inversion

Procedia PDF Downloads 548
1544 Right Ventricular Dynamics During Breast Cancer Chemotherapy in Low Cardiovascular Risk Patients

Authors: Nana Gorgiladze, Tamar Gaprindashvili, Mikheil Shavdia, Zurab Pagava

Abstract:

Introduction/Purpose: Chemotherapy is a common treatment for breast cancer, but it can also cause damage to the heart and blood vessels. This damage, known as cancer therapy-related cardiovascular toxicity (CTR-CVT), can increase the risk of heart failure and death in breast cancer patients. The left ventricle is often affected by CTR-CVT, but the right ventricle (RV) may also be vulnerable and may show signs of dysfunction before the left ventricle. The study aims to investigate how RV function changes during chemotherapy for breast cancer by using conventional echocardiographic and global longitudinal strain (GLS) techniques. By measuring the GLS of the RV, the researchers aim to detect early signs of CTR-CVT and improve the management of breast cancer patients. Methods: The study was conducted on 28 women with low cardiovascular risk who received anthracycline chemotherapy for breast cancer. Conventional 2D echocardiography (LVEF, RVS', TAPSE) and speckle-tracking echocardiography (STE) measurements of the left and right ventricles (LVGLS, RVGLS) were used to assess cardiac function before and after chemotherapy. All patients had normal LVEF at the beginning of the study. Cardiotoxicity was defined as a new LVEF reduction of 10 percentage points to an LVEF of 40-49% and/or a new decline in GLS of 15% from baseline, as proposed by the most recent cardio-oncology guideline. Results: LVGLS decreased from -21.2% ± 2.1% to -18.6% ± 2.6% (t = -4.116; df = 54, p = 0.001). The change in LVGLS was 2.6% ± 3.0%, and the mean percentage change of the LVGLS was 11.6% ± 13.3% (p = 0.001). Similarly, the right ventricular global longitudinal strain (RVGLS) decreased from -25.2% ± 2.9% to -21.4% ± 4.4% (t = -3.82; df = 54, p = 0.001). The change in RVGLS was 3.8% ± 3.6%, and the percentage decrease of the RVGLS was 15.0% ± 14.3% (p = 0.001). However, the changes in right ventricular systolic velocity (RVS') and tricuspid annular plane systolic excursion (TAPSE) were insignificant, and the left ventricular ejection fraction (LVEF) remained unchanged.

Keywords: cardiotoxicity, chemotherapy, GLS, right ventricle

Procedia PDF Downloads 72
1543 Thermal Resistance Analysis of Flexible Composites Based on Al2O3 Aerogels

Authors: Jianzheng Wei, Duo Zhen, Zhihan Yang, Huifeng Tan

Abstract:

Deployable descent technology is a lightweight entry method using an inflatable heat shield. The heat shield consists of a pressurized core which is covered by different layers of thermal insulation and flexible ablative materials in order to protect against the thermal loads. In this paper, both aluminum and silicon-aluminum aerogels were prepared by the freeze-drying method. The latter material has a larger specific surface area and nano-scale pores. Mullite fibers are used as the reinforcing fibers in the aerogel matrix to improve composite flexibility. The flexible composite materials were tested as an insulation layer over an underlying aramid fabric in a thermal shock test at a heat flux density of 120 kW/m² and in a uniaxial tensile test. The results show that aramid fabric with untreated mullite fibers as the thermal protective layer is completely carbonized after about 60 s of heating. With the composite as its thermal resistance layer, the aramid fabric still has good mechanical properties under the same heating conditions.

Keywords: aerogel, aramid fabric, flexibility, thermal resistance

Procedia PDF Downloads 153
1542 Additive Weibull Model Using Warranty Claims and Finite Element Fatigue Analysis

Authors: Kanchan Mondal, Dasharath Koulage, Dattatray Manerikar, Asmita Ghate

Abstract:

This paper presents an additive reliability model using warranty data and Finite Element Analysis (FEA) data. Warranty data for any product give insight into its underlying issues and are often used by reliability engineers to build prediction models to forecast the failure rate of parts. But there is one major limitation in using warranty data for prediction: warranty periods constitute only a small fraction of the total lifetime of a product, and most of the time they cover only the infant-mortality and useful-life zones of the bathtub curve. Predicting with warranty data alone in these cases does not generally provide results with the desired accuracy. The failure rate of a mechanical part is driven by random issues initially and by wear-out or usage-related issues at later stages of the lifetime. For better predictability of the failure rate, one needs to explore the failure rate behavior in the wear-out zone of the bathtub curve. Due to cost and time constraints, it is not always possible to test samples until failure, but FEA fatigue analysis can provide the failure rate behavior of a part well beyond the warranty period, more quickly and at lower cost. In this work, the authors propose an Additive Weibull Model which makes use of both warranty and FEA fatigue analysis data for predicting failure rates. It involves modeling two data sets for a part, one with existing warranty claims and the other with fatigue life data. Hazard-rate-based Weibull estimation is used for modeling the warranty data, whereas S-N-curve-based Weibull parameter estimation is used for the FEA data. The parameters of the two separate Weibull models are estimated and combined to form the proposed Additive Weibull Model for prediction.
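
The additive construction amounts to summing the two fitted Weibull hazards (equivalently, multiplying the two reliabilities). A minimal sketch with assumed parameters, one early-life term standing in for the warranty fit and one wear-out term for the FEA fatigue fit:

```python
# Hedged sketch: an additive Weibull model in the sense the abstract
# describes -- one Weibull term standing in for the warranty fit, one for the
# FEA fatigue fit, with hazards summed. Parameters are illustrative.
import numpy as np

def weibull_hazard(t, beta, eta):
    return (beta / eta) * (t / eta) ** (beta - 1)

def additive_hazard(t, p_warranty, p_fatigue):
    return weibull_hazard(t, *p_warranty) + weibull_hazard(t, *p_fatigue)

def reliability(t, p_warranty, p_fatigue):
    # R(t) = exp(-H1(t) - H2(t)) with Weibull cumulative hazards
    (b1, e1), (b2, e2) = p_warranty, p_fatigue
    return np.exp(-(t / e1) ** b1 - (t / e2) ** b2)

warranty = (0.8, 5000.0)    # beta < 1: early-life failures (assumed)
fatigue = (3.5, 20000.0)    # beta > 1: wear-out from the S-N fit (assumed)

for t in (1000.0, 10000.0, 30000.0):
    print(t, additive_hazard(t, warranty, fatigue),
          reliability(t, warranty, fatigue))
```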

Keywords: bathtub curve, fatigue, FEA, reliability, warranty, Weibull

Procedia PDF Downloads 73
1541 High Performance Fibre Reinforced Alkali Activated Slag Concrete

Authors: A. Sivakumar, K. Srinivasan

Abstract:

The main objective of the study is to produce slag-based geopolymer concrete with the addition of an alkali activator. Test results indicated that the reaction of silicates in slag is based on the reaction potential of sodium hydroxide and the formation of alumino-silicates. The study also comprises an evaluation of the efficiency of the polymer reaction in terms of the strength gain properties of different geopolymer mixtures. Geopolymer mixture proportions were designed for different binder to total aggregate ratios (0.3 and 0.45) and fine to coarse aggregate ratios (0.4 and 0.8). Geopolymer concrete specimens cast under normal curing conditions reached a maximum 28-day compressive strength of 54.75 MPa. The addition of glued steel fibres at 1.0% Vf in the geopolymer concrete showed reasonable improvements in the compressive strength, split tensile strength and flexural properties of the different geopolymer mixtures. Further, a comparative assessment was made of the different geopolymer mixtures, and the reinforcing effects of steel fibres were investigated in the different concrete matrices.

Keywords: accelerators, alkali activators, geopolymer, hot air oven curing, polypropylene fibres, slag, steam curing, steel fibres

Procedia PDF Downloads 273
1540 Intrusion Detection System Using Linear Discriminant Analysis

Authors: Zyad Elkhadir, Khalid Chougdali, Mohammed Benattou

Abstract:

Most existing intrusion detection systems work on quantitative network traffic data with many irrelevant and redundant features, which makes the detection process time-consuming and inaccurate. Several feature extraction methods, such as linear discriminant analysis (LDA), have been proposed. However, LDA suffers from the small sample size (SSS) problem, which occurs when the number of training samples is small compared with the dimension of the samples. Hence, classical LDA cannot be applied directly to high-dimensional data such as network traffic data. In this paper, we propose two solutions to the SSS problem for LDA and apply them to a network IDS. The first method reduces the dimension of the original data using principal component analysis (PCA) and then applies LDA. The second solution uses the pseudo-inverse to avoid the singularity of the within-class scatter matrix caused by the SSS problem. After that, the KNN algorithm is used for the classification process. We have chosen two well-known datasets, KDDcup99 and NSL-KDD, for testing the proposed approaches. Results showed that the classification accuracy of the (PCA+LDA) method clearly outperforms the pseudo-inverse LDA method when large training data are available.
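
A compact sketch of the two workarounds on synthetic data standing in for KDDcup99/NSL-KDD features: (i) PCA reduction followed by LDA, (ii) a two-class pseudo-inverse LDA projection, each followed by KNN. Dimensions and the random data are assumptions:

```python
# Hedged sketch of the two SSS workarounds the abstract compares. Random
# data stands in for KDDcup99 / NSL-KDD features; dimensions are assumed.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(60, 200))            # few samples, many features (SSS)
y = rng.integers(0, 2, size=60)

# (i) PCA + LDA + KNN
Z = PCA(n_components=30).fit_transform(X)
lda = LinearDiscriminantAnalysis().fit(Z, y)
knn = KNeighborsClassifier(n_neighbors=5).fit(lda.transform(Z), y)
print("PCA+LDA train acc:", knn.score(lda.transform(Z), y))

# (ii) pseudo-inverse LDA: w = pinv(Sw) @ (mu0 - mu1) for two classes
mu0, mu1 = X[y == 0].mean(axis=0), X[y == 1].mean(axis=0)
Sw = sum(np.cov(X[y == c], rowvar=False) * (np.sum(y == c) - 1)
         for c in (0, 1))                 # within-class scatter matrix
w = np.linalg.pinv(Sw) @ (mu0 - mu1)
proj = (X @ w).reshape(-1, 1)
knn2 = KNeighborsClassifier(n_neighbors=5).fit(proj, y)
print("pinv-LDA train acc:", knn2.score(proj, y))
```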

Keywords: LDA, Pseudoinverse, PCA, IDS, NSL-KDD, KDDcup99

Procedia PDF Downloads 228
1539 Low Overhead Dynamic Channel Selection with Cluster-Based Spatial-Temporal Station Reporting in Wireless Networks

Authors: Zeyad Abdelmageid, Xianbin Wang

Abstract:

Choosing the operational channel for a WLAN access point (AP) has traditionally been a static channel assignment process initiated by the user during AP deployment, which fails to cope with the dynamic conditions of the assigned channel at the station side afterwards. However, the dramatically growing number of Wi-Fi APs and stations operating in the unlicensed band has led to dynamic, distributed, and often severe interference. This highlights the urgent need for the AP to dynamically select the best overall channel of operation for the basic service set (BSS) by considering the distributed and changing channel conditions at all stations. Consequently, dynamic channel selection algorithms which consider feedback from the station side have been developed. Despite the significant performance improvement, existing channel selection algorithms suffer from very high feedback overhead. Feedback latency from the STAs, due to the high overhead, can cause the eventually selected channel to no longer be optimal for operation, given the dynamic sharing nature of the unlicensed band. This has inspired us to develop our own dynamic channel selection algorithm with reduced overhead through the proposed low-overhead, cluster-based station reporting mechanism. The main idea behind cluster-based station reporting is the observation that STAs which are very close to each other tend to have very similar channel conditions. Instead of requesting each STA to report on every candidate channel, causing high overhead, the AP divides STAs into clusters and then assigns each STA in each cluster one channel to report feedback on. With a proper design of the cluster-based reporting, the AP does not lose any information about the channel conditions at the station side while reducing feedback overhead. The simulation results show equal and, at times, better performance with a fraction of the overhead. We believe that this algorithm has great potential for the design of future dynamic channel selection algorithms with low overhead.
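
A minimal sketch of the reporting idea: DBSCAN (named in the keywords) groups stations by position, and the candidate channels are then spread across each cluster's members so that no station reports on more than a few channels. Coordinates, eps and the channel list are illustrative assumptions:

```python
# Hedged sketch: cluster STAs by position with DBSCAN, then assign each
# cluster member one candidate channel to report on. All values are
# illustrative assumptions, not the paper's simulation setup.
import numpy as np
from sklearn.cluster import DBSCAN

rng = np.random.default_rng(1)
# Three spatial groups of 8 stations each, coordinates in metres
sta_xy = np.vstack([rng.normal(c, 1.0, size=(8, 2)) for c in (0, 20, 40)])

labels = DBSCAN(eps=3.0, min_samples=3).fit_predict(sta_xy)
channels = [1, 6, 11, 36, 40, 44]         # candidate channels (assumed)

assignments = {}                          # station index -> channel to report on
for cluster in set(labels) - {-1}:        # -1 marks DBSCAN noise points
    members = np.where(labels == cluster)[0]
    for i, sta in enumerate(members):
        assignments[int(sta)] = channels[i % len(channels)]

print(assignments)
```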

Keywords: channel assignment, Wi-Fi networks, clustering, DBSCAN, overhead

Procedia PDF Downloads 121
1538 Remediation Activities in Bagnoli Superfund Site: An Italian Case of Study

Authors: S. Bellagamba, S. Malinconico, P. De Simone, F. Paglietti

Abstract:

Until the 1990s, Italy was among the world's leading producers of raw asbestos fibres and Asbestos Containing Materials (ACM), and one of the most contaminated countries in Europe. To reduce asbestos-related health effects, Italy has adopted many laws and regulations regarding exposure thresholds, limits, and remediation tools. The Italian Environmental Ministry (MASE) has identified 42 Italian Superfund sites, 11 of which are mainly contaminated by asbestos. The highest levels of exposure occur during remediation activities in the 42 Superfund sites and during the management of asbestos-containing waste in landfills, which requires specific procedures. INAIL-DIT plays a role as MASE's scientific consultant on issues concerning pollution, remediation, and Asbestos Containing Waste (ACW) management. The aim is to identify the best emergency safety measures and to suggest specific best practices for remediation through occupational on-site monitoring and laboratory analysis. Moreover, the aim of the INAIL research is to test the available technologies for working activities and analytical methodologies. This paper describes the remediation of the Bagnoli industrial facility (Naples), an Eternit factory which produced asbestos-cement products. The remediation has been analyzed considering a first phase focused on the demolition of structures and plants, and a second phase covering the characterization, screening, removal, and disposal of polluted soils. The project planned the complete removal of all the asbestos dispersed in the soil and subsoil and the recovery of the clean fraction. This work highlights the remediation techniques used and the prevention measures provided for the protection of workers and daily-life areas. Considering the high number of asbestos-cement factories in the world, this study can serve as an important reference for similar situations at the European or international scale.

Keywords: safety, asbestos, workers, contaminated sites, hazardous waste

Procedia PDF Downloads 88
1537 Water Re-Use Optimization in a Sugar Platform Biorefinery Using Municipal Solid Waste

Authors: Leo Paul Vaurs, Sonia Heaven, Charles Banks

Abstract:

Municipal solid waste (MSW) is a virtually unlimited source of lignocellulosic material in the form of a waste paper/cardboard mixture which can be converted into fermentable sugars via cellulolytic enzyme hydrolysis in a biorefinery. The extraction of the lignocellulosic fraction and its preparation, however, are energy- and water-demanding processes. The waste water generated is a rich organic liquor with a high Chemical Oxygen Demand that can be partially cleaned while generating biogas in an Upflow Anaerobic Sludge Blanket (UASB) bioreactor and further re-used in the process. In this work, an experiment was designed to determine the critical contaminant concentrations in water affecting either anaerobic digestion or enzymatic hydrolysis by simulating multiple water re-circulations. It was found that re-using the same water more than 16.5 times could decrease the hydrolysis yield by up to 65% and led to complete disaggregation of the granules. Due to the complexity of the water stream, the contaminant(s) responsible for the performance decrease could not be identified, but the causes were suspected to be sodium, potassium and lipid accumulation for the anaerobic digestion (AD) process, and heavy metal build-up for enzymatic hydrolysis. The experimental data were incorporated into a Water Pinch technology-based model that was used to optimize water re-utilization in the modelled system, reducing fresh water requirements and wastewater generation while ensuring all processes performed at optimal levels. Multiple scenarios were modelled in which sub-process requirements were evaluated in terms of importance, operational costs and impact on the CAPEX. The best compromise between water usage, AD and enzymatic hydrolysis yield was determined for each assumed level of contaminant degradation by the anaerobic granules. Results from the model will be used to build the first MSW-based biorefinery in the USA.

Keywords: anaerobic digestion, enzymatic hydrolysis, municipal solid waste, water optimization

Procedia PDF Downloads 321
1536 Study of Laminar Convective Heat Transfer, Friction Factor, and Pumping Power Advantage of Aluminum Oxide-Water Nanofluid through a Channel

Authors: M. Insiat Islam Rabby, M. Mahbubur Rahman, Eshanul Islam, A. K. M. Sadrul Islam

Abstract:

A numerical analysis of laminar convective heat transfer of aluminum oxide (Al₂O₃)-water nanofluid in the developed region between two parallel plates is presented in this work. The single-phase mass, momentum and energy equations are solved to second order by the finite volume method using the ANSYS FLUENT 16 software. The distance between the two parallel plates is 4 mm and the length is 600 mm. Aluminum oxide (Al₂O₃) is used as the nanoparticle and water as the base/working fluid. Volume concentrations of 1% to 5% of Al₂O₃ nanoparticles in water are simulated over a Reynolds number range of 500 to 1100, at a constant wall heat flux of 500 W/m². The results reveal that, with increasing Reynolds number, the Nusselt number and heat transfer coefficient increase linearly and the friction factor decreases linearly in the developed region, for both water and the Al₂O₃-H₂O nanofluid. Increasing the volume fraction of the Al₂O₃-H₂O nanofluid from 1% to 5% increased the Nusselt number by 0.7% to 7.32% and the heat transfer coefficient by 7.14% to 31.5%, while the friction factor increased only slightly, by 0.1% to 4%, at constant Reynolds number compared to pure water. At a constant heat transfer coefficient of 700 W/m²·K, pumping power advantages of 20% for 1% volume concentration and 62% for 3% volume concentration of nanofluid were achieved compared to pure water.
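
The reported quantities follow from standard definitions: Nu = hD_h/k, the Darcy pressure drop, and pumping power as flow rate times pressure drop. A worked sketch with the abstract's geometry (4 mm gap, 600 mm length) and assumed velocity, friction factor and unit channel width:

```python
# Hedged sketch: the definitions behind the reported quantities. Water
# properties are textbook values; velocity, friction factor and the 1 m
# channel width are assumptions, not the paper's settings.
rho, k_f = 998.0, 0.6          # water density (kg/m3), conductivity (W/m.K)
gap, length, width = 0.004, 0.6, 1.0
d_h = 2 * gap                  # hydraulic diameter of wide parallel plates

h = 700.0                      # heat transfer coefficient, W/m2.K
nu = h * d_h / k_f             # Nusselt number
print(f"Nu = {nu:.2f}")

u = 0.1                        # mean velocity, m/s (assumed)
f = 0.06                       # Darcy friction factor (assumed)
dp = f * (length / d_h) * rho * u**2 / 2   # pressure drop, Pa
q = u * gap * width                        # volumetric flow rate, m3/s
print(f"pumping power = {q * dp * 1e3:.3f} mW")
```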

Keywords: convective heat transfer, pumping power, constant heat flux, nanofluid, nanoparticles, volume concentration, thermal conductivity

Procedia PDF Downloads 160
1535 Trinary Affinity—Mathematic Verification and Application (1): Construction of Formulas for the Composite and Prime Numbers

Authors: Liang Ming Zhong, Yu Zhong, Wen Zhong, Fei Fei Yin

Abstract:

Trinary affinity is a description of existence: every object exists as it is known and spoken of, in a system of 2 differences (denoted dif₁, dif₂) and 1 similarity (Sim), equivalently expressed as dif₁ / Sim / dif₂ and kn / 0 / tkn (kn = the known, tkn = the 'to be known', 0 = the zero point of knowing). They are mathematically verified and illustrated in this paper by the arrangement of all integers onto 3 columns, where each number exists as a difference in relation to another number as another difference, with the 2 difs arbitrated by a third number as the Sim, resulting in a trinary affinity or trinity of 3 numbers, of which one is the known, another the 'to be known', and the third the zero (0) from which both the kn and tkn are measured and specified. Consequently, any number is horizontally specified either as 3n, or as '3n – 1' or '3n + 1', and vertically as 'Cn + c', so that any number occurs at the intersection of its X and Y axes and is represented by its X and Y coordinates, as any point on Earth's surface is by its latitude and longitude. Technically, i) primes are viewed and treated as progenitors, and composites as descending from them, forming families of composites, each capable of being measured and specified from its own zero, called in this paper the realistic zero (denoted 0r, as contrasted with the mathematic zero, 0m), which corresponds to the constant c and whose nature separates the composite and prime numbers; and ii) any number is considered as having a magnitude as well as a position, so that a number is verified as a prime first by referring to its descriptive formula and then by making sure that no composite number can possibly occur at its position, by dividing it by factors provided by the composite number formulas. The paper consists of 3 parts: 1) a brief explanation of the trinary affinity of things, 2) the 8 formulas that represent ALL the primes, and 3) families of composite numbers, each represented by a formula. A composite number family is described as 3n + f₁·f₂. Since there are infinitely many composite number families, to verify the primality of a large probable prime we have to divide it by several or many values of f₁ from a range of composite number formulas, a procedure that is as laborious as it is the surest way of verifying a large number's primality. (So, it is possible to substitute planned division for trial division.)
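
A small sketch of the horizontal classification and of primality checking by division against candidate composite-family factors, reduced here to plain trial division; this illustrates the described scheme and is not the authors' own formulas:

```python
# Hedged sketch: classify integers into the three columns 3n, 3n - 1, 3n + 1,
# and check primality by dividing against candidate factors f1, a simplified
# stand-in for the paper's "planned division" over composite-family formulas.
def column(m: int) -> str:
    return {0: "3n", 1: "3n + 1", 2: "3n - 1"}[m % 3]

def is_prime_by_division(m: int) -> bool:
    if m < 2:
        return False
    f1 = 2
    while f1 * f1 <= m:
        if m % f1 == 0:        # m lies in a composite family with factor f1
            return False
        f1 += 1
    return True

for m in range(2, 16):
    print(m, column(m), "prime" if is_prime_by_division(m) else "composite")
```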

Keywords: trinary affinity, difference, similarity, realistic zero

Procedia PDF Downloads 212
1534 Application of Customized Bioaugmentation Inocula to Alleviate Ammonia Toxicity in CSTR Anaerobic Digesters

Authors: Yixin Yan, Miao Yan, Irini Angelidaki, Ioannis Fotidis

Abstract:

Ammonia, which derives from the degradation of urea and protein substrates, is the major toxicant of commercial anaerobic digestion reactors, causing losses of up to one third of their practical biogas production, which reflects directly on the overall revenue of the plants. The current experimental work aims to alleviate ammonia inhibition in the anaerobic digestion (AD) process by developing an innovative bioaugmentation method based on ammonia-tolerant methanogenic consortia. The ammonia-tolerant consortia were cultured in batch reactors and immobilized together with biochar in agar (customized inocula). Three continuous stirred-tank reactors (CSTR), fed with the organic fraction of municipal solid waste at a hydraulic retention time of 15 days and operated under thermophilic (55 °C) conditions, were assessed. After an ammonia shock of 4 g NH₄⁺-N L⁻¹, the customized inocula were bioaugmented into the CSTR reactors to alleviate the effect of ammonia toxicity on the AD process. The recovery rates of methane production and methanogenic activity will be assessed to evaluate the bioaugmentation performance, while 16S rRNA gene sequencing will be used to reveal the changes in the microbial community through bioaugmentation. At the microbial level, the microbial community structures of the reactors will be analysed to uncover the mechanism of bioaugmentation. Changes in hydrogen formation potential will be used to predict direct interspecies electron transfer (DIET) between the ammonia-tolerant methanogens and syntrophic bacteria. This experimental work is expected to create bioaugmentation inocula that are easy to obtain, transport, handle and introduce into AD reactors to efficiently alleviate ammonia toxicity, without altering any of the other operational parameters, including the ammonia-rich feedstocks.

Keywords: artisanal fishing waste, acidogenesis, volatile fatty acids, pH, inoculum/substrate ratio

Procedia PDF Downloads 128