Search results for: hardware testing
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 3453

2613 MapReduce Logistic Regression Algorithms with RHadoop

Authors: Byung Ho Jung, Dong Hoon Lim

Abstract:

Logistic regression is a statistical method for analyzing a dataset in which one or more independent variables determine an outcome. It is used extensively in numerous disciplines, including the medical and social sciences. In this paper, we address the problem of estimating the parameters of a logistic regression model in the MapReduce framework with RHadoop, which integrates the R and Hadoop environments and is applicable to large-scale data. There are three learning algorithms for logistic regression, namely the gradient descent method, the cost minimization method and the Newton-Raphson method. The Newton-Raphson method does not require a learning rate, while the gradient descent and cost minimization methods need a manually chosen learning rate. The experimental results demonstrated that our learning algorithms using RHadoop scale well and efficiently process large data sets on commodity hardware. We also compared the performance of the Newton-Raphson method with the gradient descent and cost minimization methods. The results showed that the Newton-Raphson method was the most robust on all data tested.
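As a minimal illustration of the two update rules compared above (a generic sketch with hypothetical toy data, not the authors' RHadoop implementation), the snippet below contrasts a gradient descent step, which needs a hand-picked learning rate, with a Newton-Raphson step, which does not:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def gradient_descent_step(w, X, y, lr=0.1):
    # Gradient of the negative log-likelihood; requires a learning rate lr.
    p = sigmoid(X @ w)
    grad = X.T @ (p - y) / len(y)
    return w - lr * grad

def newton_raphson_step(w, X, y):
    # Newton-Raphson also uses the Hessian, so no learning rate is needed.
    p = sigmoid(X @ w)
    grad = X.T @ (p - y)
    W = p * (1 - p)                      # per-sample weights
    H = X.T @ (X * W[:, None])           # Hessian of the log-likelihood
    return w - np.linalg.solve(H, grad)

# Hypothetical toy data: 100 samples, 3 features plus an intercept column.
rng = np.random.default_rng(0)
X = np.hstack([np.ones((100, 1)), rng.normal(size=(100, 3))])
y = (X @ np.array([0.5, 1.0, -2.0, 0.3]) + rng.normal(size=100) > 0).astype(float)
w = np.zeros(4)
for _ in range(20):
    w = newton_raphson_step(w, X, y)
```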

Keywords: big data, logistic regression, MapReduce, RHadoop

Procedia PDF Downloads 261
2612 Application of a Model-Free Artificial Neural Networks Approach for Structural Health Monitoring of the Old Lidingö Bridge

Authors: Ana Neves, John Leander, Ignacio Gonzalez, Raid Karoumi

Abstract:

Systematic monitoring and inspection are needed to assess the present state of a structure and predict its future condition. If an irregularity is noticed, repair actions may take place, and the adequate intervention will most probably reduce future maintenance costs, minimize downtime and increase safety by avoiding the failure of the structure as a whole or of one of its structural parts. For this to be possible, decisions must be made at the right time, which implies using systems that can detect abnormalities at an early stage. In this sense, Structural Health Monitoring (SHM) is seen as an effective tool for improving the safety and reliability of infrastructures. This paper explores the decision-making problem in SHM regarding the maintenance of civil engineering structures. The aim is to assess the present condition of a bridge based exclusively on measurements, using the method suggested in this paper, such that action is taken coherently with the information made available by the monitoring system. Artificial Neural Networks are trained, and their ability to predict structural behavior is evaluated in the light of a case study where acceleration measurements are acquired from a bridge located in Stockholm, Sweden. This relatively old bridge is still in operation despite experiencing obvious problems already reported in previous inspections. The prediction errors provide a measure of the accuracy of the algorithm and are subjected to further investigation, which comprises concepts like clustering analysis and statistical hypothesis testing. These make it possible to interpret the obtained prediction errors, draw conclusions about the state of the structure and thus support decision making regarding its maintenance.
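A minimal sketch of the model-free idea described above (hypothetical signal and feature layout, not the authors' code): a neural network is trained to predict the next acceleration sample from its recent history, and the prediction errors are then clustered to flag anomalous behavior.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.cluster import KMeans

# Hypothetical acceleration signal standing in for bridge measurements.
rng = np.random.default_rng(1)
t = np.arange(5000) / 100.0
acc = np.sin(2 * np.pi * 2.0 * t) + 0.1 * rng.normal(size=t.size)

# Autoregressive features: predict sample k from the previous 20 samples.
lag = 20
X = np.lib.stride_tricks.sliding_window_view(acc[:-1], lag)
y = acc[lag:]

model = MLPRegressor(hidden_layer_sizes=(32,), max_iter=500, random_state=0)
model.fit(X[:4000], y[:4000])

# Prediction errors on unseen data; clusters of large errors flag anomalies.
errors = y[4000:] - model.predict(X[4000:])
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(
    np.abs(errors).reshape(-1, 1))
```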

Keywords: artificial neural networks, clustering analysis, model-free damage detection, statistical hypothesis testing, structural health monitoring

Procedia PDF Downloads 194
2611 Static Application Security Testing Approach for Non-Standard Smart Contracts

Authors: Antonio Horta, Renato Marinho, Raimir Holanda

Abstract:

Considered an evolution of the blockchain, the Ethereum platform, besides allowing transactions in its cryptocurrency named Ether, allows the programming of decentralised applications (DApps) and smart contracts. However, this functionality has raised new types of threats, and the exploitation of smart contract vulnerabilities has caused companies to experience big losses. This research intends to figure out the number of contracts that are at risk of being drained. Through a deep investigation, more than two hundred thousand smart contracts currently available on the Ethereum platform were scanned, and the amount of money at risk was estimated. The experiment was based on a query run on Google BigQuery in July 2022, which returned 50,707,133 contracts published on the Ethereum platform. After applying the filtering criteria, the experiment got 430,584 smart contracts to download and analyse. The filtering criteria consisted of filtering out ERC20 and ERC721 contracts, contracts without transactions, and contracts without balance. Of these 430,584 selected smart contracts, only 268,103 had source code published on Etherscan; however, using a hashing process, we discovered that there were duplicated contracts. After removing the duplicates, the process ended up with 20,417 source codes, which were analysed using the open-source SAST tool SmartBugs with the Oyente and Securify algorithms. In the end, there was nearly $100,000 at risk of being drained from the potentially vulnerable smart contracts. It is important to note that the tools used in this study may generate false positives, which may affect the number of vulnerable contracts. To address this point, our next step in this research is to develop an application that tests each contract in a parallel environment to verify the vulnerability. Finally, this study aims to alert users and companies to the risk of not properly creating and analysing their smart contracts before publishing them on the platform. Like any other application, smart contracts are at risk of having vulnerabilities which, in this case, may result in direct financial losses.
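The deduplication step described above can be illustrated with a short sketch (hypothetical file layout and helper name, not the authors' pipeline): each contract source is hashed and one representative per digest is kept.

```python
import hashlib
from pathlib import Path

def dedupe_sources(source_dir: str) -> dict[str, Path]:
    """Map each unique SHA-256 digest to one representative source file."""
    unique: dict[str, Path] = {}
    for path in Path(source_dir).glob("*.sol"):
        # Normalize whitespace so trivially reformatted copies collapse too.
        text = " ".join(path.read_text(encoding="utf-8", errors="ignore").split())
        digest = hashlib.sha256(text.encode("utf-8")).hexdigest()
        unique.setdefault(digest, path)
    return unique

# e.g. contracts = dedupe_sources("contracts/")  # hypothetical directory
```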

Keywords: blockchain, reentrancy, static application security testing, smart contracts

Procedia PDF Downloads 78
2610 Dynamic Test for Stability of Columns in Sway Mode

Authors: Elia Efraim, Boris Blostotsky

Abstract:

Testing of columns in sway mode is performed in order to determine the maximal allowable load, limited by plastic deformations of the columns or their end connections, and the critical load, limited by column stability. The motivation to determine an accurate value of the critical force is its use as follows: - the critical load is the maximal allowable load for a given column configuration and can be used as a criterion of perfection; - it is used in calculations prescribed by standards for the design of structural elements under the combined action of compression and bending; - it is used for verification of theoretical stability analyses at various column end conditions. In the present work, a new non-destructive method for determining the critical buckling load of columns in sway mode is proposed. The method allows performing measurements during tests under loads that exceed the column's critical load without losing its stability. The possibility of such loading is achieved by the structure of the loading system. The system is built as a frame with a rigid girder; one of the columns is the tested column and the other is an additional two-hinged strut. Loading of the frame is carried out by a flexible traction element attached to the girder. The load applied to the tested column can reach values that exceed the critical load by a suitable choice of parameters of the traction element and the additional strut. The system's lateral stiffness and the column's critical load are obtained by the dynamic method. The experiment planning and the comparison between the experimental and theoretical values were performed based on the developed dependency of the lateral stiffness of the system on the vertical load, taking into account the semi-rigid connections of the column's ends. Agreement between the obtained results was established. The method can be used for testing real full-size columns in industrial conditions.
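The dependency of lateral stiffness on vertical load that underpins the dynamic method can be summarized, in its simplest textbook linearization (a sketch assuming idealized end conditions, not the authors' exact semi-rigid expression), as:

```latex
k(P) \approx k_0\left(1 - \frac{P}{P_{cr}}\right), \qquad
f(P) = \frac{1}{2\pi}\sqrt{\frac{k(P)}{m}}
```

Measuring the sway natural frequency f(P) at several load levels thus traces k(P), and extrapolating to k(P) = 0 identifies the critical load P_cr without ever destabilizing the tested column.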

Keywords: buckling, columns, dynamic method, end-fixity factor, sway mode

Procedia PDF Downloads 343
2609 Evaluation of the Improve Vacuum Blood Collection Tube for Laboratory Tests

Authors: Yoon Kyung Song, Seung Won Han, Sang Hyun Hwang, Do Hoon Lee

Abstract:

Laboratory tests are a significant part of the diagnosis, prognosis, and treatment of diseases. Blood collection is a simple process but can be a potential cause of pre-analytical errors. The vacuum blood collection tubes used to collect and store blood specimens are necessary for accurate test results. The purpose of this study was to validate the Improve serum separator tube (SST) (Guanzhou Improve Medical Instruments Co., Ltd, China) for routine clinical chemistry laboratory testing. Blood specimens were collected from 100 volunteers in three different serum vacuum tubes (Greiner SST, Becton Dickinson SST, Improve SST). The specimens were evaluated for 16 routine chemistry tests using a TBA-200FR NEO (Toshiba Medical Co., Japan). The results were statistically analyzed by paired t-test and Bland-Altman plot. For the stability test, the initial results for each tube were compared with the results of specimens preserved for 72 hours. Clinical acceptability was evaluated against the biological variation data bank of Ricos. Paired t-test analysis revealed that AST, ALT, K and Cl gave statistically identical results, but calcium (CA), phosphorus (PHOS), glucose (GLU), BUN, uric acid (UA), cholesterol (CHOL), total protein (TP), albumin (ALB), total bilirubin (TB), ALP, creatinine (CRE) and sodium (NA) differed (P < 0.05) between the Improve SST and the Greiner SST. Likewise, CA, PHOS, TP, TB, AST, ALT, NA, K and Cl gave statistically identical results, but GLU, BUN, UA, CHOL, ALB, ALP and CRE differed between the Improve SST and the Becton Dickinson SST. All statistically different cases were clinically acceptable according to the biological variation data bank of Ricos. The Improve SST tubes showed satisfactory results compared with the Greiner SST and Becton Dickinson SST. We conclude that the tubes are acceptable for routine clinical chemistry laboratory testing.
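A minimal sketch of the two statistical comparisons named above (hypothetical paired measurement arrays, not the study's data):

```python
import numpy as np
from scipy import stats

# Hypothetical paired glucose results (mmol/L) from two tube types.
rng = np.random.default_rng(2)
tube_a = rng.normal(5.5, 0.8, size=100)
tube_b = tube_a + rng.normal(0.05, 0.1, size=100)

# Paired t-test: do the two tubes give different mean results?
t_stat, p_value = stats.ttest_rel(tube_a, tube_b)

# Bland-Altman quantities: bias and 95% limits of agreement.
diff = tube_b - tube_a
bias = diff.mean()
loa = (bias - 1.96 * diff.std(ddof=1), bias + 1.96 * diff.std(ddof=1))
print(f"p={p_value:.3f}, bias={bias:.3f}, limits of agreement={loa}")
```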

Keywords: blood collection, Guanzhou Improve, SST, vacuum tube

Procedia PDF Downloads 229
2608 Fe Modified Tin Oxide Thin Film Based Matrix for Reagentless Uric Acid Biosensing

Authors: Kashima Arora, Monika Tomar, Vinay Gupta

Abstract:

Biosensors have found potential applications ranging from environmental testing and biowarfare agent detection to clinical testing, health care, and cell analysis. This is driven in part by the desire to decrease the cost of health care and to obtain precise information about the health status of patients more quickly, through the development of various biosensors that have become increasingly prevalent in clinical testing and point-of-care testing for a wide range of biological elements. Uric acid is an important byproduct in the human body, and a number of pathological disorders are related to its high concentration. In the past few years, rapid growth in the development of new materials and improvements in sensing techniques have led to the evolution of advanced biosensors. In this context, metal oxide thin film based matrices, due to their biocompatible nature, strong adsorption ability, high isoelectric point (IEP) and abundance in nature, have become the materials of choice for recent technological advances in biotechnology. In the past few years, wide band-gap metal oxide semiconductors including ZnO, SnO₂ and CeO₂ have gained much attention as matrices for the immobilization of various biomolecules. Tin oxide (SnO₂), a wide band gap semiconductor (Eg = 3.87 eV), despite having multifunctional properties for a broad range of applications including transparent electronics, gas sensors, acoustic devices and UV photodetectors, has not been explored much for biosensing purposes. To realize a high-performance miniaturized biomolecular electronic device, the rf sputtering technique is considered the most promising for the reproducible growth of good-quality thin films, controlled surface morphology and desired film crystallization with improved electron transfer properties. Recently, iron oxide and its composites have been widely used as matrices for biosensing applications, exploiting the electron communication feature of Fe, for the detection of various analytes using urea, hemoglobin, glucose, phenol, L-lactate, H₂O₂, etc. However, to the authors' knowledge, no work has been reported on modifying the electronic properties of SnO₂ by implanting it with a suitable metal (Fe) to induce a redox couple in it and utilizing it for the reagentless detection of uric acid. In the present study, an Fe-implanted SnO₂ based matrix has been utilized for a reagentless uric acid biosensor. Implantation of Fe into the SnO₂ matrix is confirmed by energy-dispersive X-ray spectroscopy (EDX) analysis. Electrochemical techniques have been used to study the response characteristics of the Fe-modified SnO₂ matrix before and after uricase immobilization. The developed uric acid biosensor exhibits a high sensitivity of about 0.21 mA/mM and a linear variation in current response over the concentration range from 0.05 to 1.0 mM of uric acid, besides a long shelf life (~20 weeks). The Michaelis-Menten kinetic parameter (Km) is found to be relatively very low (0.23 mM), which indicates the high affinity of the fabricated bioelectrode towards uric acid (the analyte). Also, the presence of other interferents found in human serum has a negligible effect on the performance of the biosensor. Hence, the obtained results highlight the importance of the implanted Fe:SnO₂ thin film as an attractive matrix for the realization of reagentless biosensors for uric acid.
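The Michaelis-Menten constant quoted above is typically estimated from a calibration curve; a hedged sketch of such a fit (hypothetical calibration points chosen to be roughly consistent with the reported sensitivity and Km, not the paper's data):

```python
import numpy as np
from scipy.optimize import curve_fit

def michaelis_menten(c, i_max, km):
    """Electrochemical response vs. analyte concentration."""
    return i_max * c / (km + c)

# Hypothetical calibration: uric acid concentration (mM) vs current (mA).
conc = np.array([0.05, 0.1, 0.2, 0.4, 0.6, 0.8, 1.0])
current = np.array([0.036, 0.065, 0.10, 0.14, 0.16, 0.17, 0.18])

(i_max, km), _ = curve_fit(michaelis_menten, conc, current, p0=(0.2, 0.2))
print(f"Imax={i_max:.3f} mA, Km={km:.3f} mM")  # small Km -> high affinity
```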

Keywords: Fe implanted tin oxide, reagentless uric acid biosensor, rf sputtering, thin film

Procedia PDF Downloads 171
2607 An Efficient Encryption Scheme Using DWT and Arnold Transforms

Authors: Ali Abdrhman M. Ukasha

Abstract:

Data security is needed in data transmission, storage, and communication. The color image is decomposed into red, green, and blue channels. The blue and green channels are compressed using a 3-level discrete wavelet transform. The Arnold transform is used to change the locations of the red channel pixels as an image scrambling process. Then all these channels are encrypted separately using a key image that has the same size as the original and is generated using private keys and modulo operations. X-OR and modulo operations are performed between the encrypted channel images in order to change the image pixel values. The extracted contours of the recovered color image can be obtained with an accepted level of distortion using the Canny edge detector. Experiments have demonstrated that the proposed algorithm can fully encrypt a 2D color image and completely reconstruct it without any distortion. It is shown that the color image can be protected with a higher security level. The presented method has an easy hardware implementation and is suitable for multimedia protection in real-time applications such as wireless networks and mobile phone services.
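The Arnold-transform scrambling step named above is simple to sketch (a generic cat-map implementation for a square channel, not the authors' exact parameterization):

```python
import numpy as np

def arnold_scramble(channel: np.ndarray, iterations: int) -> np.ndarray:
    """Apply the Arnold cat map (x, y) -> (x + y, x + 2y) mod N repeatedly."""
    n = channel.shape[0]                 # assumes a square N x N channel
    out = channel
    for _ in range(iterations):
        scrambled = np.empty_like(out)
        for x in range(n):
            for y in range(n):
                scrambled[(x + y) % n, (x + 2 * y) % n] = out[x, y]
        out = scrambled
    return out

# The map is periodic, so descrambling is just applying the remaining
# iterations of the period (e.g. period 192 for N = 256).
red = np.arange(64 * 64, dtype=np.uint8).reshape(64, 64)
scrambled = arnold_scramble(red, 5)
```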

Keywords: color image, wavelet transform, edge detector, Arnold transform, lossy image encryption

Procedia PDF Downloads 470
2606 Testing for Endogeneity of Foreign Direct Investment: Implications for Economic Policy

Authors: Liwiusz Wojciechowski

Abstract:

Research background: The current state of knowledge does not give a clear answer to the question of the impact of FDI on productivity. The results of empirical studies are still inconclusive, no matter how extensive and diverse they are in terms of research approaches or groups of countries analyzed. One should also take into account the possibility that FDI and productivity are linked and that there is a bidirectional relationship between them. This issue is particularly important because, on the one hand, FDI can contribute to changes in productivity in the host country, but on the other hand, its level and dynamics may determine whether FDI should be undertaken in a given country. As already mentioned, a two-way relationship between the presence of foreign capital and productivity in the host country should be assumed, taking into consideration the endogenous nature of FDI. Purpose of the article: The overall objective of this study is to determine the causality between foreign direct investment and total factor productivity in the host country in terms of differing relative absorptive capacity across countries. In the classic sense, causality among variables is not always obvious and requires testing, which would facilitate proper specification of FDI models. The aim of this article is to study the endogeneity of selected macroeconomic variables commonly used in FDI models in the case of the Visegrad countries, the main recipients of FDI in CEE. The findings may be helpful in determining the structure of the actual relationship between variables, in appropriate model estimation and in forecasting, as well as in economic policymaking. Methodology/methods: Panel and time-series data techniques, including the GMM estimator, VEC models and causality tests, were utilized in this study. Findings & value added: The obtained results allow us to confirm the hypothesis of bi-directional causality between FDI and total factor productivity. Although the results differ among countries and levels of data aggregation, the implications may be useful for policymakers in designing policies for attracting foreign capital.
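One of the causality tests mentioned above can be sketched as follows (hypothetical two-variable series standing in for FDI and TFP, not the study's dataset):

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.stattools import grangercausalitytests

# Hypothetical quarterly series: TFP growth partly driven by lagged FDI.
rng = np.random.default_rng(3)
fdi = rng.normal(size=120).cumsum()
tfp = 0.4 * np.roll(fdi, 1) + rng.normal(size=120)
data = pd.DataFrame({"tfp": tfp[1:], "fdi": fdi[1:]})

# Tests whether lags of the second column help predict the first column.
results = grangercausalitytests(data[["tfp", "fdi"]], maxlag=4)
```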

Keywords: endogeneity, foreign direct investment, multi-equation models, total factor productivity

Procedia PDF Downloads 191
2605 [Keynote] Implementation of Quality Control Procedures in Radiotherapy CT Simulator

Authors: B. Petrović, L. Rutonjski, M. Baucal, M. Teodorović, O. Čudić, B. Basarić

Abstract:

Purpose/Objective: Radiotherapy treatment planning requires the use of a CT simulator in order to acquire CT images. The overall performance of the CT simulator determines the quality of the radiotherapy treatment plan and, in the end, the outcome of treatment for every single patient. Therefore, it is strongly advised by international recommendations to set up quality control procedures for every machine involved in the radiotherapy treatment planning process, including the CT scanner/simulator. The overall process requires a number of tests, which are used on a daily, weekly, monthly or yearly basis, depending on the feature tested. Materials/Methods: Two phantoms were used: a dedicated CIRS 062QA phantom, and the QA phantom supplied with the CT simulator. The examined CT simulator was a Siemens Somatom Definition AS Open, dedicated to radiation therapy treatment planning. The CT simulator has built-in software, which enables fast and simple evaluation of CT QA parameters using the phantom provided with the CT simulator. In addition, the recommendations contain further tests, which were done with the CIRS phantom. Also, legislation on ionizing radiation protection requires CT testing at defined intervals. Taking into account the requirements of the law, the built-in tests of the CT simulator, and international recommendations, the institutional QC programme for the CT simulator was defined and implemented. Results: The CT simulator parameters evaluated in the study were the following: CT number accuracy, field uniformity, complete CT-to-ED conversion curve, spatial and contrast resolution, image noise, slice thickness, and patient table stability. The following limits were established and implemented: CT number accuracy within +/- 5 HU of the value at commissioning; field uniformity within +/- 10 HU in selected ROIs; the complete CT-to-ED curve for each tube voltage must comply with the curve obtained at commissioning, with deviations of not more than 5%; spatial and contrast resolution tests must comply with the tests obtained at commissioning, otherwise the machine requires service; the result of the image noise test must fall within 20% of the baseline value; slice thickness must meet manufacturer specifications; and patient table stability under longitudinal transfer of the loaded table must not show more than 2 mm of vertical deviation. Conclusion: The implemented QA tests gave an overall basic understanding of the CT simulator's functionality and its clinical effectiveness in radiation treatment planning. The legal requirement for the clinic is to set up its own QA programme with minimum testing, but it remains the user's decision whether additional testing, as recommended by international organizations, will be implemented so as to improve the overall quality of the radiation treatment planning procedure, since the quality of the CT images used for radiation treatment planning influences the delineation of the tumor and the calculation accuracy of the treatment planning system, and finally the delivery of radiation treatment to the patient.
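The tolerance checks listed above lend themselves to simple automated comparison; a hedged sketch (hypothetical baseline values and function name, not the clinic's software):

```python
# Hypothetical commissioning baselines and daily measurements (HU).
BASELINE = {"water_hu": 0.2, "noise_sd": 5.0}
TOLERANCE_HU = 5.0          # CT number accuracy: +/- 5 HU of commissioning
NOISE_REL_TOL = 0.20        # image noise: within 20% of baseline

def qa_check(measured_hu: float, measured_noise: float) -> dict[str, bool]:
    """Return pass/fail flags for two of the QA limits described above."""
    return {
        "ct_number": abs(measured_hu - BASELINE["water_hu"]) <= TOLERANCE_HU,
        "noise": abs(measured_noise - BASELINE["noise_sd"])
                 <= NOISE_REL_TOL * BASELINE["noise_sd"],
    }

print(qa_check(measured_hu=1.8, measured_noise=5.6))  # both within limits
```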

Keywords: CT simulator, radiotherapy, quality control, QA programme

Procedia PDF Downloads 518
2604 Motor Controller Implementation Using Model Based Design

Authors: Cau Tran, Tu Nguyen, Tien Pham

Abstract:

Model-based design (MBD) is a mathematical and visual technique for addressing design issues in the fields of communications, signal processing, and complex control systems. It is utilized in several automotive, aerospace, industrial, and motion control applications, and virtual models are at the center of its software development process. In this study, the LAT (limited angle torque) motor is modeled in a simulation environment, and the LAT motor control is designed with a cascade structure consisting of a speed and a current control loop. A PID structure serves as the controller. Based on classical design techniques and motor parameters that match the design goals, the PID controller is created for the model. The MBD approach is then used to build the embedded software for motor control. The paper is divided into three distinct sections. The first section introduces the design process and the benefits and drawbacks of the MBD technique. The design of the control software for LAT motors is the main topic of the second section. The experiment's results are the subject of the last section.
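As a minimal sketch of the PID structure referred to above (a generic discrete-time form with hypothetical gains, not the tuned controller from the paper):

```python
class PID:
    """Discrete PID controller: u = Kp*e + Ki*sum(e)*dt + Kd*de/dt."""

    def __init__(self, kp: float, ki: float, kd: float, dt: float):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, setpoint: float, measurement: float) -> float:
        error = setpoint - measurement
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

# Cascade use: the outer speed loop's output is the inner current setpoint.
speed_loop = PID(kp=2.0, ki=0.5, kd=0.01, dt=1e-3)     # hypothetical gains
current_loop = PID(kp=8.0, ki=20.0, kd=0.0, dt=1e-4)   # hypothetical gains
```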

Keywords: model based design, limited angle torque, intellectual property core, hardware description language, controller area network, user datagram protocol

Procedia PDF Downloads 89
2603 Design of Labview Based DAQ System

Authors: Omar A. A. Shaebi, Matouk M. Elamari, Salaheddin Allid

Abstract:

The Information Computing System of Monitoring (ICSM) for the research reactor of the Tajoura Nuclear Research Centre (TNRC) has been out of service since early 1991. According to the regulations, the computer is necessary to operate the reactor up to its maximum power (10 MW). Funding was secured via the IAEA to develop a modern computer-based data acquisition system to replace the old computer. This paper presents the development of a LabVIEW-based data acquisition system allowing automated measurements using National Instruments hardware and its LabVIEW software. The developed system consists of an SCXI-1001 chassis; the chassis houses four SCXI-1100 modules, each of which can handle 32 channels. The chassis is interfaced with the PC using an NI PCI-6023 DAQ card. LabVIEW, developed by National Instruments, is used to run and operate the DAQ system. LabVIEW is a graphical programming environment suited for high-level design; it allows integrating different signal processing components or subsystems within a graphical framework. The results showed the system's capabilities in monitoring variables and acquiring and saving data, as well as the capability of LabVIEW to control the DAQ.

Keywords: data acquisition, LabVIEW, signal conditioning, National Instruments

Procedia PDF Downloads 485
2602 Spiking Behavior in Memristors with Shared Top Electrode Configuration

Authors: B. Manoj Kumar, C. Malavika, E. S. Kannan

Abstract:

The objective of this study is to investigate the switching behavior of two vertically aligned memristors connected by a shared top electrode, a configuration that deviates significantly from the conventional single oxide layer sandwiched between two electrodes. The device is fabricated by bridging copper electrodes with mechanically exfoliated van der Waals metals (specifically tantalum disulfide and tantalum diselenide). The device demonstrates threshold-switching behavior in its I-V characteristics. When the input voltage signal is ramped with voltages below the threshold, the output current shows spiking behavior, resembling integrate-and-fire actions without extra circuitry. We also investigated the self-reset behavior of the device. Using a continuous constant voltage bias, we activated the device to the firing state. After removing the bias and reapplying it shortly afterward, the current returned to its initial state. This indicates that the device can spontaneously return to its resting state. The outcome of this investigation offers a fresh perspective on memristor-based device design and an efficient method for constructing hardware for neuromorphic computing systems.

Keywords: integrate-and-fire, memristor, spiking behavior, threshold switching

Procedia PDF Downloads 45
2601 An Accurate Computation of 2D Zernike Moments via Fast Fourier Transform

Authors: Mohammed S. Al-Rawi, J. Bastos, J. Rodriguez

Abstract:

Object detection and object recognition are essential components of every computer vision system. Despite their high computational complexity and other problems related to numerical stability and accuracy, Zernike moments of 2D images (ZMs) have shown resilience when used in object recognition and have been used in various image analysis applications. In this work, we propose a novel method for computing ZMs via the Fast Fourier Transform (FFT). Notably, this is the first algorithm that can accurately generate ZMs up to extremely high orders, e.g., it can be used to generate ZMs for orders up to 1000 or even higher. Furthermore, the proposed method is also simpler and faster than other methods due to the availability of FFT software and/or hardware. The accuracy and numerical stability of ZMs computed via FFT have been confirmed using the orthogonality property. We also introduce normalization of ZMs with the Neumann factor when the image is embedded in a larger grid, and color image reconstruction based on RGB normalization of the reconstructed images. Astonishingly, higher-order image reconstruction experiments show that the proposed methods are superior, both quantitatively and subjectively, to the q-recursive method.
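For reference, a direct (non-FFT) computation of a single Zernike moment over the unit disk can be sketched as follows; this is the brute-force baseline the fast method improves on, not the paper's FFT algorithm:

```python
import math
import numpy as np

def radial_poly(n: int, m: int, rho: np.ndarray) -> np.ndarray:
    """Zernike radial polynomial R_n^|m|(rho)."""
    m = abs(m)
    out = np.zeros_like(rho)
    for k in range((n - m) // 2 + 1):
        c = ((-1) ** k * math.factorial(n - k)
             / (math.factorial(k)
                * math.factorial((n + m) // 2 - k)
                * math.factorial((n - m) // 2 - k)))
        out += c * rho ** (n - 2 * k)
    return out

def zernike_moment(img: np.ndarray, n: int, m: int) -> complex:
    """Direct summation of Z_nm over pixels inside the unit disk."""
    size = img.shape[0]
    coords = (np.arange(size) - (size - 1) / 2) / (size / 2)
    x, y = np.meshgrid(coords, coords)
    rho, theta = np.hypot(x, y), np.arctan2(y, x)
    mask = rho <= 1.0
    basis = radial_poly(n, m, rho) * np.exp(-1j * m * theta)
    # (n+1)/pi normalization, with a pixel-area factor for the sum.
    return (n + 1) / np.pi * np.sum(img[mask] * basis[mask]) * (2 / size) ** 2

img = np.random.default_rng(4).random((64, 64))
z22 = zernike_moment(img, n=2, m=2)
```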

Keywords: Chebyshev polynomial, Fourier transform, fast algorithms, image recognition, pseudo-Zernike moments, Zernike moments

Procedia PDF Downloads 253
2600 Comparison of FASTMAP and B0 Field Map Shimming for 4T MRI

Authors: Mohan L. Jayatiake, Judd Storrs, Jing-Huei Lee

Abstract:

The optimal MRI resolution relies on a homogeneous magnetic field. However, local susceptibility variations can lead to field inhomogeneities that cause artifacts such as image distortion and signal loss. The effects of local susceptibility variation notoriously increase with magnetic field strength. Active shimming improves homogeneity by applying corrective fields generated from shim coils, but requires calculation of the optimal current for each shim coil. FASTMAP (fast automatic shimming technique by mapping along projections) is an effective technique for finding optimal currents that works well at high field, but it is restricted to shimming spherical regions of interest. The 3D gradient-echo pulse sequence was modified to reduce sensitivity to eddy currents and used to obtain susceptibility field maps at 4T. Measured fields were projected onto first- and second-order spherical harmonic functions corresponding to the shim hardware. A spherical phantom was used to calibrate the shim currents. Susceptibility maps of a volunteer's brain with and without FASTMAP shimming were obtained. Simulations indicate that optimal shim currents derived from the field map may provide better overall shimming of the human brain.
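The projection of a measured field map onto first- and second-order spherical harmonic shim terms can be sketched as a least-squares fit (a generic basis and hypothetical data, not the authors' calibrated hardware terms):

```python
import numpy as np

def shim_basis(x, y, z):
    """First- and second-order real spherical-harmonic shim terms."""
    return np.column_stack([
        np.ones_like(x),              # B0 offset
        x, y, z,                      # first order (linear gradients)
        z**2 - (x**2 + y**2) / 2,     # z2
        x * z, y * z,                 # zx, zy
        x**2 - y**2, x * y,           # x2-y2, xy
    ])

# Hypothetical field-map voxels (positions in cm, field offsets in Hz).
rng = np.random.default_rng(5)
pts = rng.uniform(-5, 5, size=(2000, 3))
field = 3.0 * pts[:, 0] - 1.5 * pts[:, 0] * pts[:, 2] + rng.normal(size=2000)

A = shim_basis(*pts.T)
coeffs, *_ = np.linalg.lstsq(A, field, rcond=None)
# After calibration, the negated coefficients map to corrective shim currents.
```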

Keywords: shimming, high-field, active, passive

Procedia PDF Downloads 490
2599 Dynamic Test for Sway-Mode Buckling of Columns

Authors: Boris Blostotsky, Elia Efraim

Abstract:

Testing of columns in sway mode is performed in order to determine the maximal allowable load, limited by plastic deformations of the columns or their end connections, and the critical load, limited by column stability. The motivation to determine an accurate value of the critical force is its use as follows: - the critical load is the maximal allowable load for a given column configuration and can be used as a criterion of perfection; - it is used in calculations prescribed by standards for the design of structural elements under the combined action of compression and bending; - it is used for verification of theoretical stability analyses at various column end conditions. In the present work, a new non-destructive method for determining the critical buckling load of columns in sway mode is proposed. The method allows performing measurements during tests under loads that exceed the column's critical load without losing its stability. The possibility of such loading is achieved by the structure of the loading system. The system is built as a frame with a rigid girder; one of the columns is the tested column and the other is an additional two-hinged strut. Loading of the frame is carried out by a flexible traction element attached to the girder. The load applied to the tested column can reach values that exceed the critical load by a suitable choice of parameters of the traction element and the additional strut. The system's lateral stiffness and the column's critical load are obtained by the dynamic method. The experiment planning and the comparison between the experimental and theoretical values were performed based on the developed dependency of the lateral stiffness of the system on the vertical load, taking into account the semi-rigid connections of the column's ends. Agreement between the obtained results was established. The method can be used for testing real full-size columns in industrial conditions.

Keywords: buckling, columns, dynamic method, semi-rigid connections, sway mode

Procedia PDF Downloads 303
2598 Software Reliability Prediction Model Analysis

Authors: Lela Mirtskhulava, Mariam Khunjgurua, Nino Lomineishvili, Koba Bakuria

Abstract:

Software reliability prediction provides a great opportunity to measure the software failure rate at any point during system test. A software reliability prediction model provides a technique for improving reliability. Software reliability is a very important factor in estimating overall system reliability, which depends on the individual component reliabilities. It differs from hardware reliability in that it reflects design perfection. The main reason for software reliability problems is the high complexity of software. Various approaches can be used to improve the reliability of software. In this article, we focus on a software reliability model, assuming that there is time redundancy, the value of which (the number of repeated transmissions of basic blocks) can be an optimization parameter. We consider the given mathematical model under the assumption that the system may experience not only irreversible failures but also failures that can be treated as self-repairing and that significantly affect the reliability and accuracy of information transfer. The main task of this paper is to find the time distribution function (DF) of the transmission of an instruction sequence, which consists of a random number of basic blocks. We consider the system software unreliable; the time between adjacent failures has an exponential distribution.
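For reference, the exponential assumption stated above corresponds to the standard distribution function (a textbook form on which the paper's model for a random number of basic blocks builds):

```latex
F(t) = \Pr(T \le t) = 1 - e^{-\lambda t}, \qquad \mathbb{E}[T] = \frac{1}{\lambda}
```

where T is the time between adjacent failures and λ is the failure rate.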

Keywords: exponential distribution, conditional mean time to failure, distribution function, mathematical model, software reliability

Procedia PDF Downloads 452
2597 A Plan of Smart Management for Groundwater Resources

Authors: Jennifer Chen, Pei Y. Hsu, Yu W. Chen

Abstract:

Groundwater resources play a vital role in regional water supply because over 1/3 of the total demand is satisfied by groundwater. Because over-pumping might cause environmental impacts such as land subsidence, sustainable management of groundwater resources is required. In this study, a blueprint of smart management for groundwater resources is proposed and planned. The framework of the smart management can be divided into two major parts, hardware and software. First, an internet of groundwater (IoG), inspired by the internet of things (IoT), is proposed to observe groundwater usage and the associated response, namely groundwater levels. Second, algorithms based on data mining and signal analysis are proposed to achieve the goal of highly efficient management of groundwater. The entire blueprint is a 4-year plan, and this year is the first year. We have finished the installation of 50 flow meters and 17 observation wells. An underground hydrological model is proposed to determine the drawdown caused by the measured pumpages. Besides, an alternative to the flow meter is also proposed to decrease the installation cost of the IoG: an accelerometer with 3G remote transmission is proposed to detect when groundwater pumping is switched on and off.
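The accelerometer-based alternative described above amounts to detecting pump vibration; a hedged sketch of one simple way to do it (RMS thresholding over short windows, with a hypothetical signal and threshold):

```python
import numpy as np

def pump_state(accel: np.ndarray, fs: int, window_s: float = 1.0,
               threshold: float = 0.05) -> np.ndarray:
    """Return a boolean on/off state per window from vibration RMS."""
    n = int(fs * window_s)
    trimmed = accel[: len(accel) // n * n].reshape(-1, n)
    rms = np.sqrt((trimmed ** 2).mean(axis=1))
    return rms > threshold

# Hypothetical 100 Hz signal: pump off for 60 s, then running for 60 s.
fs = 100
quiet = 0.01 * np.random.default_rng(6).normal(size=60 * fs)
running = quiet + 0.2 * np.sin(2 * np.pi * 25 * np.arange(60 * fs) / fs)
states = pump_state(np.concatenate([quiet, running]), fs)
```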

Keywords: groundwater management, internet of groundwater, underground hydrological model, alternative of flow meter

Procedia PDF Downloads 367
2596 Reactivation of Hydrated Cement and Recycled Concrete Powder by Thermal Treatment for Partial Replacement of Virgin Cement

Authors: Gustave Semugaza, Anne Zora Gierth, Tommy Mielke, Marianela Escobar Castillo, Nat Doru C. Lupascu

Abstract:

The generation of construction and demolition waste (CDW) has increased enormously worldwide due to the growing need for the construction, renovation, and demolition of building structures. Several studies have investigated the use of CDW materials in the production of new concrete and reported lower mechanical properties of the resulting concrete. Many other researchers have considered the possibility of using hydrated cement powder (HCP) to replace part of the ordinary Portland cement (OPC), but only very few have investigated the use of recycled concrete powder (RCP) from CDW. The partial replacement of OPC in making new concrete is intended to decrease the CO₂ emissions associated with OPC production. However, the RCP and HCP need treatment to produce new concrete of the required mechanical properties. The thermal treatment method has been proven to improve HCP properties before use. Previous research has stated that for using HCP in concrete, the optimum results are achievable by heating HCP to between 400°C and 800°C. The optimum heating temperature depends on the type of cement used to make the hydrated cement specimens (HCS), the crushing and heating method of the HCP, and the curing method of the rehydrated cement specimens (RCS). This research assessed the quality of the recycled materials using techniques such as X-ray diffraction (XRD), differential scanning calorimetry (DSC) and thermogravimetry (TG), scanning electron microscopy (SEM), and X-ray fluorescence (XRF). The recycled materials were thermally pretreated at different temperatures from 200°C to 1000°C. Additionally, the research investigated to what extent the thermally treated recycled cement could partially replace OPC and whether the new concrete produced would achieve the required mechanical properties. The mechanical properties were evaluated on the RCS, obtained by mixing the dehydrated cement powder and dehydrated recycled powder (DCP and DRP) with water (w/c = 0.6 and w/c = 0.45). The research used a compressive testing machine for compressive strength testing, and the three-point bending test was used to assess the flexural strength.

Keywords: hydrated cement powder, dehydrated cement powder, recycled concrete powder, thermal treatment, reactivation, mechanical performance

Procedia PDF Downloads 136
2595 Development and Testing of an Instrument to Measure Beliefs about Cervical Cancer Screening among Women in Botswana

Authors: Ditsapelo M. McFarland

Abstract:

Background: Despite the availability of Pap smear services in urban areas in Botswana, most women in such areas do not seem to screen regularly for the prevention of cervical cancer. The reasons for non-use of the available Pap smear services are not well understood. Beliefs about cancer may influence participation in cancer screening in these women. The purpose of this study was to develop an instrument to measure beliefs about cervical cancer and Pap smear screening among Black women in Botswana and to evaluate the psychometric properties of the instrument. Significance: Instruments designed to measure beliefs about cervical cancer and screening among Black women in Botswana, as well as in the surrounding region, are presently not available. Valid and reliable instruments are needed for the exploration of the women's beliefs about cervical cancer. Conceptual Framework: The Health Belief Model (HBM) provided the conceptual framework for the study. Methodology: The study was done in four phases. Phase 1, item generation: 15 items were generated from a literature review and qualitative data for each of four conceptually defined HBM constructs: perceived susceptibility, severity, benefits, and barriers (Version 1). Phase 2, content validity: Four experts who were advanced practice nurses of African descent and familiar with the content and the HBM evaluated the content. The experts rated the items on a 4-point Likert scale ranging from 1 = not relevant, 2 = somewhat relevant, 3 = relevant, to 4 = very relevant. Fifty-five items were retained for instrument development: perceived susceptibility - 11, severity - 14, benefits - 15 and barriers - 15, all measured on a 4-point Likert scale ranging from strongly disagree (1) to strongly agree (4) (Version 2). Phase 3, pilot testing: The instrument was pilot tested on a convenience sample of 30 women in Botswana and revised as needed. Phase 4, reliability: The revised instrument (Version 3) was administered to a larger sample of women in Botswana (n=300) for reliability testing. The sample included women who were Batswana by birth and descent, were aged 30 years and above, and could complete an English questionnaire. Data were collected with the assistance of trained research assistants. Major findings: Confirmatory factor analysis of the 55 items found that a number of items did not load adequately in a four-factor solution. Items that exhibited reasonable reliability and had a low frequency of missing values (n=36) were retained: perceived barriers (14 items), perceived benefits (8 items), perceived severity (4 items), and perceived susceptibility (10 items). Confirmatory factor analysis (principal components) for a four-factor solution using varimax rotation demonstrated that these four factors explained 43% of the variation in these 36 items. Conclusion: Reliability analysis using Cronbach's alpha gave generally satisfactory results, with values from 0.53 to 0.89.
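The reliability statistic reported above is straightforward to compute; a hedged sketch (hypothetical item-response matrix, not the study's data):

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents, n_items) response matrix."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars / total_var)

# Hypothetical 4-point Likert responses: 300 women x 14 barrier items.
rng = np.random.default_rng(7)
latent = rng.normal(size=(300, 1))
responses = np.clip(np.round(2.5 + latent + rng.normal(scale=0.8,
                    size=(300, 14))), 1, 4)
print(f"alpha = {cronbach_alpha(responses):.2f}")
```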

Keywords: cervical cancer, factor analysis, psychometric evaluation, varimax rotation

Procedia PDF Downloads 118
2594 Model-Based Automotive Partitioning and Mapping for Embedded Multicore Systems

Authors: Robert Höttger, Lukas Krawczyk, Burkhard Igel

Abstract:

This paper introduces novel approaches to partitioning and mapping in terms of model-based embedded multicore system engineering and further discusses benefits, industrial relevance and features in common with existing approaches. In order to assess and evaluate the results, both approaches have been applied to a real industrial application as well as to various prototypical demonstrative applications that have been developed and implemented for different purposes. Evaluations show that such applications improve significantly in terms of performance, energy efficiency, meeting timing constraints and maintainability when using the AMALTHEA platform and the implemented approaches. Furthermore, the model-based design provides an open, expandable, platform-independent and scalable exchange format between OEMs, suppliers and developers on different levels. Our proposed mechanisms provide meaningful multicore system utilization, since load balancing by means of partitioning and mapping is effectively performed with regard to the modeled systems, including hardware, software, operating system, scheduling, constraints, configuration and more.
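As a generic illustration of mapping by load balancing (a classic longest-processing-time greedy heuristic with hypothetical task costs, not AMALTHEA's actual algorithm):

```python
import heapq

def lpt_mapping(task_costs: dict[str, float], n_cores: int) -> dict[str, int]:
    """Greedy longest-processing-time mapping of tasks to cores."""
    heap = [(0.0, core) for core in range(n_cores)]  # (load, core id)
    heapq.heapify(heap)
    mapping = {}
    for task, cost in sorted(task_costs.items(), key=lambda kv: -kv[1]):
        load, core = heapq.heappop(heap)   # least-loaded core so far
        mapping[task] = core
        heapq.heappush(heap, (load + cost, core))
    return mapping

# Hypothetical runnable costs (ms) taken from a timing model.
tasks = {"ecu_ctrl": 4.0, "can_rx": 1.5, "diag": 0.7, "plan": 3.2, "log": 0.4}
print(lpt_mapping(tasks, n_cores=2))
```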

Keywords: partitioning, mapping, distributed systems, scheduling, embedded multicore systems, model-based, system analysis

Procedia PDF Downloads 607
2593 Prediction Study of a Corroded Pressure Vessel Using Evaluation Measurements and Finite Element Analysis

Authors: Ganbat Danaa, Chuluundorj Puntsag

Abstract:

The steel structures of the Oyu Tolgoi mining concentrator plant have corroded during operation, which raises doubts about the continued use of some important structures of the plant; this is one of the problems facing the plant's regular operation. The bottom part of the pressure vessel, which plays an important role in the reliable operation of the concentrate filter-drying unit, was heavily corroded, so it was necessary to study it by engineering calculations, modeling, and simulation using modern advanced engineering programs and methods. The purpose of this research is to investigate whether the corroded part of the pressure vessel can continue to be used normally and to predetermine the remaining life of the pressure vessel based on engineering calculations. With the bottom wall of the pressure vessel thinned by 0.5 mm due to corrosion, as detected by non-destructive testing, finite element analysis using ANSYS Workbench software was used to determine the mechanical stress, strain and safety factor in the wall and bottom of the pressure vessel operating under a 2.2 MPa working pressure, and conclusions were drawn on whether it can be used in the future. According to the recommendations, by using sand-blast cleaning and anti-corrosion paint, the normal, continuous and reliable operation of the concentrator plant can be ensured, avoiding measures such as ordering new pressure vessels and the associated installation period. This research work will serve as a benchmark for assessing the corrosion condition of steel parts of pressure vessels and other metallic and non-metallic structures operating under severe conditions of corrosion and static and dynamic loads, making it possible to analyze such structures and to evaluate and control their integrity and reliable operation.
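A back-of-the-envelope version of the safety-factor check described above can be sketched with thin-walled vessel formulas (generic textbook relations and hypothetical dimensions and material data, not the plant's FEA model):

```python
def hoop_stress(pressure_mpa: float, radius_mm: float, wall_mm: float) -> float:
    """Thin-wall hoop stress: sigma = P * r / t (MPa)."""
    return pressure_mpa * radius_mm / wall_mm

# Hypothetical vessel: 600 mm inner radius, 10 mm nominal wall,
# 0.5 mm corrosion loss, 2.2 MPa working pressure (from the abstract).
yield_mpa = 250.0                         # assumed mild-steel yield strength
for wall in (10.0, 9.5):
    sigma = hoop_stress(2.2, 600.0, wall)
    print(f"t={wall} mm: stress={sigma:.0f} MPa, "
          f"safety factor={yield_mpa / sigma:.2f}")
```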

Keywords: corrosion, non-destructive testing, finite element analysis, safety factor, structural reliability

Procedia PDF Downloads 45
2592 Impact of Extended Enterprise Resource Planning in the Context of Cloud Computing on Industries and Organizations

Authors: Gholamreza Momenzadeh, Forough Nematolahi

Abstract:

The Extended Enterprise Resource Planning (ERPII) system usually requires massive amounts of storage space, powerful servers, and large upfront and ongoing investments to purchase and manage the software and the related hardware, which are not affordable for many organizations. In recent decades, organizations have preferred to adapt their business structures to new technologies in order to remain competitive in the world economy. Cloud computing, one of the tools of information technology (IT), is a modern system that reveals the next-generation application architecture. Cloud computing also has advantages that reduce costs in many ways, such as lower upfront costs for all computing infrastructure and lower costs of maintenance and support. On the other hand, traditional ERPII cannot cope with huge amounts of data and relations between organizations. In this study, based on a literature review, ERPII is investigated in the context of cloud computing, where organizations operate more efficiently and where ERPII can respond to the needs of organizations for large amounts of data and inter-organizational relations.

Keywords: extended enterprise resource planning, cloud computing, business process, enterprise information integration

Procedia PDF Downloads 205
2591 Using Industrial Service Quality to Assess Service Quality Perception in Television Advertisement: A Case Study

Authors: Ana L. Martins, Rita S. Saraiva, João C. Ferreira

Abstract:

Much effort has been devoted to the assessment of perceived service quality. Several models can be found in the literature, but these are mainly focused on business-to-consumer (B2C) relationships. Literature on how to assess perceived quality in business-to-business (B2B) contexts is scarce, both conceptually and in terms of application. This research aims at filling this gap in the literature by applying INDSERV to a case study. Under this scope, this research analyzes the adequacy of the proposed assessment tool to contexts other than the one where it was developed and, by doing so, analyzes the perceived quality of the advertisement service provided by a specific television network to its B2B customers. The INDSERV scale was adopted and applied to a sample of 33 clients via questionnaires adapted to interviews. Data were collected in person or by phone. Both quantitative and qualitative data collection were performed. Qualitative data analysis followed a content analysis protocol. Quantitative analysis used hypothesis testing. Findings allowed the conclusion that the perceived quality of the television service provided by the network is very positive, with Soft Process Quality being the parameter revealing the highest perceived quality of the service, as opposed to Potential Quality. To this end, some comments and suggestions were made by the clients regarding each of these service quality parameters. Based on the hypothesis testing, it was noticed that only advertisement clients that have maintained a connection to the television network for 5 to 10 years show a significantly different perception of the TV advertisement service provided by the company as far as the Hard Process Quality parameter is concerned. Through content analysis of the collected data, it was possible to obtain the percentage of clients sharing the same opinions and suggestions for improvement. Finally, based on the four service quality parameters in a B2B context, managerial suggestions were developed aiming at improving the perceived quality of the television network's advertisement service.

Keywords: B2B, case study, INDSERV, perceived service quality

Procedia PDF Downloads 198
2590 A Low-Cost Air Quality Monitoring Internet of Things Platform

Authors: Christos Spandonidis, Stefanos Tsantilas, Elias Sedikos, Nektarios Galiatsatos, Fotios Giannopoulos, Panagiotis Papadopoulos, Nikolaos Demagos, Dimitrios Reppas, Christos Giordamlis

Abstract:

In the present paper, a low-cost, compact and modular Internet of Things (IoT) platform for air quality monitoring in urban areas is presented. This platform comprises dedicated low-cost, low-power hardware and the associated embedded software that enable measurement of the concentrations of particulate matter (PM2.5 and PM10), NO, CO, CO2 and O3 in the air, along with relative temperature and humidity. This integrated platform acts as part of a greater air-pollution data-collecting wireless network that is able to monitor the air quality in various regions and neighborhoods of an urban area by providing sensor measurements at a high rate, reaching up to one sample per second. It is therefore suitable for big data analysis applications such as air quality forecasts, weather forecasts and traffic prediction. The first real-world test of the developed platform took place in Thessaloniki, Greece, where 16 devices were installed in various buildings in the city. In the near future, many more of these devices will be installed in the greater Thessaloniki area, giving a detailed air quality map of the city.

Keywords: distributed sensor system, environmental monitoring, Internet of Things, smart cities

Procedia PDF Downloads 132
2589 The Use of Hearing Protection Devices and Hearing Loss in Steel Industry Workers in Samut Prakan Province, Thailand

Authors: Petcharat Kerdonfag, Surasak Taneepanichskul, Winai Wadwongtham

Abstract:

Background: Although there is no effective treatment for noise-induced hearing loss (NIHL), it is definitely preventable by promoting the use of hearing protection devices (HPDs) among workers who have been exposed to excessive noise for a long period. Objectives: The objectives of this study were to explore the use of HPDs among steel industry workers in the high-noise-level zone in Samut Prakan province, Thailand, and to examine the relationship between HPD use and hearing loss. Materials and Methods: In this cross-sectional study, ninety-three eligible participants were recruited from the designated zones of higher noise (> 85 dBA) of two factories, using simple random sampling. The use of HPDs was gathered by a self-record form and was examined and confirmed by the research team. Hearing loss was assessed by audiometric screening at the regional Samut Prakan hospital. If the average threshold level exceeded 25 dB at the high frequencies (4 and 6 kHz) in either ear, the participant was classified as having hearing loss. Data were collected from October to December 2016. All participants were examined by the same examiners for validity. Audiometric testing was performed on participants at least 14 hours after their last exposure to high workplace noise levels. Results: Sixty participants (64.5%) had a secondary level of education. The mean percentage of time using HPDs was 60.5% (SD = 25.34). Sixty-seven participants (72.0%) had abnormal hearing; they reported a lower percentage of time using HPDs (mean = 37.01, SD = 23.81) than those with normal hearing (mean = 45.77, SD = 28.44) and still need to increase their use. However, the difference in the mean percentage of time using HPDs between these two groups was not statistically significant. Conclusion: The findings of this study confirm that steel industry workers still need to be motivated to use HPDs regularly. Future research should pay more attention to creating meaningful innovations for steel industry workers.

Keywords: hearing protection devices, noise induced hearing loss, audiometric testing, steel industry

Procedia PDF Downloads 239
2588 A Machine Learning Based Method to Detect System Failure in Resource Constrained Environment

Authors: Payel Datta, Abhishek Das, Abhishek Roychoudhury, Dhiman Chattopadhyay, Tanushyam Chattopadhyay

Abstract:

Machine learning (ML) and deep learning (DL) are predominantly used in image/video processing, natural language processing (NLP), and audio and speech recognition, but not as much in system performance evaluation. In this paper, the authors describe the architecture of an abstraction layer constructed using ML/DL to detect system failure. The proposed system detects system failure by evaluating the performance metrics of an IoT service deployment in a constrained infrastructure environment. The system has been tested on a manually annotated data set containing different metrics of the system, like number of threads, throughput, average response time, CPU usage, memory usage, and network input/output, captured in different hardware environments like edge (Atom-based gateway) and cloud (AWS EC2). The main challenge in developing such a system is that the accuracy of classification should be 100%, as an error in the system impacts the degradation of the service performance and consequently affects the reliability and high availability that are mandatory for an IoT system. The proposed ML/DL classifiers work with 100% accuracy on the data set of nearly 4,000 samples captured within the organization.
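A minimal sketch of classifying system state from such performance metrics (hypothetical feature columns and labels, not the authors' annotated data set):

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Hypothetical samples: [threads, throughput, resp_ms, cpu%, mem%, net_io].
rng = np.random.default_rng(8)
healthy = rng.normal([20, 900, 40, 35, 50, 10], [5, 80, 8, 8, 10, 3], (2000, 6))
failing = rng.normal([60, 200, 400, 95, 92, 1], [10, 60, 80, 3, 4, 0.5], (2000, 6))
X = np.vstack([healthy, failing])
y = np.r_[np.zeros(2000), np.ones(2000)]      # 1 = failure state

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
print(f"held-out accuracy: {clf.score(X_te, y_te):.3f}")
```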

Keywords: machine learning, system performance, performance metrics, IoT, edge

Procedia PDF Downloads 184
2587 Improvement of Direct Torque and Flux Control of Dual Stator Induction Motor Drive Using Intelligent Techniques

Authors: Kouzi Katia

Abstract:

This paper proposes a Direct Torque Control (DTC) algorithm for a dual stator induction motor (DSIM) drive using two intelligent techniques: an Artificial Neural Network (ANN) approach replaces the switching table selector block of conventional DTC, and a Mamdani Fuzzy Logic Controller (FLC) is used for stator resistance estimation. The fuzzy estimation method is based on an online stator resistance correction driven by the stator current estimation error and its variation. The fuzzy logic controller gives the future stator resistance increment at its output. The main advantages of the suggested control algorithm are to reduce the hardware complexity of conventional selectors, to avoid the drive instability that may occur in certain situations, and to ensure tracking of the actual stator resistance. The effectiveness of the technique and the improvement of the whole system performance are proved by the results.

Keywords: artificial neural network, direct torque control, dual stator induction motor, fuzzy logic estimator, switching table

Procedia PDF Downloads 331
2586 Building Energy Modeling for Networks of Data Centers

Authors: Eric Kumar, Erica Cochran, Zhiang Zhang, Wei Liang, Ronak Mody

Abstract:

The objective of this article was to create a modeling framework that exposes the marginal costs of shifting workloads across geographically distributed data centers. Geographical distribution of internet services helps to optimize their performance for localized end users, with lower communication times and increased availability. However, due to geographical and temporal effects, the physical embodiments of a service's data center infrastructure can vary greatly. In this work, we first identify that the sources of variance in the physical infrastructure primarily stem from local weather conditions, specific user traffic profiles, energy sources, and the types of IT hardware available at the time of deployment. Second, we create a traffic simulator that indicates the IT load at each data center in the set as an approximation of user traffic profiles. Third, we implement a framework that quantifies the global energy demands using building energy models and the traffic profiles. The results of the model provide a time series of energy demands that can be used for further life cycle analysis of internet services.

Keywords: data-centers, energy, life cycle, network simulation

Procedia PDF Downloads 134
2585 Glucose Monitoring System Using Machine Learning Algorithms

Authors: Sangeeta Palekar, Neeraj Rangwani, Akash Poddar, Jayu Kalambe

Abstract:

Biomedical analysis is an indispensable procedure for identifying health-related diseases like diabetes. Monitoring the glucose level in our body regularly helps us identify hyperglycemia and hypoglycemia, which can cause severe medical problems like nerve damage or kidney disease. This paper presents a method for predicting the glucose concentration in blood samples using image processing and machine learning algorithms. The glucose solution is prepared by the glucose oxidase (GOD) and peroxidase (POD) method. An experimental database is generated based on the colorimetric technique. The image of the glucose solution is captured by a Raspberry Pi camera and analyzed using image processing by extracting the RGB, HSV, and LUX color space values. Regression algorithms like multiple linear regression, decision tree, random forest, and XGBoost were used to predict the unknown glucose concentration. The multiple linear regression algorithm predicts the results with 97% accuracy. The image processing and machine learning-based approach reduces the hardware complexity of existing platforms.
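The colorimetric pipeline described above can be sketched in a few lines (synthetic images and a hypothetical color-to-concentration trend standing in for the authors' database):

```python
import numpy as np
from sklearn.linear_model import LinearRegression

def color_features(rgb_image: np.ndarray) -> np.ndarray:
    """Mean R, G, B of the imaged solution as regression features."""
    return rgb_image.reshape(-1, 3).mean(axis=0)

# Hypothetical calibration set: synthetic images whose green channel fades
# with concentration, mimicking a colorimetric GOD/POD reaction.
rng = np.random.default_rng(10)
concentrations = np.linspace(50, 400, 20)          # mg/dL
images = [np.clip(rng.normal([180, 220 - 0.4 * c, 160], 5, (64, 64, 3)),
                  0, 255) for c in concentrations]
X = np.stack([color_features(img) for img in images])

model = LinearRegression().fit(X, concentrations)
print(model.predict(color_features(images[5]).reshape(1, -1)))
```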

Keywords: artificial intelligence glucose detection, glucose oxidase, peroxidase, image processing, machine learning

Procedia PDF Downloads 188
2584 In Silico Modeling of Drugs Milk/Plasma Ratio in Human Breast Milk Using Structures Descriptors

Authors: Navid Kaboudi, Ali Shayanfar

Abstract:

Introduction: Feeding infants with safe milk from the beginning of their life is an important issue. Drugs used by mothers can affect the composition of milk in a way that is not only unsuitable but also toxic for infants. Consuming permeable drugs during that sensitive period could lead to serious side effects in the infant. Due to the ethical restrictions on drug testing in humans, especially in women during their lactation period, computational approaches based on structural parameters could be useful. The aim of this study is to develop mechanistic models to predict the milk/plasma (M/P) ratio of drugs during the breastfeeding period based on their structural descriptors. Methods: Two hundred and nine different chemicals with known M/P ratios were used in this study. All drugs were categorized into two groups based on their M/P value following Malone's classification: (1) drugs with M/P > 1, which are considered high risk, and (2) drugs with M/P ≤ 1, which are considered low risk. Thirty-eight chemical descriptors were calculated with ACD/Labs 6.00 and DataWarrior software in order to assess penetration during the breastfeeding period. Subsequently, four specific models based on the number of hydrogen bond acceptors, polar surface area, total surface area, and number of acidic oxygens were established for the prediction; these descriptors can predict penetration with acceptable accuracy. For the remaining compounds of each model (n = 147, 158, 160, and 174 for models 1 to 4, respectively), binary logistic regression with SPSS 21 was performed in order to obtain a model predicting the penetration class of compounds. Only structural descriptors with p-value < 0.1 remained in the final model. Results and discussion: Four different models based on the number of hydrogen bond acceptors, polar surface area, and total surface area were obtained to predict the penetration of drugs into human milk during the breastfeeding period. About 3-4% of milk consists of lipids, and the amount of lipid increases after parturition. Lipid-soluble drugs diffuse alongside fats from plasma to the mammary glands, so lipophilicity plays a vital role in predicting the penetration class of drugs during the lactation period. The logistic regression models showed that compounds with a number of hydrogen bond acceptors, PSA, and TSA above 5, 90, and 25, respectively, are less permeable to milk because they are less soluble in milk fat. The pH of milk is acidic, and because of that, basic compounds tend to be more concentrated in milk than in plasma, while acidic compounds may reach lower concentrations in milk than in plasma. Conclusion: In this study, we developed four regression-based models to predict the penetration class of drugs during the lactation period. The obtained models can speed up the drug development process, saving energy and costs. M/P ratio assessment of drugs requires multiple steps of animal testing, which has its own ethical issues; QSAR modeling could help scientists reduce the amount of animal testing, and our models are eligible to do that.
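A minimal sketch of the descriptor-based classification described above (a hypothetical descriptor table with a trend mimicking the reported one, not the study's 209-compound data set):

```python
import numpy as np
import statsmodels.api as sm

# Hypothetical descriptors: [H-bond acceptors, PSA, TSA] per compound,
# with class 1 = M/P > 1 (high risk), 0 = M/P <= 1 (low risk).
rng = np.random.default_rng(9)
hba = rng.integers(0, 12, size=200)
psa = rng.uniform(10, 180, size=200)
tsa = rng.uniform(5, 60, size=200)
X = sm.add_constant(np.column_stack([hba, psa, tsa]))

# Mimic the reported trend: high descriptor values -> less permeable.
logit = 2.0 - 0.3 * hba - 0.01 * psa - 0.02 * tsa
y = (rng.uniform(size=200) < 1 / (1 + np.exp(-logit))).astype(int)

model = sm.Logit(y, X).fit(disp=0)
print(model.params)  # p-values via model.pvalues for descriptor selection
```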

Keywords: logistic regression, breastfeeding, descriptors, penetration

Procedia PDF Downloads 60