Search results for: lateral flow immunoassay on-site detection
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 8801

5561 A Combination of Independent Component Analysis, Relative Wavelet Energy and Support Vector Machine for Mental State Classification

Authors: Nguyen The Hoang Anh, Tran Huy Hoang, Vu Tat Thang, T. T. Quyen Bui

Abstract:

Mental state classification is an important step toward realizing a control system based on electroencephalography (EEG) signals, which could benefit many paralyzed people, including those with locked-in syndrome or amyotrophic lateral sclerosis. Considering that EEG signals are nonstationary and often contaminated by various types of artifacts, classifying thoughts into the correct mental states is not a trivial problem. Our contribution in this work is to present and realize a novel model that integrates three techniques for this task: independent component analysis (ICA), relative wavelet energy, and support vector machines (SVM). We applied our model to classify thoughts in two types of experiments, with either two or three mental states. The experimental results show that the presented model outperforms models based on artificial neural networks, k-nearest neighbors, etc.
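The relative wavelet energy feature used in the pipeline can be sketched in a few lines. This is a minimal pure-Python illustration using a Haar wavelet, not the authors' implementation; the toy "epoch" and number of decomposition levels are assumptions for demonstration, and the resulting feature vector is what would be fed to the SVM:

```python
import math

def haar_dwt(signal):
    """One level of the Haar discrete wavelet transform:
    returns (approximation, detail) coefficient lists."""
    approx, detail = [], []
    for i in range(0, len(signal) - 1, 2):
        a, b = signal[i], signal[i + 1]
        approx.append((a + b) / math.sqrt(2))
        detail.append((a - b) / math.sqrt(2))
    return approx, detail

def relative_wavelet_energy(signal, levels=3):
    """Decompose the signal `levels` times and return the energy of each
    sub-band divided by the total energy; the features sum to 1."""
    energies = []
    approx = list(signal)
    for _ in range(levels):
        approx, detail = haar_dwt(approx)
        energies.append(sum(d * d for d in detail))
    energies.append(sum(a * a for a in approx))
    total = sum(energies)
    return [e / total for e in energies]

# A toy "EEG epoch": a slow wave plus a faster oscillation.
epoch = [math.sin(2 * math.pi * 2 * t / 64) + 0.3 * math.sin(2 * math.pi * 16 * t / 64)
         for t in range(64)]
features = relative_wavelet_energy(epoch)
print(features)  # one feature vector per epoch, fed to the classifier
```

Because the Haar transform is orthonormal, the relative energies always sum to one, which makes the feature scale-invariant across epochs.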

Keywords: EEG, ICA, SVM, wavelet

Procedia PDF Downloads 384
5560 Development of Sulfite Biosensor Based on Sulfite Oxidase Immobilized on 3-Aminopropyltriethoxysilane Modified Indium Tin Oxide Electrode

Authors: Pawasuth Saengdee, Chamras Promptmas, Ting Zeng, Silke Leimkühler, Ulla Wollenberger

Abstract:

Sulfite has been used as a versatile preservative to limit microbial growth and to control the taste of some foods and beverages. However, it has been reported to cause a wide spectrum of severe adverse reactions. It is therefore important to determine the amount of sulfite in food and beverages to ensure consumer safety. An efficient electrocatalytic biosensor for sulfite detection was developed by immobilizing human sulfite oxidase (hSO) on a 3-aminopropyltriethoxysilane (APTES) modified indium tin oxide (ITO) electrode. Cyclic voltammetry was employed to investigate the electrochemical characteristics of the hSO-modified ITO electrode for various pretreatment and binding conditions. Amperometry was also utilized to demonstrate the current responses of the sulfite sensor toward sodium sulfite in aqueous solution at a potential of 0 V (vs. Ag/AgCl, 1 M KCl). The proposed sulfite sensor has a linear range between 0.5 and 2 mM with a correlation coefficient of 0.972. An additional polymer layer of poly(vinyl alcohol) (PVA) was then introduced to extend the linear range of the sensor and protect the enzyme. The linear range of the sensor with 5% PVA coverage extends from 2.8 to 20 mM with a correlation coefficient of 0.983. In addition, the sensor with 5% PVA coverage remains stable for up to 14 days when kept in 0.5 mM Tris buffer, pH 7.0, at 4 °C. This sensor could therefore be applied to the detection of sulfite in real samples, especially food and beverages.
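The linear range and correlation coefficient reported for the sensor come from a straight-line calibration fit. The sketch below shows the standard least-squares calculation on hypothetical calibration points (the concentrations and currents are invented for illustration, not the paper's data):

```python
import math

def linear_calibration(conc, current):
    """Least-squares line current = slope * conc + intercept,
    plus the Pearson correlation coefficient r."""
    n = len(conc)
    mx = sum(conc) / n
    my = sum(current) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(conc, current))
    sxx = sum((x - mx) ** 2 for x in conc)
    syy = sum((y - my) ** 2 for y in current)
    slope = sxy / sxx
    intercept = my - slope * mx
    r = sxy / math.sqrt(sxx * syy)
    return slope, intercept, r

# Hypothetical calibration points: sulfite concentration (mM) vs. current (uA)
conc    = [0.5, 1.0, 1.5, 2.0]
current = [0.9, 2.1, 2.9, 4.1]
slope, intercept, r = linear_calibration(conc, current)
print(slope, intercept, r)
```

An unknown sample's concentration is then read off the line as `(i_measured - intercept) / slope`, valid only inside the calibrated linear range.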

Keywords: sulfite oxidase, bioelectrocatalysis, indium tin oxide, direct electrochemistry, sulfite sensor

Procedia PDF Downloads 231
5559 The Behavior of Self-Compacting Lightweight Concrete Produced by Magnetic Water

Authors: Moosa Mazloom, Hojjat Hatami

Abstract:

The aim of this article is to assess the optimal mix design of self-compacting lightweight concrete. The effects of magnetic water, a superplasticizer based on polycarboxylic ether, and silica fume on the characteristics of this type of concrete are studied. The workability of the fresh concrete and the compressive strength of the hardened concrete are considered here. For this purpose, nine mix designs were studied. The percentages of superplasticizer were 0.5, 1, and 2% of the weight of cement, and the percentages of silica fume were 0, 6, and 10% of the weight of cement. The water-to-cementitious-materials ratios were 0.28, 0.32, and 0.36. The workability of the concrete samples was analyzed using devices such as the slump flow, V-funnel, L-box, U-box, and Orimet with J-ring tests. Then, the compressive strengths of the mixes at the ages of 3, 7, 28, and 90 days were obtained. The results show that using magnetic water improves the compressive strengths at all ages. The concrete samples with ordinary water needed higher superplasticizer dosages. Moreover, the combination of superplasticizer and magnetic water had positive effects on the mixes containing silica fume, which could flow easily.

Keywords: magnetic water, self-compacting lightweight concrete, silica fume, superplasticizer

Procedia PDF Downloads 368
5558 A Machine Learning Approach for Anomaly Detection in Environmental IoT-Driven Wastewater Purification Systems

Authors: Giovanni Cicceri, Roberta Maisano, Nathalie Morey, Salvatore Distefano

Abstract:

The main goal of this paper is to present a solution for a water purification system based on an Environmental Internet of Things (EIoT) platform to monitor and control water quality, together with machine learning (ML) models to support decision making and speed up the water purification process. A real case study has been implemented by deploying an EIoT platform and a network of devices, called Gramb meters and belonging to the Gramb project, on wastewater purification systems located in Calabria, in the south of Italy. The data thus collected are used to control the wastewater quality, detect anomalies, and predict the behaviour of the purification system. To this end, three different statistical and machine learning models have been adopted and compared: Autoregressive Integrated Moving Average (ARIMA), Long Short-Term Memory (LSTM) autoencoder, and Facebook Prophet (FP). The results demonstrate that the ML solution (LSTM) outperforms the classical statistical approaches (ARIMA, FP) in terms of accuracy, efficiency, and effectiveness in monitoring and controlling the wastewater purification processes.
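The general idea of residual-based anomaly detection on a sensor stream can be illustrated without any of the heavy models compared in the paper. This hedged sketch flags readings that deviate from a trailing moving average by more than k standard deviations of the previously seen normal residuals; the synthetic signal and thresholds are invented for illustration:

```python
import statistics

def detect_anomalies(series, window=5, k=3.0):
    """Flag indices whose deviation from a trailing moving average exceeds
    k standard deviations of the normal residuals seen so far."""
    anomalies, residuals = [], []
    for i in range(window, len(series)):
        baseline = sum(series[i - window:i]) / window
        resid = series[i] - baseline
        if len(residuals) >= 10 and abs(resid) > k * statistics.pstdev(residuals):
            anomalies.append(i)      # anomalous reading: keep it out of the stats
        else:
            residuals.append(resid)  # normal reading updates the residual history
    return anomalies

# Synthetic sensor signal with one injected fault at index 30.
signal = [10 + 0.1 * (i % 7) for i in range(50)]
signal[30] = 25.0
out = detect_anomalies(signal)
print(out)
```

Note that a single fault also contaminates the moving-average window for a few subsequent steps, so neighbouring indices are flagged too; an LSTM autoencoder plays the same role as the moving average here, just with a learned baseline.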

Keywords: environmental internet of things, EIoT, machine learning, anomaly detection, environment monitoring

Procedia PDF Downloads 151
5557 Reliability Based Optimal Design of Laterally Loaded Pile with Limited Residual Strain Energy Capacity

Authors: M. Movahedi Rad

Abstract:

In this study, a general approach to the reliability-based limit analysis of laterally loaded piles is presented. In engineering practice, uncertainties play a very important role. The aim of this study is to evaluate the lateral load capacity of free-head and fixed-head long piles when plastic limit analysis is considered. In addition to the plastic limit analysis used to control the plastic behaviour of the structure, an uncertain bound on the complementary strain energy of the residual forces is also applied. This bound has a significant effect on the load parameter. The solution to the reliability-based problems is obtained by a computer program governed by the reliability index calculation.
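A reliability index calculation of the kind the program is built around can be shown in its simplest form. This is the textbook Cornell index for independent normal resistance and load effect, a generic sketch with made-up numbers rather than the study's actual limit-state formulation:

```python
import math

def reliability_index(mu_r, sigma_r, mu_s, sigma_s):
    """Cornell reliability index for independent normal resistance R and
    load effect S, and the corresponding failure probability Phi(-beta)."""
    beta = (mu_r - mu_s) / math.sqrt(sigma_r ** 2 + sigma_s ** 2)
    pf = 0.5 * (1.0 - math.erf(beta / math.sqrt(2.0)))  # standard normal CDF at -beta
    return beta, pf

# Hypothetical pile capacity and lateral load statistics (kN)
beta, pf = reliability_index(mu_r=400.0, sigma_r=40.0, mu_s=250.0, sigma_s=30.0)
print(beta, pf)
```

A larger beta means a safer design; beta = 3 corresponds to a failure probability of about 0.13%.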

Keywords: reliability, laterally loaded pile, residual strain energy, probability, limit analysis

Procedia PDF Downloads 349
5556 One Dimensional Unsteady Boundary Layer Flow in an Inclined Wavy Wall of a Nanofluid with Convective Boundary Condition

Authors: Abdulhakeem Yusuf, Yomi Monday Aiyesimi, Mohammed Jiya

Abstract:

The failure of ordinary heat transfer fluids to meet today's industrial cooling requirements has driven the development of high-thermal-conductivity fluids, to which nanofluids belong. In this work, the problem of unsteady, one-dimensional laminar flow of an incompressible fluid between parallel walls is considered, with one wall assumed to be wavy. The model is presented in a rectangular coordinate system and incorporates the effects of thermophoresis and Brownian motion. Local similarity solutions were also obtained, which depend on the Soret number, Dufour number, Biot number, Lewis number, and heat generation parameter. The analytical solution is obtained in closed form via the Adomian decomposition method. The method was found to be in good agreement with the numerical method, and it is also established that the heat generation parameter has to be kept low so that heat energy is easily evacuated from the system.
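The mechanics of the Adomian decomposition method are easiest to see on a trivially simple problem. This sketch applies the method to u' = u, u(0) = 1, where each series term is the integral of the previous one and the partial sums converge to e^t; it is an illustration of the technique only, not the paper's nanofluid boundary-layer equations:

```python
import math

def adomian_terms(n_terms):
    """Adomian decomposition of u' = u, u(0) = 1: each term is the
    integral of the previous one, giving u_n(t) = t^n / n!."""
    # Represent each u_n as a list of polynomial coefficients [c0, c1, ...] in t.
    terms = [[1.0]]                      # u_0 = 1 comes from the initial condition
    for _ in range(n_terms - 1):
        prev = terms[-1]
        # Integrate the previous term from 0 to t: t^k -> t^(k+1) / (k+1).
        integ = [0.0] + [c / (k + 1) for k, c in enumerate(prev)]
        terms.append(integ)
    return terms

def evaluate(terms, t):
    """Partial sum of the decomposition series at time t."""
    return sum(c * t ** k for term in terms for k, c in enumerate(term))

approx = evaluate(adomian_terms(12), 1.0)
print(approx, math.e)   # the partial sum converges rapidly to e^t at t = 1
```

For nonlinear problems the integrand of each step is replaced by an Adomian polynomial of the preceding terms, but the recursive structure is the same.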

Keywords: Adomian decomposition method, Biot number, Dufour number, nanofluid

Procedia PDF Downloads 329
5555 Non-Targeted Adversarial Object Detection Attack: Fast Gradient Sign Method

Authors: Bandar Alahmadi, Manohar Mareboyana, Lethia Jackson

Abstract:

Today, many applications use computer vision models, such as face recognition, image classification, and object detection. The accuracy of these models is very important for the performance of these applications. One challenge facing computer vision models is the adversarial example attack. In computer vision, an adversarial example is an image that is intentionally designed to cause a machine learning model to misclassify it. One well-known method used to attack convolutional neural networks (CNNs) is the Fast Gradient Sign Method (FGSM). The goal of this method is to find a perturbation that can fool the CNN using the gradient of the CNN's cost function. In this paper, we introduce a novel model that attacks a region-based convolutional neural network (R-CNN) using FGSM. We first extract the regions detected by the R-CNN, and then we resize these regions to the size of regular images. Then, we find the best perturbation of the regions that can fool the CNN using FGSM. Next, we add the resulting perturbation to the attacked region to get a new region image that looks similar to the original image to human eyes. Finally, we place the regions back into the original image and test the R-CNN with the attacked images. Our model could drop the accuracy of the R-CNN when tested with the Pascal VOC 2012 dataset.
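The core FGSM update, x' = x + eps * sign(grad_x loss), can be demonstrated on a plain logistic classifier instead of a CNN. This is a hedged, stdlib-only sketch of the update rule itself, not the authors' R-CNN pipeline; the weights and input below are invented for illustration:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def fgsm(x, y, w, b, eps):
    """FGSM for a logistic classifier p = sigmoid(w.x + b): perturb x by
    eps * sign(d loss / d x). For cross-entropy loss, the gradient with
    respect to the input is (p - y) * w."""
    p = sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b)
    grad = [(p - y) * wi for wi in w]
    sign = lambda g: (g > 0) - (g < 0)
    return [xi + eps * sign(gi) for xi, gi in zip(x, grad)]

w, b = [2.0, -1.0, 0.5], 0.1
x, y = [0.4, -0.2, 0.8], 1          # a correctly classified "positive" input
x_adv = fgsm(x, y, w, b, eps=0.3)
score     = sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b)
score_adv = sigmoid(sum(wi * xi for wi, xi in zip(w, x_adv)) + b)
print(score, score_adv)             # the adversarial score moves away from the label
```

The same one-step update applied per detected region, with the gradient taken through the network, is what drives the region-level attack described above.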

Keywords: adversarial examples, attack, computer vision, image processing

Procedia PDF Downloads 193
5554 Quantitative Detection of the Conformational Transitions between Open and Closed Forms of Cytochrome P450 Oxidoreductase (CYPOR) at the Membrane Surface in Different Functional States

Authors: Sara Arafeh, Kovriguine Evguine

Abstract:

Cytochromes P450 are enzymes that require a supply of electrons to catalyze the synthesis of steroid hormones, fatty acids, and prostaglandin hormones. Cytochrome P450 oxidoreductase (CYPOR), a membrane-bound enzyme, provides these electrons in its open conformation. CYPOR has two cytosolic domains (the FAD domain and the FMN domain) and an N-terminal anchor in the membrane. In its open conformation, electrons flow from NADPH to FAD and finally to FMN, where cytochrome P450 picks them up. In the closed conformation, cytochrome P450 does not bind to the FMN domain to take the electrons. It was found that when the cytosolic domains are isolated, CYPOR cannot bind to cytochrome P450, suggesting that the membrane environment is important for CYPOR function. This project takes the initiative to better understand the dynamics of full-length CYPOR. Here, we determine the distance between specific sites in the FAD- and FMN-binding domains of CYPOR by Förster resonance energy transfer (FRET) and ultrafast transient absorption (TA) spectroscopy, with and without NADPH. The approach to determining these distances relies on labeling these sites with red and infrared fluorophores. Membrane attachment is mimicked by inserting CYPOR into lipid nanodiscs. By determining the distances between the donor-acceptor sites in these domains, we can observe the open/closed conformations upon reducing CYPOR in the presence and absence of cytochrome P450. Such a study is important for better understanding the CYPOR mechanism of action in various endosomal membranes, including hepatic CYPOR, which is vital in plasma cholesterol homeostasis. By investigating the conformational cycles of CYPOR, we can synthesize drugs that would be more efficient in affecting steroid hormone levels and the metabolism of toxins catalyzed by cytochrome P450.
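The distance readout behind a FRET experiment follows from a single formula: transfer efficiency E = 1 / (1 + (r/R0)^6), which can be inverted to recover the donor-acceptor distance. The sketch below uses a hypothetical Förster radius and distances, chosen only to illustrate how open and closed conformations separate in efficiency:

```python
def fret_efficiency(r, r0):
    """Forster transfer efficiency for donor-acceptor distance r and
    Forster radius r0: E = 1 / (1 + (r / r0)^6)."""
    return 1.0 / (1.0 + (r / r0) ** 6)

def fret_distance(e, r0):
    """Invert the efficiency to recover the distance: r = r0 * (1/E - 1)^(1/6)."""
    return r0 * (1.0 / e - 1.0) ** (1.0 / 6.0)

r0 = 5.0                             # hypothetical Forster radius (nm) for a dye pair
e_open = fret_efficiency(7.0, r0)    # open conformation: domains far apart
e_closed = fret_efficiency(4.0, r0)  # closed conformation: domains close together
print(e_open, e_closed, fret_distance(e_open, r0))
```

The sixth-power dependence is what makes FRET such a sharp molecular ruler: small distance changes around R0 produce large efficiency changes.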

Keywords: conformational cycle of CYPOR, cytochrome P450, cytochrome P450 oxidoreductase, FAD domain, FMN domain, FRET, Ultrafast TA Spectroscopy

Procedia PDF Downloads 279
5553 Characterization and Modelling of Groundwater Flow towards a Public Drinking Water Well Field: A Case Study of Ter Kamerenbos Well Field

Authors: Buruk Kitachew Wossenyeleh

Abstract:

Groundwater is the largest freshwater reservoir in the world. Like the other reservoirs of the hydrologic cycle, it is a finite resource. This study focused on groundwater modeling of the Ter Kamerenbos well field to understand the groundwater flow system and the impact of different scenarios. The study area covers 68.9 km² in the Brussels Capital Region and is situated in two river catchments, i.e., the Zenne River and the Woluwe Stream. The aquifer system has three layers, but in the modeling they are treated as one layer due to their hydrogeological properties. The catchment aquifer system is replenished by direct recharge from rainfall. The groundwater recharge of the catchment is determined using the spatially distributed water balance model WetSpass, and it varies annually from zero to 340 mm. This recharge is used as the top boundary condition for the groundwater modeling of the study area. In the groundwater modeling with Processing MODFLOW, constant-head boundary conditions are used on the north and south boundaries of the study area, and head-dependent flow boundary conditions on the east and west boundaries. The groundwater model is calibrated manually and automatically using observed hydraulic heads in 12 observation wells. The model performance evaluation shows that the root mean square error is 1.89 m and the Nash-Sutcliffe efficiency (NSE) is 0.98. The head contour map of the simulated hydraulic heads indicates the flow direction in the catchment, mainly from the Woluwe catchment to the Zenne catchment. The simulated head in the study area varies from 13 m to 78 m. The higher hydraulic heads are found in the southwest of the study area, which has forest as its land-use type. The calibrated model was run for a climate change scenario and a well operation scenario. Relative to current conditions, climate change may cause the groundwater recharge in 2100 to increase by 43% under the high scenario or decrease by 30% under the low scenario. The groundwater head varies from 13 m to 82 m for the high climate change scenario, and from 13 m to 76 m for the low scenario. If the pumping discharge is doubled, the groundwater head varies from 13 m to 76.5 m, whereas if the pumps are shut down, the head varies from 13 m to 79 m. It is concluded that the groundwater model performs satisfactorily, within some limitations, and the model output can be used to understand the aquifer system under steady-state conditions. Finally, some recommendations are made for the future use and improvement of the model.
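The two calibration metrics quoted for the model (RMSE 1.89 m, NSE 0.98) are standard and easy to compute. This sketch shows both on hypothetical observed and simulated heads (the well values below are made up, not the study's data):

```python
import math

def rmse(observed, simulated):
    """Root mean square error between observed and simulated heads."""
    return math.sqrt(sum((o - s) ** 2 for o, s in zip(observed, simulated)) / len(observed))

def nse(observed, simulated):
    """Nash-Sutcliffe efficiency: 1 is a perfect fit, 0 means the model is
    no better than predicting the observed mean."""
    mean_obs = sum(observed) / len(observed)
    num = sum((o - s) ** 2 for o, s in zip(observed, simulated))
    den = sum((o - mean_obs) ** 2 for o in observed)
    return 1.0 - num / den

# Hypothetical heads (m) at a few observation wells
obs = [14.2, 25.0, 33.1, 47.8, 60.5, 77.9]
sim = [13.5, 26.1, 32.0, 49.0, 59.8, 78.4]
print(rmse(obs, sim), nse(obs, sim))
```

NSE close to 1 together with a small RMSE relative to the head range (here 13-78 m) is what justifies calling a calibration satisfactory.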

Keywords: Ter Kamerenbos, groundwater modelling, WetSpass, climate change, well operation

Procedia PDF Downloads 152
5552 Development of Vertically Integrated 2D Lake Victoria Flow Models in COMSOL Multiphysics

Authors: Seema Paul, Jesper Oppelstrup, Roger Thunvik, Vladimir Cvetkovic

Abstract:

Lake Victoria is the second largest freshwater body in the world, located in East Africa with a catchment area of 250,000 km², of which 68,800 km² is the actual lake surface. The hydrodynamic processes of the shallow (40-80 m deep) water system are unique due to its location at the equator, which makes Coriolis effects weak. The paper describes a Saint-Venant shallow water model of Lake Victoria developed in the COMSOL Multiphysics software, a general-purpose finite element tool for solving partial differential equations. Depth soundings taken in smaller parts of the lake were combined with more recent, more extensive data to resolve discrepancies in the lake shore coordinates. The topography model must have continuous gradients, and Delaunay triangulation with Gaussian smoothing was used to produce the lake depth model. The model shows large-scale flow patterns, passive tracer concentration, and water level variations in response to river and tracer inflow, rain and evaporation, and wind stress. Actual data on precipitation, evaporation, and in- and outflows were applied in a fifty-year simulation. It should be noted that the water balance is dominated by rain and evaporation, and the model simulations were cross-validated between MATLAB and COMSOL. The model conserves water volume, the celerity gradients are very small, and the volume flow is very slow and irrotational except at river mouths. Numerical experiments show that the single outflow can be modelled by a simple linear control law responding only to mean water level, except in a few instances. Experiments with tracer input in rivers show very slow dispersion of the tracer, a result of the slow mean velocities, in turn caused by the near-balance of rain with evaporation. The numerical and hydrodynamical model can evaluate the effects of wind stress exerted on the lake surface, which impacts the lake water level. The model can also evaluate the effects of the expected climate change, as manifested in changes to rainfall over the catchment area of Lake Victoria in the future.
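The "simple linear control law responding only to mean water level" can be illustrated with a lumped water balance, stripped of all spatial detail. This is a zero-dimensional sketch with invented forcing values, not the COMSOL model; the area is of the order of the lake surface and the outflow coefficient k is an assumption:

```python
def simulate_level(h0, rain, evap, inflow, k, h_ref, area, dt=1.0):
    """Explicit Euler integration of a lumped lake water balance:
    dh/dt = rain - evap + inflow/area - k * (h - h_ref),
    where the last term is a linear outflow law driven by mean level."""
    h = level_now = h0
    levels = []
    for r, e, q in zip(rain, evap, inflow):
        h += dt * (r - e + q / area - k * (h - h_ref))
        levels.append(h)
    return levels

# Hypothetical monthly forcing (metres per month over the lake surface)
n = 24
levels = simulate_level(h0=11.0, rain=[0.10] * n, evap=[0.09] * n,
                        inflow=[1.4e9] * n, k=0.05, h_ref=11.0, area=6.88e10)
print(levels[-1])   # the level relaxes toward the equilibrium set by the balance
```

With constant forcing the level rises monotonically toward the equilibrium h_ref + (net input)/k, which is the behaviour a linear outflow law enforces.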

Keywords: bathymetry, lake flow and steady state analysis, water level validation and concentration, wind stress

Procedia PDF Downloads 227
5551 Burnout and Salivary Cortisol Among Laboratory Personnel in Klang Valley, Malaysia During COVID-19 Pandemic

Authors: Maznieda Mahjom, Rohaida Ismail, Masita Arip, Mohd Shaiful Azlan, Nor’Ashikin Othman, Hafizah Abdullah, Nor Zahrin Hasran, Joshita Jothimanickam, Syaqilah Shawaluddin, Nadia Mohamad, Raheel Nazakat, Tuan Mohd Amin, Mizanurfakhri Ghazali, Rosmanajihah Mat Lazim

Abstract:

The COVID-19 outbreak has been particularly detrimental to everyone's mental health and has left a long, devastating crisis in the healthcare sector. The daily increase in COVID-19 cases and close contacts necessitated the testing of large numbers of samples, increasing the workload and burden on laboratory personnel. This study aims to determine the prevalence of personal-, work- and client-related burnout and to measure the concentration of salivary cortisol among laboratory personnel in the main laboratories in the Klang Valley, Malaysia. This cross-sectional study was conducted in late 2021 and recruited a total of 404 respondents from three laboratories in the Klang Valley, Malaysia. The level of burnout was assessed using the Copenhagen Burnout Inventory (CBI), comprising three sub-dimensions of personal-, work- and client-related burnout; a score of 50% or above indicated possible burnout. Meanwhile, salivary cortisol was measured using a competitive enzyme immunoassay kit (Salimetrics, State College, PA, USA). Normal levels of salivary cortisol concentration in adults are within 0.094 to 1.551 μg/dl in the morning and from non-detectable to 0.359 μg/dl in the evening. The prevalence of personal-, work- and client-related burnout among laboratory personnel was 36.1%, 17.8%, and 7.2%, respectively. Abnormal morning and evening cortisol concentrations were recorded in 29.5% and 21.8% of respondents, respectively, excluding 6.9%-7.4% missing data, while the IgA level was normal for most of the respondents (95.53%). Laboratory personnel were at risk of suffering burnout during the COVID-19 pandemic. Thus, mental health programs need to be addressed at the department and hospital level by regularly screening healthcare workers and designing intervention programs. It is also vital to improve the coping skills of laboratory personnel by raising awareness of good coping-skill techniques. The training must be delivered in an innovative way to ensure that laboratory personnel can internalise the techniques and practise them in real life.

Keywords: burnout, COVID-19, laboratory personnel, salivary cortisol

Procedia PDF Downloads 69
5550 A Neural Network Classifier for Estimation of the Degree of Infestation by Late Blight on Tomato Leaves

Authors: Gizelle K. Vianna, Gabriel V. Cunha, Gustavo S. Oliveira

Abstract:

Foliage diseases in plants can cause a reduction in both the quality and quantity of agricultural production. Intelligent detection of plant diseases is an essential research topic, as it may help monitor large fields of crops by automatically detecting the symptoms of foliage diseases. This work investigates ways to recognize late blight disease from the analysis of digital images of tomatoes, collected directly in the field. A pair of multilayer perceptron neural networks analyzes the digital images, using data from both the RGB and HSL color models, and classifies each image pixel. One neural network is responsible for the identification of healthy regions of the tomato leaf, while the other identifies the injured regions. The outputs of both networks are combined to generate the final classification of each pixel, and the pixel classes are used to repaint the original tomato images with a color representation that highlights the injuries on the plant. The new images have only green, red, or black pixels, depending on whether they came from healthy portions of the leaf, injured portions, or the background of the image, respectively. The system presented an accuracy of 97% in the detection and estimation of the level of damage caused by late blight on tomato leaves.
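The per-pixel RGB-plus-HSL feature extraction and the three-way repainting can be sketched with the standard library. The hand-set thresholds below stand in for the trained networks and are purely illustrative (note that Python's `colorsys` returns the HSL channels in H, L, S order):

```python
import colorsys

def classify_pixel(r, g, b):
    """Toy per-pixel rule in the spirit of the paper: combine RGB and HSL
    cues to label a pixel healthy (green leaf), injured, or background.
    The thresholds are illustrative, not the trained networks' decision."""
    h, l, s = colorsys.rgb_to_hls(r / 255.0, g / 255.0, b / 255.0)
    if l < 0.08:                       # very dark pixel: background
        return "background"
    if 0.20 < h < 0.45 and s > 0.25:   # hue near green, saturated: healthy tissue
        return "healthy"
    return "injured"

def repaint(label):
    """Repaint each class with the highlight colours used in the paper."""
    return {"healthy": (0, 255, 0), "injured": (255, 0, 0),
            "background": (0, 0, 0)}[label]

print(classify_pixel(40, 180, 60),    # greenish pixel
      classify_pixel(150, 90, 60),    # brownish pixel
      classify_pixel(5, 5, 5))        # near-black pixel
```

Iterating this over every pixel and writing `repaint(label)` back produces the green/red/black diagnostic image described above.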

Keywords: artificial neural networks, digital image processing, pattern recognition, phytosanitary

Procedia PDF Downloads 327
5549 Computational Fluid Dynamics Model of Various Types of Rocket Engine Nozzles

Authors: Konrad Pietrykowski, Michal Bialy, Pawel Karpinski, Radoslaw Maczka

Abstract:

The nozzle is the element of a rocket engine in which the potential energy of the gases generated during combustion is converted into the kinetic energy of the gas stream. The design parameters of the nozzle have a decisive influence on the ballistic characteristics of the engine. Designing a nozzle assembly is therefore one of the most critical stages in developing a rocket engine. The paper presents the results of simulations of three types of rocket propulsion nozzles, which differ in shape: a conical nozzle, a bell-type nozzle with a conical supersonic part, and a bell-type nozzle. Calculations were made using computational fluid dynamics (CFD) in the ANSYS Fluent software. The results are presented in the form of pressure, velocity, and turbulence kinetic energy distributions in the longitudinal section; the courses of these values along the nozzles are also presented. The results show that the conical nozzle generates strong turbulence in the critical section, which negatively affects the flow of the working medium. In the case of the bell nozzle, the wall contour eliminates flow disturbances in the critical section, which reduces the probability of waves forming before or after the trailing edge. The most sophisticated design is the bell-type nozzle, which maximizes performance without adding extra weight; due to these advantages, it can be used as a starter and auxiliary engine nozzle. The project/research was financed in the framework of the project Lublin University of Technology-Regional Excellence Initiative, funded by the Polish Ministry of Science and Higher Education (contract no. 030/RID/2018/19).

Keywords: computational fluid dynamics, nozzle, rocket engine, supersonic flow

Procedia PDF Downloads 158
5548 Despiking of Turbulent Flow Data in Gravel Bed Stream

Authors: Ratul Das

Abstract:

The present experimental study examines the decontamination of instantaneous velocity fluctuations captured by an Acoustic Doppler Velocimeter (ADV) in gravel-bed streams, in order to ascertain near-bed turbulence at low Reynolds numbers. Interference between incident and reflected pulses produces spikes in the ADV data, especially in the near-bed flow zone, and filtering the data is therefore essential. A Nortek Vectrino four-receiver ADV probe was used to capture the instantaneous three-dimensional velocity fluctuations over a non-cohesive bed. A spike removal algorithm based on the acceleration threshold method was applied to assess the bed roughness and its influence on velocity fluctuations and velocity power spectra in the carrier fluid. A best combination of velocity threshold (VT) and acceleration threshold (AT) is proposed, for which the velocity power spectra of the despiked signals show a satisfactory fit with the Kolmogorov “–5/3 scaling law” in the inertial sub-range. Also, the velocity distributions below the roughness crest level fairly follow a third-degree polynomial series.
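The acceleration-threshold idea can be sketched directly: a sample is declared a spike when the point-to-point acceleration exceeds a multiple of gravitational acceleration. The record, sampling rate, threshold multiplier, and the replace-with-previous-valid-sample strategy below are illustrative assumptions, not the study's exact VT/AT combination:

```python
def despike(u, dt, g=9.81, lam=1.5):
    """Acceleration-threshold despiking of an ADV velocity record (m/s):
    a sample is a spike if |u[i] - previous valid sample| / dt exceeds
    lam * g; spikes are replaced by the previous valid sample."""
    clean = [u[0]]
    flagged = []
    for i in range(1, len(u)):
        accel = abs(u[i] - clean[-1]) / dt
        if accel > lam * g:
            flagged.append(i)
            clean.append(clean[-1])   # simple replacement strategy
        else:
            clean.append(u[i])
    return clean, flagged

# 25 Hz record with two Doppler spikes injected at indices 12 and 25
u = [0.30 + 0.01 * ((i * 7) % 5) for i in range(40)]
u[12], u[25] = 1.8, -1.2
clean, flagged = despike(u, dt=0.04)
print(flagged)
```

In practice the replacement step is usually an interpolation rather than a hold, and the velocity threshold is applied alongside the acceleration one, but the detection logic is as above.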

Keywords: acoustic doppler velocimeter, gravel-bed, spike removal, reynolds shear stress, near-bed turbulence, velocity power spectra

Procedia PDF Downloads 299
5547 Fusion Models for Cyber Threat Defense: Integrating Clustering, Random Forests, and Support Vector Machines Against Windows Malware

Authors: Azita Ramezani, Atousa Ramezani

Abstract:

In the ever-escalating landscape of Windows malware, the necessity for pioneering defense strategies has become undeniable. This study introduces an avant-garde approach fusing the capabilities of clustering, random forests, and support vector machines (SVM) to combat the intricate web of cyber threats. Our fusion model achieves a staggering accuracy of 98.67% and an equally formidable F1-score of 98.68%, a testament to its effectiveness in the realm of Windows malware defense. By deciphering the intricate patterns within malicious code, our model not only raises the bar for detection precision but also redefines the paradigm of cybersecurity preparedness. This breakthrough underscores the potential embedded in the fusion of diverse analytical methodologies and signals a paradigm shift in fortifying against the relentless evolution of malicious Windows threats. As we traverse the dynamic cybersecurity terrain, this research serves as a beacon illuminating the path toward a resilient future where innovative fusion models stand at the forefront of cyber threat defense.
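One simple way to fuse several detectors, and to compute the F1-score the abstract reports, is majority voting over per-sample labels. This is a generic sketch, not the paper's fusion architecture; the three prediction vectors below stand in for a cluster-based rule, a random forest, and an SVM, and the labels are invented:

```python
from collections import Counter

def majority_vote(predictions):
    """Fuse per-sample labels from several detectors by simple majority vote.
    `predictions` is a list of equal-length label lists, one per detector."""
    return [Counter(votes).most_common(1)[0][0] for votes in zip(*predictions)]

def f1_score(y_true, y_pred, positive=1):
    """F1 = 2PR / (P + R) for the 'malware' class."""
    tp = sum(t == positive and p == positive for t, p in zip(y_true, y_pred))
    fp = sum(t != positive and p == positive for t, p in zip(y_true, y_pred))
    fn = sum(t == positive and p != positive for t, p in zip(y_true, y_pred))
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return 2 * precision * recall / (precision + recall) if precision + recall else 0.0

y_true  = [1, 1, 0, 0, 1, 0, 1, 0]   # 1 = malware, 0 = benign (toy labels)
cluster = [1, 0, 0, 0, 1, 0, 1, 1]
forest  = [1, 1, 0, 0, 1, 0, 0, 0]
svm     = [1, 1, 0, 1, 1, 0, 1, 0]
fused = majority_vote([cluster, forest, svm])
print(fused, f1_score(y_true, fused))
```

In this toy example each individual detector makes one mistake but the vote corrects all of them, which is exactly the effect a fusion model seeks.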

Keywords: fusion models, cyber threat defense, windows malware, clustering, random forests, support vector machines (SVM), accuracy, f1-score, cybersecurity, malicious code detection

Procedia PDF Downloads 71
5546 Atypical Clinical Presentation of Wallenberg Syndrome from Acute Right Lateral Medullary Infarct in a 37 Year Old Female

Authors: Sweta Das

Abstract:

This case report highlights an atypical clinical manifestation of ipsilateral head, neck, shoulder, and eye pain with erythema and edema of the right eyelid and conjunctiva, along with the typical presentation of right-sided Horner’s syndrome, in a 37-year-old female who was correctly diagnosed with Wallenberg syndrome thanks to a collaborative effort between the optometry, primary care, emergency, and neurology specialties. Horner’s syndrome is present in 75% of patients with Wallenberg syndrome. Given that patients with Wallenberg syndrome often first present to the emergency department with a vast variety of non-specific symptoms and a normal MRI, a delayed diagnosis is common. Therefore, a collaborative effort between the emergency department, optometry, primary care, and neurology is essential for diagnosing Wallenberg’s syndrome correctly and in a timely manner.

Keywords: horner's syndrome, stroke, wallenberg syndrome, lateropulsion of eyes

Procedia PDF Downloads 61
5545 Bridging Urban Planning and Environmental Conservation: A Regional Analysis of Northern and Central Kolkata

Authors: Tanmay Bisen, Aastha Shayla

Abstract:

This study introduces an advanced approach to tree canopy detection in urban environments and a regional analysis of Northern and Central Kolkata that delves into the intricate relationship between urban development and environmental conservation. Leveraging high-resolution drone imagery from diverse urban green spaces in Kolkata, we fine-tuned the deep forest model to enhance its precision and accuracy. Our results, characterized by an impressive Intersection over Union (IoU) score of 0.90 and a mean average precision (mAP) of 0.87, underscore the model's robustness in detecting and classifying tree crowns amidst the complexities of aerial imagery. This research not only emphasizes the importance of model customization for specific datasets but also highlights the potential of drone-based remote sensing in urban forestry studies. The study investigates the spatial distribution, density, and environmental impact of trees in Northern and Central Kolkata. The findings underscore the significance of urban green spaces in metropolitan cities, emphasizing the need for sustainable urban planning that integrates green infrastructure for ecological balance and human well-being.
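The IoU metric quoted above has a short, standard definition for axis-aligned boxes: intersection area divided by union area. This sketch computes it for a hypothetical predicted crown versus its annotation (the coordinates are invented for illustration):

```python
def iou(box_a, box_b):
    """Intersection over Union of two axis-aligned boxes (x1, y1, x2, y2)."""
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    # Overlap rectangle; a non-positive width or height means no overlap.
    ix1, iy1 = max(ax1, bx1), max(ay1, by1)
    ix2, iy2 = min(ax2, bx2), min(ay2, by2)
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (ax2 - ax1) * (ay2 - ay1)
    area_b = (bx2 - bx1) * (by2 - by1)
    return inter / (area_a + area_b - inter)

# Predicted vs. annotated tree crown, in pixel coordinates
pred  = (10.0, 10.0, 50.0, 50.0)
truth = (20.0, 20.0, 60.0, 60.0)
print(iou(pred, truth))
```

A detection is usually counted as correct when its IoU with a ground-truth crown exceeds a threshold such as 0.5; mAP then averages precision over those matches across confidence levels.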

Keywords: urban greenery, advanced spatial distribution analysis, drone imagery, deep learning, tree detection

Procedia PDF Downloads 57
5544 A General Variable Neighborhood Search Algorithm to Minimize Makespan of the Distributed Permutation Flowshop Scheduling Problem

Authors: G. M. Komaki, S. Mobin, E. Teymourian, S. Sheikh

Abstract:

This paper addresses minimizing the makespan of the distributed permutation flowshop scheduling problem. In this problem, there are several parallel identical factories, or flowshops, each with a series of identical machines. Each job must be allocated to one of the factories, and all of the operations of a job must be performed in the allocated factory. This problem has recently gained attention, and due to its NP-hard nature, metaheuristic algorithms have been proposed to tackle it. The majority of the proposed algorithms require long computational times, which is their main drawback. In this study, a general variable neighborhood search (GVNS) algorithm is proposed into which several time-saving schemes have been incorporated. The GVNS also uses a sophisticated method to change the shaking (perturbation) procedure depending on the progress of the incumbent solution, to prevent stagnation of the search. The performance of the proposed algorithm is compared to state-of-the-art algorithms on standard benchmark instances.
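The objective any such search evaluates is the distributed makespan: each factory runs a permutation flowshop, and the overall makespan is the worst factory's. This is a minimal evaluation sketch with a made-up 4-job, 2-machine instance, not the GVNS itself:

```python
def makespan(sequence, proc):
    """Completion time of the last job in `sequence` on the last machine of a
    permutation flowshop; proc[j][m] is the time of job j on machine m."""
    n_machines = len(proc[0])
    finish = [0.0] * n_machines          # finish time of the last job on each machine
    for j in sequence:
        for m in range(n_machines):
            # A job starts on machine m when both the machine is free and
            # the job has finished on machine m - 1.
            start = max(finish[m], finish[m - 1] if m else 0.0)
            finish[m] = start + proc[j][m]
    return finish[-1]

def distributed_makespan(assignment, proc):
    """Makespan of the distributed problem: the maximum makespan over the
    identical factories, each processing its own job sequence."""
    return max(makespan(seq, proc) for seq in assignment if seq)

proc = [[3, 2], [2, 4], [4, 1], [1, 3]]      # 4 jobs, 2 machines per factory
assignment = [[0, 2], [1, 3]]                # jobs split across 2 factories
print(distributed_makespan(assignment, proc))
```

A GVNS explores neighbourhoods of `assignment` (moving a job between factories, swapping positions within a sequence) and keeps the candidate with the smaller value of this function.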

Keywords: distributed permutation flow shop, scheduling, makespan, general variable neighborhood search algorithm

Procedia PDF Downloads 354
5543 Analysing the Interactive Effects of Factors Influencing Sand Production on Drawdown Time in High Viscosity Reservoirs

Authors: Gerald Gwamba, Bo Zhou, Yajun Song, Dong Changyin

Abstract:

The challenges that sand production presents to the oil and gas industry, particularly when working in poorly consolidated reservoirs, cannot be overstated. From restricting production to blocking production tubing, sand production raises costs by elevating the cost of servicing production equipment over time. Production in reservoirs that present high viscosities, flow rates, cementation, clay content, and fine sand content is even more complex and challenging. As opposed to one-factor-at-a-time testing, investigating the interactive effects arising from a combination of several factors offers increased reliability of results as well as better representation of actual field conditions. It is thus paramount to investigate the conditions leading to the onset of sanding during production to ensure the future sustainability of hydrocarbon production operations under viscous conditions. We adopt Design of Experiments (DOE), using Taguchi factorial designs, to analyse the most significant interactive effects on sanding, and we propose an optimized regression model to predict the drawdown time at sand production. The results obtained underscore that varying (high and low) levels of reservoir viscosity, flow rate, cementation, clay, and fine sand content have a resulting impact on sand production. The only significant interactive effect recorded arises from the interaction BD (fine sand content and flow rate), while the main effects include fluid viscosity and cementation, with percentage significances of 31.3%, 37.76%, and 30.94%, respectively. The drawdown time model presented could be useful for predicting the time to reach the maximum drawdown pressure during the onset of sand production under viscous conditions.
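How main and interaction effects are extracted from a two-level factorial design can be shown on a toy 2x2 example. The run matrix and responses below are hypothetical, using only two of the study's factors; they illustrate the arithmetic, not the study's Taguchi array or its significance values:

```python
def main_effect(levels, responses, factor):
    """Main effect of one factor in a two-level factorial design:
    mean response at the high level minus mean at the low level.
    levels[i][factor] is +1 or -1 for run i."""
    high = [y for run, y in zip(levels, responses) if run[factor] == +1]
    low  = [y for run, y in zip(levels, responses) if run[factor] == -1]
    return sum(high) / len(high) - sum(low) / len(low)

def interaction_effect(levels, responses, f1, f2):
    """Two-factor interaction: the main effect of the product column f1 * f2."""
    prod = [[run[f1] * run[f2]] for run in levels]
    return main_effect(prod, responses, 0)

# Hypothetical 2^2 design: factor 0 = fine sand content (B), factor 1 = flow rate (D)
levels    = [[-1, -1], [+1, -1], [-1, +1], [+1, +1]]
responses = [40.0, 30.0, 28.0, 10.0]     # drawdown time at sanding (min), invented
print(main_effect(levels, responses, 0),
      main_effect(levels, responses, 1),
      interaction_effect(levels, responses, 0, 1))
```

A non-zero interaction effect means the influence of one factor depends on the level of the other, which is exactly what the BD term captures in the study.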

Keywords: factorial designs, DOE optimization, sand production prediction, drawdown time, regression model

Procedia PDF Downloads 152
5542 Effects of Post-sampling Conditions on Ethanol and Ethyl Glucuronide Formation in the Urine of Diabetes Patients

Authors: Hussam Ashwi, Magbool Oraiby, Ali Muyidi, Hamad Al-Oufi, Mohammed Al-Oufi, Adel Al-Juhani, Salman Al-Zemaa, Saeed Al-Shahrani, Amal Abuallah, Wedad Sherwani, Mohammed Alattas, Ibraheem Attafi

Abstract:

Ethanol must be accurately identified and quantified to establish its use and contribution in criminal cases and forensic medicine. In some situations, it may be necessary to reanalyze an old specimen; it is therefore essential to understand the effect of storage conditions and how long the result of a reanalyzed specimen remains reliable and reproducible. Additionally, ethanol can be produced via multiple in vivo and in vitro processes, particularly in diabetic patients, and the results can be affected by storage conditions and time. Various factors should therefore be considered in order to distinguish between in vivo and in vitro alcohol generation in the urine samples of diabetic patients. This study identifies and quantifies ethanol and ethyl glucuronide (EtG) in diabetic patients' urine samples stored under two different conditions over time. Ethanol levels were determined using gas chromatography-headspace (GC-HS), and EtG levels were determined using an immunoassay (RANDOX) technique. Ten urine specimens were collected in standard containers, and each specimen was split between two containers. The specimens were divided into two groups: those kept at room temperature (15-25 °C) and those kept refrigerated (2-8 °C). Ethanol and EtG levels were determined serially over a two-week period. Initially, none of the specimens tested positive for ethanol or EtG. At room temperature, 7 and 14 days after collection, the average ethanol concentration increased from 1.7 mg/dL to 2 mg/dL, and the average EtG concentration increased from 108 ng/mL to 186 ng/mL. At 2-8 °C, the average ethanol concentrations were 0.4 and 0.5 mg/dL, and the average EtG concentrations were 138 and 124 ng/mL, seven and fourteen days after collection, respectively. At 14 days post-collection, levels in refrigerated specimens were considerably lower than in those stored at room temperature.
Room-temperature storage thus produced a considerable increase in EtG concentrations (14-day range 0-186 ng/mL), even though all specimens initially tested negative. Because EtG can be produced after sample collection, it is not a reliable indicator of recent alcohol consumption, given the possibility of misleading EtG results due to in vitro EtG production in the urine of diabetic patients.

Keywords: ethyl glucuronide, ethanol, forensic toxicology, diabetic

Procedia PDF Downloads 123
5541 An Inviscid Compressible Flow Solver Based on Unstructured OpenFOAM Mesh Format

Authors: Utkan Caliskan

Abstract:

Two numerical codes based on the finite volume method are developed to solve the compressible Euler equations and simulate the flow through a forward-facing step channel. Both algorithms use the AUSM+-up (Advection Upstream Splitting Method) scheme for flux splitting and a two-stage Runge-Kutta scheme for time stepping. The flux calculations distinguish the two: one, based on the OpenFOAM mesh format, is called the 'face-based' algorithm, while the basic one is called the 'element-based' algorithm. The face-based algorithm avoids redundant flux computations and is also more flexible with hybrid grids. Moreover, some of OpenFOAM's preprocessing utilities can be used on the mesh. Parallelization of the face-based algorithm, for which atomic operations are needed due to the shared memory model, is also presented. For several mesh sizes, a 2.13x speed-up is obtained with the face-based approach over the element-based approach.
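The face-based idea can be illustrated in one dimension: each interior face flux is computed once and scattered to its two neighbouring cells, rather than each cell recomputing fluxes on all of its faces. The sketch below, an assumption-laden toy rather than the paper's solver, uses a simple upwind flux in place of AUSM+-up and the two-stage Runge-Kutta scheme for time stepping.

```python
# 1D linear advection u_t + a*u_x = 0 on a uniform mesh: the 'face-based'
# residual loop visits each interior face once and adds/subtracts the flux
# to the two adjacent cells.
def residual(u, a, dx):
    res = [0.0] * len(u)
    for f in range(1, len(u)):                        # interior faces only
        flux = a * u[f - 1] if a > 0 else a * u[f]    # upwind flux, computed once
        res[f - 1] -= flux / dx                       # leaves the left cell
        res[f] += flux / dx                           # enters the right cell
    return res

def rk2_step(u, a, dx, dt):
    """Two-stage Runge-Kutta (Heun) update."""
    r1 = residual(u, a, dx)
    u1 = [ui + dt * ri for ui, ri in zip(u, r1)]      # predictor stage
    r2 = residual(u1, a, dx)
    return [0.5 * (ui + u1i + dt * r2i)               # corrector average
            for ui, u1i, r2i in zip(u, u1, r2)]

state = rk2_step([1.0, 0.0, 0.0], a=1.0, dx=1.0, dt=0.5)
```

In the element-based variant, each of the two cells sharing a face would evaluate that face's flux independently, doubling the flux work; avoiding that duplication is the source of the reported speed-up.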

Keywords: cell centered finite volume method, compressible Euler equations, OpenFOAM mesh format, OpenMP

Procedia PDF Downloads 319
5540 Magnetohydrodynamics (MHD) Boundary Layer Flow Past a Stretching Plate with Heat Transfer and Viscous Dissipation

Authors: Jiya Mohammed, Tsadu Shuaib, Yusuf Abdulhakeem

Abstract:

The research work focuses on MHD boundary layer flow past a stretching plate with heat transfer and viscous dissipation. The non-linear momentum and energy equations are transformed into ordinary differential equations using a similarity transformation, and the resulting equations are solved using the Adomian Decomposition Method (ADM). An attempt has been made to show the potential and wide range of application of the Adomian decomposition method, in comparison with previous work, in solving heat transfer problems. A Padé approximant ([11, 11] at η = 11) is used to handle the boundary condition at infinity. The results are compared with those of a numerical technique. A vivid conclusion can be drawn from the results that ADM provides highly precise numerical solutions for non-linear differential equations. The results were accurate especially for η ≤ 4. A general equation in terms of the Eckert number (Ec), Prandtl number (Pr), and magnetic parameter is derived and used to investigate the velocity and temperature profiles in the boundary layer.

Keywords: MHD, Adomian decomposition, boundary layer, viscous dissipation

Procedia PDF Downloads 551
5539 Efficient GIS Based Public Health System for Disease Prevention

Authors: K. M. G. T. R. Waidyarathna, S. M. Vidanagamachchi

Abstract:

The public health system in Sri Lanka has a satisfactorily complete information flow when compared to systems in other developing countries. The availability of a good health information system has contributed immensely to achieving health indices in line with those of developed countries such as the US and UK. The health information flow is, however, currently completely paper-based. In Sri Lanka, fields such as banking, accounting, and engineering have incorporated information and communication technology to the same extent observed in other countries. The field of medicine has lagged behind those fields throughout the world, mainly due to its complexity and to issues such as privacy, confidentiality, and a lack of people with knowledge in both Information Technology (IT) and medicine. Sri Lanka's situation is much worse, and the gap is rapidly widening given the large IT initiatives undertaken through public-private partnerships in other countries. The major goal of the framework is to help minimize the spread of diseases. To achieve this, a web-based framework with web mapping should be implemented for this application domain. The aim of this GIS-based public health system is a secure, flexible, easy-to-maintain environment for creating and maintaining public health records that is easy for the relevant parties to interact with.

Keywords: DHIS2, GIS, public health, Sri Lanka

Procedia PDF Downloads 564
5538 On the Use of Machine Learning for Tamper Detection

Authors: Basel Halak, Christian Hall, Syed Abdul Father, Nelson Chow Wai Kit, Ruwaydah Widaad Raymode

Abstract:

The attack surface of computing devices is becoming very sophisticated, driven by the sheer increase in interconnected devices, expected to reach 50 billion by 2025, which makes it easier for adversaries to gain direct access and perform well-known physical attacks. The impact of the increased security vulnerability of electronic systems is exacerbated for devices that are part of critical infrastructure or used in military applications, where the likelihood of being targeted is very high. This continuously evolving landscape of security threats calls for a new generation of defense methods that are equally effective and adaptive. This paper proposes an intelligent defense mechanism to protect against physical tampering. It consists of a tamper detection system enhanced with machine learning capabilities, which allows it to recognize normal operating conditions, classify known physical attacks, and identify new types of malicious behavior. A prototype of the proposed system has been implemented, and its functionality has been successfully verified for two types of normal operating conditions and four further forms of physical attack. In addition, a systematic threat modeling analysis and security validation was carried out, which indicated that the proposed solution provides better protection against threats including information leakage, loss of data, and disruption of operation.
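One simple way to realize the three behaviours the abstract describes (recognize normal operation, classify known attacks, flag unknown ones) is a nearest-centroid classifier with a distance cutoff. The sketch below is an illustrative assumption, not the paper's model: the feature vectors, labels, and threshold are all invented.

```python
import math

# Hypothetical training features per condition, e.g. (rail voltage V,
# enclosure temperature degC) readings from the tamper sensors.
TRAINING = {
    "normal":       [(3.30, 25.0), (3.31, 24.8), (3.29, 25.2)],
    "probe_attack": [(2.10, 25.1), (2.05, 24.9), (2.15, 25.0)],
}

def centroid(points):
    n = len(points)
    return tuple(sum(p[i] for p in points) / n for i in range(len(points[0])))

CENTROIDS = {label: centroid(pts) for label, pts in TRAINING.items()}

def classify(sample, threshold=1.0):
    """Return the nearest known condition, or flag an unseen behaviour
    when the sample is far from every learned centroid."""
    label, dist = min(
        ((lbl, math.dist(sample, c)) for lbl, c in CENTROIDS.items()),
        key=lambda t: t[1],
    )
    return label if dist <= threshold else "unknown_attack"
```

The distance cutoff is what lets the system report a *new* type of malicious behaviour instead of forcing every sample into a known class.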

Keywords: anti-tamper, hardware, machine learning, physical security, embedded devices, IoT

Procedia PDF Downloads 153
5537 Re-Evaluation of Field X Located in Northern Lake Albert Basin to Refine the Structural Interpretation

Authors: Calorine Twebaze, Jesca Balinga

Abstract:

Field X is located on the eastern shores of Lake Albert, Uganda, on the rift flank, where the gross sedimentary fill is typically less than 2,000 m. The field was discovered in 2006 and encountered about 20.4 m of net pay across three (3) stratigraphic intervals within the discovery well. The field covers an area of 3 km2, with the structural configuration comprising a 3-way dip-closed hanging-wall anticline that seals against the basement to the southeast along the bounding fault. Field X had been mapped on reprocessed 3D seismic data, originally acquired in 2007 and reprocessed in 2013. The seismic data quality is good across the field, and the reprocessing reduced the uncertainty in the location of the bounding fault and enhanced the lateral continuity of reservoir reflectors. The current study re-evaluated Field X to refine the fault interpretation and understand the structural uncertainties associated with the field. The seismic data and three (3) well datasets were used during the study. The evaluation followed standard workflows using Petrel software and structural attribute analysis, spanning seismic-well tie, structural interpretation, and structural uncertainty analysis. Analysis of the well ties generated for the three wells provided a geophysical interpretation consistent with the geological picks. The generated time-depth curves showed a general increase in velocity with burial depth; the separation in curve trends observed below 1,100 m was mainly attributed to minimal lateral variation in velocity between the wells. In addition to attribute analysis, three velocity modeling approaches were evaluated: the time-depth curve, V0 + kZ, and average velocity methods. The generated models were calibrated at well locations using well tops to obtain the best velocity model for Field X.
The time-depth method produced the most reliable depth surfaces, with good structural coherence between the TWT and depth maps and a minimal error of 2 to 5 m at well locations. Both the NNE-SSW rift border fault and the minor faults in the existing interpretation were re-evaluated. The new interpretation, moreover, delineated an E-W trending fault in the northern part of the field that had not been interpreted before. The fault was interpreted at all stratigraphic levels and thus propagates from the basement to the surface and is active today. It was also noted that the field as a whole is only lightly faulted, with more faults in its deeper part. The major structural uncertainties defined included: 1) the time horizons, due to reduced data quality especially in the deeper parts of the structure, for which an error equal to one-third of the reflection time thickness was assumed; 2) check-shot analysis showed varying velocities within the wells and thus varying depth values for each well; and 3) the very few average velocity points available from the limited wells produced a pessimistic average velocity model.
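The V0 + kZ method mentioned above has a closed form: with instantaneous velocity V(z) = V0 + k*z, integrating dz/dt = V(z)/2 over two-way time t gives z(t) = (V0/k)*(exp(k*t/2) - 1). The sketch below uses invented V0 and k values for illustration; in practice they are calibrated against well tops, as in the 2 to 5 m QC check the study describes.

```python
import math

def depth_from_twt(t_s, v0=1600.0, k=0.6):
    """Depth (m) below datum at two-way time t_s (s), assuming the linear
    velocity law V(z) = v0 + k*z. v0 and k here are illustrative only."""
    return (v0 / k) * (math.exp(k * t_s / 2.0) - 1.0)

def well_misfit(t_s, well_top_m, v0=1600.0, k=0.6):
    """Calibration error at a well location: model depth minus well-top depth."""
    return depth_from_twt(t_s, v0, k) - well_top_m
```

Fitting v0 and k so that `well_misfit` stays within a few metres at every well is the calibration step that selects the best velocity model.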

Keywords: 3D seismic data interpretation, structural uncertainties, attribute analysis, velocity modelling approaches

Procedia PDF Downloads 59
5536 Multiphysics Coupling Between Hypersonic Reactive Flow and Thermal Structural Analysis with Ablation for TPS of Space Launchers

Authors: Margarita Dufresne

Abstract:

This study is devoted to the development of a TPS for small reusable space launchers. We have used the SIRIUS design for the S1 prototype. Multiphysics coupling of hypersonic reactive flow with thermo-structural analysis, with and without ablation, is provided by STAR-CCM+ and COMSOL Multiphysics, and by FASTRAN and ACE+. Flow around hypersonic flight vehicles involves the interaction of multiple shocks and the interaction of shocks with boundary layers. These interactions can have a very strong impact on the aeroheating experienced by the flight vehicle. A real-gas treatment covers a gas in equilibrium or non-equilibrium. The Mach number ranges from 5 to 10 for first-stage flight. The goals of this effort are to validate the iterative coupling of the hypersonic physics models in STAR-CCM+ and FASTRAN with COMSOL Multiphysics and ACE+. COMSOL Multiphysics and ACE+ are used for the thermal structural analysis to simulate conjugate heat transfer, with conduction, free convection, and radiation driven by the heat flux from the hypersonic flow. The reactive simulations involve an air chemistry model of five species: N, N2, NO, O, and O2. Seventeen chemical reactions, involving dissociation and recombination, are included in the Dunn/Kang mechanism. Forward reaction rate coefficients based on a modified Arrhenius equation are computed for each reaction. The algorithm employed to solve the reactive equations uses a second-order numerical scheme obtained by a MUSCL (Monotone Upstream-centered Schemes for Conservation Laws) extrapolation process in the structured case, with the coupled inviscid flux computed by AUSM+ flux-vector splitting. The MUSCL third-order scheme in STAR-CCM+ provides third-order spatial accuracy, except in the vicinity of strong shocks, where, due to limiting, the spatial accuracy is reduced to second order, and provides improved (i.e., reduced) dissipation compared to the second-order discretization scheme.
The initial unstructured mesh is refined using a pressure-gradient technique for the shock/shock interaction test case. The NASA-suggested turbulence model is k-omega SST with a1 = 0.355 and the QCR (quadratic) constitutive option. We specified k and omega explicitly in the initial conditions and in regions: k = 1E-6 * Uinf^2 and omega = 5 * Uinf / (mean aerodynamic chord or characteristic length). We put into practice modelling practices for hypersonic flow, such as an automatically coupled solver, adaptive mesh refinement to capture and refine the shock front, and the use of the advancing-layer mesher with a larger prism layer thickness to capture the shock front on blunt surfaces. The temperature ranges from 300 K to 30,000 K and the pressure between 1e-4 and 100 atm. FASTRAN and ACE+ are coupled to provide a high-fidelity solution for hot hypersonic reactive flow with conjugate heat transfer. The results of both approaches agree with the CIRCA wind tunnel results.
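The modified Arrhenius form used for the forward rate coefficients above is k_f(T) = A * T**n * exp(-Ta / T), with a pre-exponential factor A, temperature exponent n, and activation temperature Ta per reaction. The constants below are illustrative placeholders, not the actual Dunn/Kang values.

```python
import math

def arrhenius(T, A, n, Ta):
    """Modified Arrhenius forward rate coefficient at temperature T (K):
    k_f = A * T**n * exp(-Ta / T). A, n, Ta are per-reaction constants."""
    return A * T**n * math.exp(-Ta / T)

# Illustrative dissociation-like parameters (NOT the Dunn/Kang constants):
# the rate rises steeply with temperature once T approaches Ta.
k_5000 = arrhenius(5000.0, A=1.0e15, n=-1.0, Ta=59500.0)
k_10000 = arrhenius(10000.0, A=1.0e15, n=-1.0, Ta=59500.0)
```

Evaluating one such coefficient per reaction at the local cell temperature is what feeds the seventeen-reaction source terms in the species transport equations.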

Keywords: hypersonic, first stage, high speed compressible flow, shock wave, aerodynamic heating, conjugate heat transfer, conduction, free convection, radiation, fastran, ace+, comsol multiphysics, star-ccm+, thermal protection system (tps), space launcher, wind tunnel

Procedia PDF Downloads 71
5535 High-Resolution ECG Automated Analysis and Diagnosis

Authors: Ayad Dalloo, Sulaf Dalloo

Abstract:

Electrocardiogram (ECG) recordings are prone to complications on analysis by physicians, due to noise and artifacts, creating ambiguity that can lead to diagnostic error. Such drawbacks may be overcome with high-resolution methods such as discrete wavelet analysis and digital signal processing (DSP) techniques. The ECG signal analysis is implemented in three stages: ECG preprocessing, feature extraction, and classification, with the aim of realizing high-resolution ECG diagnosis and improved detection of abnormal heart conditions. The preprocessing stage involves removing spurious artifacts (noise) due to factors such as muscle contraction, motion, and respiration. ECG features are extracted by applying DSP and the suggested sloping-method techniques. The measured features represent the peak amplitude values and intervals of the P, Q, R, S, R', and T waves on the ECG, and other features such as ST elevation, QRS width, heart rate, electrical axis, and the QR and QT intervals. The classification is performed using these extracted features and the criteria for cardiovascular diseases. The ECG diagnostic system was successfully applied to 12-lead ECG recordings for 12 cases. The system is provided with information enabling it to diagnose 15 different diseases. The physician's and the computer's diagnoses agree in 90% of cases, with respect to the physician's diagnosis, and the time taken for diagnosis is 2 seconds. All of these operations are programmed in the Matlab environment.
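The heart-rate feature above reduces to detecting R peaks and averaging the R-R intervals. The toy sketch below (in Python rather than the paper's Matlab, on a synthetic trace with an assumed 250 Hz sampling rate) shows only that step; real preprocessing such as baseline removal and wavelet denoising is omitted.

```python
FS = 250  # assumed sampling rate, Hz

def detect_r_peaks(signal, threshold=0.5):
    """Indices of local maxima above threshold: a crude R-peak detector."""
    peaks = []
    for i in range(1, len(signal) - 1):
        if (signal[i] > threshold
                and signal[i] >= signal[i - 1]
                and signal[i] > signal[i + 1]):
            peaks.append(i)
    return peaks

def heart_rate_bpm(peaks, fs=FS):
    """Heart rate from the mean R-R interval; None if too few peaks."""
    if len(peaks) < 2:
        return None
    rr = [(b - a) / fs for a, b in zip(peaks, peaks[1:])]  # intervals in s
    return 60.0 / (sum(rr) / len(rr))

# Synthetic ECG: unit R spikes every 200 samples (0.8 s apart -> 75 bpm)
ecg = [0.0] * 1000
for p in (100, 300, 500, 700, 900):
    ecg[p] = 1.0
```

The same peak indices also anchor the interval features (QRS width, QT, etc.) by searching the neighbouring samples around each R peak.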

Keywords: ECG diagnostic system, QRS detection, ECG baseline removal, cardiovascular diseases

Procedia PDF Downloads 297
5534 Steel Bridge Coating Inspection Using Image Processing with Neural Network Approach

Authors: Ahmed Elbeheri, Tarek Zayed

Abstract:

Steel bridge deterioration has been a problem in North America for many years, mainly attributed to difficult weather conditions. Steel bridges suffer fatigue cracks and corrosion, which necessitate immediate inspection. Visual inspection is the most common technique for steel bridge inspection, but it depends on the inspector's experience, conditions, and work environment. Many non-destructive evaluation (NDE) models have therefore been developed that use non-destructive technologies to be more accurate, reliable, and less dependent on human judgment. Non-destructive techniques such as the eddy current method, the radiographic method (RT), the ultrasonic method (UT), infrared thermography, and laser technology have been used. Here, digital image processing is used for corrosion detection as an alternative to visual inspection. Different models have used grey-level and colored digital images for processing; however, color images proved better, as the color of the rust distinguishes it from different backgrounds. Rust detection is an important process, as rust is the first warning of corrosion and a sign of coating erosion. To decide which steel elements should be repainted, and how urgently, the percentage of rust should be calculated. In this paper, an image processing approach is developed to detect corrosion and its severity. Two models were developed: the first to detect rust and the second to estimate the rust percentage.
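The second model's core computation can be sketched as counting pixels whose RGB values fall in a reddish-brown band and dividing by the total pixel count. The thresholds and the toy "image" below are illustrative assumptions; a real system would calibrate them against labelled photographs.

```python
def is_rust(r, g, b):
    """Crude reddish-brown test: red dominant and above a brightness floor.
    Thresholds are illustrative, not calibrated values."""
    return r > 100 and r > g + 30 and r > b + 30

def rust_percentage(pixels):
    """Percentage of pixels classified as rust in a flat list of (r, g, b)."""
    rusty = sum(1 for p in pixels if is_rust(*p))
    return 100.0 * rusty / len(pixels)

# Toy "image": 3 rust-coloured pixels out of 10 (greyish steel otherwise)
image = [(150, 80, 60)] * 3 + [(120, 120, 125)] * 7
severity = rust_percentage(image)
```

Thresholding severity (e.g. repaint above some percentage) is then what turns the pixel count into a maintenance decision.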

Keywords: steel bridge, bridge inspection, steel corrosion, image processing

Procedia PDF Downloads 306
5533 Biophysical Analysis of the Interaction of Polymeric Nanoparticles with Biomimetic Models of the Lung Surfactant

Authors: Weiam Daear, Patrick Lai, Elmar Prenner

Abstract:

The human body offers many avenues that could be used for drug delivery. The pulmonary route, in which drugs are delivered through the lungs, presents many advantages that have sparked interest in the field. These advantages include: 1) direct access to the lungs and the large surface area they provide, and 2) close proximity to the blood circulation. The air-blood barrier of the alveoli is about 500 nm thick. It consists of cells and a monolayer of lipids and a few proteins called the lung surfactant. This monolayer consists of ~90% lipids and ~10% proteins, produced by the alveolar epithelial cells. The two major lipid classes comprise phosphatidylcholine (PC) and phosphatidylglycerol (PG) of various saturations and chain lengths, representing 80% of the total lipid component. The major role of the lung surfactant monolayer is to reduce the surface tension experienced during breathing cycles in order to prevent lung collapse. In terms of the pulmonary drug delivery route, drugs pass through various parts of the respiratory system before reaching the alveoli. It is at this location that the lung surfactant functions as the air-blood barrier for drugs. As the field of nanomedicine advances, the use of nanoparticles (NPs) as drug delivery vehicles is becoming very important, owing to the advantages NPs provide through their large surface area and potential for specific targeting. Studying the interaction of NPs with the lung surfactant, and whether they affect its stability, therefore becomes essential. The aim of this research is to develop a biomimetic model of the human lung surfactant, followed by a biophysical analysis of its interaction with polymeric NPs. This biomimetic model will function as a fast initial mode of testing whether NPs affect the stability of the human lung surfactant. The model developed thus far is an 8-component lipid system that contains the major PC and PG lipids.
Recently, custom-made 16:0/16:1 PC and PG lipids were added to the model system. In the human lung surfactant, these lipids constitute 16% of the total lipid component. To the authors' knowledge, there is little monolayer data on the biophysical behaviour of the 16:0/16:1 lipids; therefore, further analysis is discussed here. Biophysical techniques such as the Langmuir trough are used for stability measurements, monitoring changes in a monolayer's surface pressure upon NP interaction. Furthermore, Brewster angle microscopy (BAM) is employed to visualize changes in the lateral domain organization. The results show preferential interactions of NPs with different lipid groups that also depend on monolayer fluidity. Furthermore, the results show that the film's stability upon compression is unaffected, but there are significant changes in the lateral domain organization of the lung surfactant upon NP addition. This research is significant in the field of pulmonary drug delivery. It has been shown that NPs within a certain size range are safe for the pulmonary route, but little is known about the mode of interaction of these polymeric NPs. Moreover, this work will provide additional information about the nanotoxicology of the NPs tested.

Keywords: Brewster angle microscopy, lipids, lung surfactant, nanoparticles

Procedia PDF Downloads 180
5532 The Misuse of Free Cash and Earnings Management: An Analysis of the Extent to Which Board Tenure Mitigates Earnings Management

Authors: Michael McCann

Abstract:

Managerial theories propose that, in joint stock companies, executives may be tempted to waste excess free cash on unprofitable projects in order to keep control of resources. To conceal their projects' poor performance, they may engage in earnings management. On the one hand, managers may manipulate earnings upwards in order to post 'good' performances and safeguard their position. On the other hand, since managers' pursuit of unrewarding investments is likely to lead to low long-term profitability, managers will use negative accruals to reduce the current year's earnings, smoothing earnings over time in order to conceal the negative effects. Agency models argue that boards of directors are delegated by shareholders to ensure that companies are governed properly; part of that responsibility is ensuring the reliability of financial information. Analyses of the impact of board characteristics, particularly board independence, on the misuse of free cash flow and earnings management find conflicting evidence. However, existing characterizations of board independence do not account for such directors gaining firm-specific knowledge over time, which influences their monitoring ability. Further, there is little analysis of the influence of the relative experience of independent directors and executives on decisions surrounding the use of free cash. This paper contributes to the literature on the heterogeneous characteristics of boards by investigating the influence of independent director tenure on earnings management, along with the relative tenures of independent directors and chief executives. A balanced panel dataset comprising 51 companies across 11 annual periods from 2005 to 2015 is used for the analysis. In each annual period, firms were classified as conducting earnings management if their discretionary accruals fell in the bottom quartile (downwards) or top quartile (upwards) of the distributed values for the sample.
Logistic regressions were conducted to determine the marginal impact of independent board tenure and a number of control variables on the probability of conducting earnings management. The findings indicate that both absolute and relative measures of board independence and experience do not have a significant impact on the likelihood of earnings management. It is the level of free cash flow that is the major influence on the probability of earnings management: higher free cash flow increases the probability significantly. The research also investigates whether board monitoring of earnings management is contingent on the level of free cash flow. The results suggest, however, that board monitoring is not amplified when free cash flow is higher. This suggests that the extent of earnings management in companies is determined by a range of company-, industry-, and situation-specific factors.
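The classification step described above can be sketched directly: compute quartile cut points of discretionary accruals for each annual cross-section, then flag firm-years strictly below Q1 as downward management and strictly above Q3 as upward. The firm names, accrual values, and the crude quartile rule below are invented for illustration.

```python
def quartiles(values):
    """Crude Q1/Q3 cut points from order statistics (illustrative only;
    a study would use a proper quantile estimator)."""
    s = sorted(values)
    n = len(s)
    return s[n // 4], s[(3 * n) // 4]

def classify_firms(accruals):
    """Map firm -> 'downward' / 'upward' / 'none' by quartile of
    discretionary accruals within the cross-section."""
    q1, q3 = quartiles(list(accruals.values()))
    return {
        firm: ("downward" if a < q1 else "upward" if a > q3 else "none")
        for firm, a in accruals.items()
    }

# Hypothetical one-year cross-section of discretionary accruals
accruals = {"A": -0.08, "B": -0.01, "C": 0.00, "D": 0.02, "E": 0.09}
flags = classify_firms(accruals)
```

The resulting binary flags (downward/upward versus none) are what enter the logistic regressions as the dependent variable.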

Keywords: corporate governance, boards of directors, agency theory, earnings management

Procedia PDF Downloads 233