Search results for: prediction model accuracy
9173 Thermal Image Segmentation Method for Stratification of Freezing Temperatures
Authors: Azam Fazelpour, Saeed R. Dehghani, Vlastimil Masek, Yuri S. Muzychka
Abstract:
The study uses an image analysis technique employing thermal imaging to measure the percentage of areas with various temperatures on a freezing surface. An image segmentation method using threshold values is applied to a sequence of images recording the freezing process. The phenomenon is transient, and temperatures vary rapidly to reach the freezing point and complete the freezing process. Freezing salt water is subject to salt rejection, which makes the freezing point dynamic and dependent on the salinity at the phase interface. For a specific area of freezing, nucleation starts from one side and ends at the other, which causes a dynamic and transient temperature in that area. Thermal cameras are able to reveal differences in temperature owing to their sensitivity to infrared radiance. Using an experimental setup, a video is recorded by a thermal camera to monitor radiance and temperatures during the freezing process. Image processing techniques are applied to all frames to detect and classify temperatures on the surface. An image segmentation method is used to find contours of equal temperature on the icing surface. Each segment is obtained using the temperature range appearing in the image and the corresponding pixel values. Using the contours extracted from the image and the camera parameters, stratified areas with different temperatures are calculated. To observe temperature contours on the icing surface using the thermal camera, the salt water sample is dropped on a cold surface at a temperature of -20°C. A thermal video is recorded for 2 minutes to observe the temperature field. Examining the results obtained by the method against the experimental observations verifies the accuracy and applicability of the method.
Keywords: ice contour boundary, image processing, image segmentation, salt ice, thermal image
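As a hedged illustration of the threshold-segmentation step described here (the band limits, grid size, and function names are assumptions for the sketch, not values from the paper), the area percentage of each temperature stratum in a radiometric frame could be computed along these lines:

```python
import numpy as np

def stratify_frame(temp_frame, bands):
    """Segment a radiometric frame (2-D array of temperatures, deg C)
    into temperature bands and return the area percentage of each band."""
    total = temp_frame.size
    areas = {}
    for lo, hi in bands:
        mask = (temp_frame >= lo) & (temp_frame < hi)  # threshold segmentation
        areas[(lo, hi)] = 100.0 * mask.sum() / total
    return areas

# Toy frame: surface cooling from +5 deg C toward the -20 deg C plate
frame = np.random.uniform(-20, 5, size=(240, 320))
bands = [(-20, -10), (-10, -2), (-2, 5)]  # hypothetical strata
print(stratify_frame(frame, bands))
```

Applying the same banding to every frame of the recorded video would give the transient evolution of each stratified area.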
Procedia PDF Downloads 324

9172 Modeling of Thermo Acoustic Emission Memory Effect in Rocks of Varying Textures
Authors: Vladimir Vinnikov
Abstract:
The paper proposes a model of an inhomogeneous rock mass with an initially random distribution of microcracks on mineral grain boundaries. It describes the behavior of cracks in a medium under the effect of a thermal field, with the medium heated instantaneously to a predetermined temperature. Crack growth occurs according to the concepts of fracture mechanics, provided that the stress intensity factor K exceeds the critical value Kc. The modeling of thermally induced acoustic emission memory effects is based on the assumption that every event of crack nucleation or crack growth caused by heating is accompanied by a single acoustic emission event. Parameters of the thermally induced acoustic emission memory effect produced by cyclic heating and cooling (with the temperature amplitude increasing from cycle to cycle) were calculated for several rock texture types (massive, banded, and disseminated). The study substantiates the adaptation of the proposed model to humidity interference with the thermally induced acoustic emission memory effect. The influence of humidity on the thermally induced acoustic emission memory effect in quasi-homogeneous and banded rocks is estimated. It is shown that such modeling allows the structure and texture of rocks to be taken into account and the influence of interference factors on the distinctness of the thermally induced acoustic emission memory effect to be estimated. The numerical modeling can be used to obtain information about past thermal impacts on rocks and to determine the degree of rock disturbance by means of non-destructive testing.
Keywords: crack growth, cyclic heating and cooling, rock texture, thermo acoustic emission memory effect
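A minimal sketch of the event-counting rule described above (one acoustic emission event per crack event once K reaches Kc) under cycles of increasing amplitude; the thermal-stress law K = beta*T*sqrt(pi*a), the material constants, and the flaw distribution are all illustrative assumptions, not the paper's values:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical flaw population on grain boundaries: crack i emits one AE
# event when temperature first exceeds its threshold T_i, obtained by
# setting K = beta * T * sqrt(pi * a_i) equal to the critical value Kc.
a = rng.uniform(1e-5, 1e-4, size=5000)       # initial half-lengths, m
Kc, beta = 1.0e6, 1.0e6                      # assumed material constants
T_thresh = Kc / (beta * np.sqrt(np.pi * a))  # per-crack failure temperature, C

T_max_seen = 0.0                             # the "memory" of past heating
for T_peak in (60, 90, 90, 120):             # amplitude grows cycle to cycle
    events = int(((T_thresh > T_max_seen) & (T_thresh <= T_peak)).sum())
    T_max_seen = max(T_max_seen, T_peak)
    print(f"cycle to {T_peak} C -> {events} AE events")
```

The repeated cycle to 90 °C produces no events, which is the memory effect: emission resumes only once the previous maximum temperature is exceeded.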
Procedia PDF Downloads 274

9171 Aerosol Radiative Forcing Over Indian Subcontinent for 2000-2021 Using Satellite Observations
Authors: Shreya Srivastava, Sushovan Ghosh, Sagnik Dey
Abstract:
Aerosols directly affect Earth's radiation budget by scattering and absorbing incoming solar radiation and outgoing terrestrial radiation. While the uncertainty in aerosol radiative forcing (ARF) has decreased over the years, it is still higher than that of greenhouse gas forcing, particularly in the South Asian region, due to high heterogeneity in aerosol chemical properties. Understanding the spatio-temporal heterogeneity of aerosol composition is critical for improving climate prediction. Studies using satellite data, in-situ and aircraft measurements, and models have investigated the spatio-temporal variability of aerosol characteristics. In this study, we have taken aerosol data from the Multi-angle Imaging Spectro-Radiometer (MISR) level-2 version 23 aerosol product retrieved at 4.4 km, and radiation data from the Clouds and the Earth's Radiant Energy System (CERES, spatial resolution = 1° × 1°), for 21 years (2000-2021) over the Indian subcontinent. The MISR aerosol product includes size- and shape-segregated aerosol optical depth (AOD), Angstrom exponent (AE), and single scattering albedo (SSA). Additionally, 74 aerosol mixtures are included in the version 23 data, which is used for aerosol speciation. We have seasonally mapped aerosol optical and microphysical properties from MISR for India at quarter-degree resolution. Results show strong spatio-temporal variability, with consistently higher AOD values over the Indo-Gangetic Plain (IGP). The contribution of small-size particles is higher throughout the year, especially during winter months. SSA is found to be overestimated where absorbing particles are present. The climatological map of shortwave (SW) ARF at the top of the atmosphere (TOA) shows strong cooling except in a few places (values ranging from +2.5 to -22.5). Cooling due to aerosols is higher in the absence of clouds. Higher negative values of ARF are found over the IGP region, given the high aerosol concentration above the region. Surface ARF values are everywhere negative for our study domain, with higher values in clear conditions. The results strongly correlate with AOD from MISR and ARF from CERES.
Keywords: aerosol radiative forcing (ARF), aerosol composition, single scattering albedo (SSA), CERES
Procedia PDF Downloads 57

9170 Finite Element Analysis of the Lumbar Spine after Unilateral and Bilateral Laminotomies and Laminectomy
Authors: Chih-Hsien Chen, Yi-Hung Ho, Chih-Wei Wang, Chih-Wei Chang, Yen-Nien Chen, Chih-Han Chang, Chun-Ting Li
Abstract:
Laminotomy is a spinal decompression surgery compatible with a minimally invasive approach. However, unilateral laminotomy for bilateral decompression leads to more perioperative complications than bilateral laminotomy. Although unilateral laminotomy removes the least bone tissue among the spinal decompression surgeries, the difference in spinal stability between unilateral and bilateral laminotomy and laminectomy is rarely investigated. This study aims to compare the biomechanical effects of unilateral and bilateral laminotomy and laminectomy on the lumbar spine by finite element (FE) simulation. A three-dimensional FE model of the lumbar spine (L1-L5), comprising the vertebral bodies, discs, and ligaments, as well as the sacrum, was constructed. Three different surgical methods, namely unilateral laminotomy, bilateral laminotomy, and laminectomy, at L3-L4 and L4-L5 were considered. Part of the pedicle and the entire ligamentum flavum were removed to simulate bilateral decompression in laminotomy. The entire lamina and the spinous processes from the lower L3 to the upper L5 were detached in the laminectomy model. Then, four kinds of loading, namely flexion, extension, lateral bending, and rotation, were applied to the lumbar spine under the various decompression conditions. The results indicated that bilateral and unilateral laminotomy both increased the range of motion (ROM) compared with the intact lumbar spine, while laminectomy increased ROM more than either laminotomy. The difference in ROM between bilateral and unilateral laminotomy was very minor. Furthermore, bilateral laminotomy demonstrated posterior element stress similar to that of unilateral laminotomy. Unilateral and bilateral laminotomy are therefore equally recommended for bilateral decompression of the lumbar spine with a minimally invasive technique, because the additional bone removed in bilateral laminotomy has only a limited effect on lumbar stability. Laminectomy should remain the last option for lumbar decompression.
Keywords: minimally invasive technique, lumbar decompression, laminotomy, laminectomy, finite element method
Procedia PDF Downloads 191

9169 Factors Affecting the Adoption of Cloud Business Intelligence among Healthcare Sector: A Case Study of Saudi Arabia
Authors: Raed Alsufyani, Hissam Tawfik, Victor Chang, Muthu Ramachandran
Abstract:
This study investigates the factors that influence the decision by players in the healthcare sector to embrace Cloud Business Intelligence technology, with a focus on healthcare organizations in Saudi Arabia. To bring this matter into perspective, the study primarily considers the Technology-Organization-Environment (TOE) framework and the Human-Organization-Technology (HOT) fit model. A survey was designed around the study hypotheses, based on a literature review, and was carried out online. The quantitative data obtained were processed, from descriptive and one-way frequency statistics through to inferential and regression analysis. Data were analysed to establish the factors that influence the decision to adopt Cloud Business Intelligence technology in the healthcare sector. The implication of the identified factors was measured, and all assumptions were tested. 66.70% of participants in healthcare organizations backed the intention to adopt a cloud business intelligence system. 99.4% of these participants considered security concerns and privacy risks to be the most significant factors in the adoption of a cloud Business Intelligence (CBI) system. Regression-based hypothesis testing indicated that usefulness, service quality, relative advantage, IT infrastructure preparedness, organization structure, vendor support, perceived technical competence, government support, and top management support positively and significantly influence the adoption of a CBI system. The paper presents the quantitative phase of an ongoing project, which will build on the lessons learned from this study.
Keywords: cloud computing, business intelligence, HOT-fit model, TOE, healthcare and innovation adoption
Procedia PDF Downloads 174

9168 Envy and Schadenfreude Domains in a Model of Neurodegeneration
Authors: Hernando Santamaría-García, Sandra Báez, Pablo Reyes, José Santamaría-García, Diana Matallana, Adolfo García, Agustín Ibañez
Abstract:
The study of moral emotions (i.e., Schadenfreude and envy) is critical to understanding the ecological complexity of everyday interactions between cognitive, affective, and social cognition processes. Most previous studies in this area have used correlational imaging techniques and framed Schadenfreude and envy as monolithic domains. Here, we profit from a relevant neurodegeneration model to disentangle the brain regions engaged in three dimensions of Schadenfreude and envy: deservingness, morality, and legality. We tested 20 patients with behavioral variant frontotemporal dementia (bvFTD), 24 patients with Alzheimer's disease (AD), as a contrastive neurodegeneration model, and 20 healthy controls on a novel task highlighting each of these dimensions in scenarios eliciting Schadenfreude and envy. Compared with the AD and control groups, bvFTD patients obtained significantly higher scores on all dimensions for both emotions. Interestingly, the legal dimension elicited higher emotional scores than the deservingness and moral dimensions for both envy and Schadenfreude. Furthermore, correlational analyses in bvFTD showed that higher envy and Schadenfreude scores were associated with greater deficits in social cognition, inhibitory control, and behavior. Brain anatomy findings (restricted to bvFTD and controls) confirmed differences in how these groups process each dimension. Schadenfreude was associated with the ventral striatum in all subjects. Also, in bvFTD patients, increased Schadenfreude across dimensions was negatively correlated with the volume of regions supporting social-value rewards, mentalizing, and social cognition (frontal pole, temporal pole, angular gyrus, and precuneus). In all subjects, all dimensions of envy positively correlated with the volume of the anterior cingulate cortex, a region involved in processing unfair social comparisons. By contrast, in bvFTD patients, the intensified experience of envy across all dimensions was negatively correlated with a set of areas subserving social cognition, including the prefrontal cortex, the parahippocampus, and the amygdala. Together, the present results provide the first lesion-based evidence for the multidimensional nature of the emotional experiences of envy and Schadenfreude. Moreover, this is the first demonstration of a selective exacerbation of envy and Schadenfreude in bvFTD patients, probably triggered by atrophy to social cognition networks. Our results offer new insights into the mechanisms subserving complex emotions and moral cognition in neurodegeneration, paving the way for groundbreaking research on their interaction with other cognitive, social, and emotional processes.
Keywords: social cognition, moral emotions, neuroimaging, frontotemporal dementia
Procedia PDF Downloads 299

9167 Intelligent Control of Bioprocesses: A Software Application
Authors: Mihai Caramihai, Dan Vasilescu
Abstract:
The main research objective of the experimental bioprocess analyzed in this paper was to obtain large biomass quantities. The bioprocess is performed in a 100 L Bioengineering bioreactor with 42 L of cultivation medium made of peptone, meat extract, and sodium chloride. The reactor was equipped with pH, temperature, dissolved oxygen, and agitation controllers. The operating parameters were 37 °C, 1.2 atm, 250 rpm, and an air flow rate of 15 L/min. The main objective of this paper is to present a case study demonstrating that intelligent control, which describes the complexity of the biological process in the qualitative and subjective manner perceived by a human operator, is an efficient control strategy for this kind of bioprocess. In order to simulate the bioprocess evolution, an intelligent control structure based on fuzzy logic has been designed. The specific objective is to present a fuzzy control approach based on human experts' rules versus a modeling approach of cell growth based on bioprocess experimental data. Kinetic modeling can represent overall biosystem behavior for only a small number of bioprocesses, while a fuzzy control system (FCS) can handle incomplete and uncertain information about the process, assuring high control performance, and provides an alternative solution to non-linear control, as it is closer to the real world. Due to the high degree of non-linearity and time variance of bioprocesses, the need for such a control mechanism arises. BIOSIM, an originally developed software package, implements such a control structure. The simulation study showed that the fuzzy technique is quite appropriate for this non-linear, time-varying system compared with a classical control method based on an a priori model.
Keywords: intelligent control, fuzzy model, bioprocess optimization
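The abstract describes the FCS only qualitatively; a minimal sketch of how operator rules of this kind can be encoded (the linguistic variables, membership ranges, and output values below are illustrative assumptions, not BIOSIM's actual rule base):

```python
import numpy as np

def tri(x, a, b, c):
    """Triangular membership function peaking at b on the support [a, c]."""
    return np.maximum(0.0, np.minimum((x - a) / (b - a), (c - x) / (c - b)))

def fuzzy_agitation(do_error):
    """Toy Mamdani-style rule base: map dissolved-oxygen error (% sat)
    to an agitation speed correction (rpm) via centroid defuzzification."""
    low  = tri(do_error, -20, -10, 0)    # DO below setpoint -> speed up
    ok   = tri(do_error, -5, 0, 5)       # DO near setpoint  -> hold
    high = tri(do_error, 0, 10, 20)      # DO above setpoint -> slow down
    # Output singletons (assumed): +30, 0, -30 rpm
    num = low * 30.0 + ok * 0.0 + high * (-30.0)
    den = low + ok + high + 1e-9
    return num / den

for e in (-12.0, -3.0, 0.0, 8.0):
    print(f"DO error {e:+5.1f} %sat -> agitation change {fuzzy_agitation(e):+6.1f} rpm")
```

Overlapping memberships make the control surface smooth, which is one reason such rule bases tolerate the incomplete, uncertain process information the abstract mentions.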
Procedia PDF Downloads 328

9166 Studying the Influence of the Intellectual Assets on Strategy Implementation: Case Study, Modiran Ideh Pardaz Company
Authors: Farzam Chakherlouy, Amirmehdi Dokhanchi
Abstract:
Nowadays, organizations have to identify, evaluate, and manage the intangible assets that enable them to meet the requirements for achieving their goals and strategies, and they have to promote and improve these kinds of assets continuously. Implementing developed strategies is essential in today's competitive world, where organizations and companies spend heavily on developing their strategies. In fact, determining the strategies to be implemented does not complete the management process; strategies have no effect on the success and survival of the organization until they are implemented. The objective of this article is to define intellectual capital and its components and to study the impact of intellectual capital on strategy implementation based upon the Bozbura model, with its three dimensions of human capital, relational capital, and structural capital. According to the test results, the correlation between intellectual capital and the three components of strategy implementation (leadership, human resource management, and culture) was not confirmed. According to the results of Friedman's test for intellectual capital, the company's greatest inadequacy is in the field of human capital (with an average of 3.59) and its smallest inadequacy is in the field of relational (customer) capital, with an average of 2.83. Likewise, according to the Friedman test for strategy implementation, the greatest inadequacies relate to organizational culture and corporate control, with averages of 2.60 and 3.45, respectively. In addition, the company demonstrates good performance in the areas of human resource management and financial resource management strategies.
Keywords: Bozbura model, intellectual capital, strategic management, implementation of strategy, Modiran Ideh Pardaz company
Procedia PDF Downloads 425

9165 A Two-Phase Flow Interface Tracking Algorithm Using a Fully Coupled Pressure-Based Finite Volume Method
Authors: Shidvash Vakilipour, Scott Ormiston, Masoud Mohammadi, Rouzbeh Riazi, Kimia Amiri, Sahar Barati
Abstract:
Two-phase and multi-phase flows are common flow types in fluid mechanics engineering. Among the basic and applied problems of these flow types, two-phase parallel flow is one in which two immiscible fluids flow in the vicinity of each other. In this type of flow, fluid properties (e.g., density, viscosity, and temperature) are different on the two sides of the interface between the fluids. The most challenging part of the numerical simulation of two-phase flow is to determine the location of the interface accurately. In the present work, a coupled interface tracking algorithm is developed based on the Arbitrary Lagrangian-Eulerian (ALE) approach using a cell-centered, pressure-based, coupled solver. To validate this algorithm, an analytical solution for fully developed two-phase flow in the presence of gravity is derived, and the results of the numerical simulation of this flow are compared with the analytical solution at various flow conditions. The results of the simulations show good accuracy of the algorithm despite the use of a fairly coarse, uniform grid. Temporal variations of the interface profile toward the steady-state solution show that a greater difference between the fluid properties (especially dynamic viscosity) results in larger traveling waves. Gravity effect studies also show that favorable gravity reduces the thickness of the heavier fluid, while adverse gravity increases it with respect to the zero-gravity condition; however, the magnitude of the variation under favorable gravity is much larger than under adverse gravity.
Keywords: coupled solver, gravitational force, interface tracking, Reynolds number to Froude number, two-phase flow
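The analytical benchmark itself is not reproduced in the abstract. Under the standard assumptions for such a validation case (steady, fully developed, parallel flow of two immiscible layers driven by a constant pressure gradient and gravity), the governing relations take the form below; the orientation convention for the gravity term is an assumption here:

```latex
% Per-layer momentum balance (layer i = 1, 2); gravity sign depends on orientation
\mu_i \frac{d^2 u_i}{dy^2} = \frac{dp}{dx} + \rho_i g, \qquad i = 1, 2,
% coupled by no-slip at the walls and, at the interface y = h,
u_1(h) = u_2(h), \qquad
\mu_1 \left.\frac{du_1}{dy}\right|_{y=h} = \mu_2 \left.\frac{du_2}{dy}\right|_{y=h}.
```

Integrating twice in each layer gives piecewise-parabolic velocity profiles whose four constants follow from these conditions; this closed form is what the simulated steady-state interface location can be checked against.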
Procedia PDF Downloads 318

9164 Finding Optimal Operation Condition in a Biological Nutrient Removal Process with Balancing Effluent Quality, Economic Cost and GHG Emissions
Authors: Seungchul Lee, Minjeong Kim, Iman Janghorban Esfahani, Jeong Tai Kim, ChangKyoo Yoo
Abstract:
It is hard to maintain the effluent quality of wastewater treatment plants (WWTPs) under fixed types of operational control because the influent flow rate and pollutant load change continuously. The aim of this study is the development of a multi-loop multi-objective control (ML-MOC) strategy at plant-wide scope targeting four objectives: 1) maximization of nutrient removal efficiency, 2) minimization of operational cost, 3) maximization of CH4 production in anaerobic digestion (AD) for CH4 reuse as a heat and energy source, and 4) minimization of N2O gas emission to cope with global warming. First, the benchmark simulation model is modified to describe N2O dynamics in the biological process, yielding the benchmark simulation model for greenhouse gases (BSM2G). Then, three types of single-loop proportional-integral (PI) controllers, for DO, NO3, and CH4, are implemented. The optimal set-points of the controllers are found using a multi-objective genetic algorithm (MOGA). Finally, the ML-MOC is implemented and evaluated in BSM2G. Compared with the reference case, the ML-MOC with the optimal set-points showed the best control performance, with improvements of 34%, 5%, and 79% in effluent quality, CH4 productivity, and N2O emission, respectively, together with a 65% decrease in operational cost.
Keywords: benchmark simulation model for greenhouse gases, multi-loop multi-objective controller, multi-objective genetic algorithm, wastewater treatment plant
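A minimal sketch of one such single-loop PI controller (a discrete-time DO loop with an anti-windup clamp); the gains, sampling interval, and actuator limits are illustrative assumptions, not the study's MOGA-tuned values:

```python
class PIController:
    """Discrete-time PI controller with a simple anti-windup clamp."""
    def __init__(self, kp, ki, dt, out_min, out_max):
        self.kp, self.ki, self.dt = kp, ki, dt
        self.out_min, self.out_max = out_min, out_max
        self.integral = 0.0

    def step(self, setpoint, measurement):
        error = setpoint - measurement
        self.integral += error * self.dt
        u = self.kp * error + self.ki * self.integral
        if not (self.out_min <= u <= self.out_max):
            # Clamp output and undo the integration step (anti-windup)
            self.integral -= error * self.dt
            u = min(max(u, self.out_min), self.out_max)
        return u

# Assumed DO loop: setpoint 2.0 mg/L, output = aeration KLa (1/d),
# 15-minute sampling expressed in days
do_loop = PIController(kp=25.0, ki=5.0, dt=1 / 96, out_min=0.0, out_max=240.0)
print(do_loop.step(setpoint=2.0, measurement=1.4))
```

In the ML-MOC scheme, the MOGA would then search over the three loops' set-points to trade off the four objectives.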
Procedia PDF Downloads 507

9163 Examination of Forged Signatures Printed by Means of Fabrication in Terms of Their Relation to the Perpetrator
Authors: Salim Yaren, Nergis Canturk
Abstract:
Signatures are signs handwritten by a person in order to confirm items such as information, amounts, meaning, time, and undertakings that bear on a document. By signing a document, the signer indicates that it and the accuracy of the information on it are accepted and approved. Forged signatures are produced by a forger without knowing or seeing the original signature of the person being imitated, and as a result of the forger's effort to hide the typical characteristics of his/her own signature. Forged signatures are often signed starting with the initials of the first and last name of the person whose signature is being faked. The similarities in the signatures are completely random. Within the scope of the study, signatures were collected from 100 people: both their original signatures and forged signatures produced in the name of 5 imaginary people. These signatures were compared against 14 signature analysis criteria by 2 signature analysis experts other than the researcher. Expert 1, with 9 years of experience in the field, evaluated the signatures of 39 (39%) people correctly and those of 25 (25%) people incorrectly, and made no evaluation for the signatures of 36 (36%) people. Expert 2, with 16 years of experience in the field, evaluated the signatures of 49 (49%) people correctly and those of 28 (28%) people incorrectly, and made no evaluation for the signatures of 23 (23%) people. The forged signatures of 24 (24%) people were matched correctly by both experts, those of 8 (8%) people were matched incorrectly, and those of 12 (12%) people could not be decided by either expert. Signature analysis is a subjective topic, so analyses and comparisons take shape according to the education, knowledge, and experience of the expert. Consequently, given that 39% success was achieved by the expert with 9 years of professional experience and 49% success by the expert with 16 years, the success rate appears directly proportional to the knowledge and experience of the expert.
Keywords: forensic signature, forensic signature analysis, signature analysis criteria, forged signature
Procedia PDF Downloads 127

9162 Analytical Description of Disordered Structures in Continuum Models of Pattern Formation
Authors: Gyula I. Tóth, Shaho Abdalla
Abstract:
Even though numerical simulations indeed have a significant precursory/supportive role in exploring the disordered phase displaying no long-range order in pattern formation models, studying the stability properties of this phase and determining the order of the ordered-disordered phase transition in these models necessitate an analytical description of the disordered phase. First, we will present the results of a comprehensive statistical analysis of a large number (1,000-10,000) of numerical simulations in the Swift-Hohenberg model, where the bulk disordered (or amorphous) phase is stable. We will show that the average free energy density (over configurations) converges, while the variance of the energy density vanishes with increasing system size in numerical simulations, which suggests that the disordered phase is a thermodynamic phase (i.e., its properties are independent of the configuration in the macroscopic limit). Furthermore, the structural analysis of this phase in Fourier space suggests that the phase can be modeled by a colored isotropic Gaussian noise, where any instant of the noise describes a possible configuration. Based on these results, we developed a general mathematical framework for finding a pool of solutions to partial differential equations in the sense of a continuous probability measure, which we will present briefly. Applying the general idea to the Swift-Hohenberg model, we show that the amorphous phase can be found and its properties can be determined analytically. As the general mathematical framework is not restricted to continuum theories, we hope that the proposed methodology will open a new chapter in studying disordered phases.
Keywords: fundamental theory, mathematical physics, continuum models, analytical description
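For readers who want to reproduce the kind of simulation ensemble described here, a minimal semi-implicit pseudo-spectral step for the Swift-Hohenberg equation du/dt = r u - (1 + Laplacian)^2 u - u^3 might look as follows; the grid size, the value of r, the time step, and the step count are illustrative assumptions rather than the paper's settings:

```python
import numpy as np

N, L, r, dt = 128, 32 * np.pi, 0.2, 0.5
k = 2 * np.pi * np.fft.fftfreq(N, d=L / N)       # angular wavenumbers
kx, ky = np.meshgrid(k, k)
lin = r - (1 - kx**2 - ky**2) ** 2               # linear operator, Fourier space

rng = np.random.default_rng(1)
u = 0.1 * rng.standard_normal((N, N))            # random initial condition

for _ in range(2000):                            # semi-implicit Euler step:
    u_hat = np.fft.fft2(u)                       # linear term implicit,
    nl_hat = np.fft.fft2(u**3)                   # cubic term explicit
    u_hat = (u_hat - dt * nl_hat) / (1 - dt * lin)
    u = np.real(np.fft.ifft2(u_hat))

# Statistics of a free-energy-like density across many such runs are the
# kind of configuration-averaged quantities the study analyzes.
print("mean u:", u.mean(), " var u:", u.var())
```

Repeating this over 1,000-10,000 random initial conditions and several system sizes gives the ensemble over which the convergence of the average energy density and the vanishing of its variance can be tested.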
Procedia PDF Downloads 140

9161 Effect of the Vertical Pressure on the Electrical Behaviour of the Micro-Copper Polyurethane Composite Films
Authors: Saeid Mehvari, Yolanda Sanchez-Vicente, Sergio González Sánchez, Khalid Lafdi
Abstract:
Materials with a combination of transparency, electrical conductivity, and flexibility are required in the growing electronics sector. In this research, electrically conductive and flexible films have been prepared, consisting of micro-copper particles dispersed in a polyurethane (PU) matrix. Two sets of samples were made, using the spin-coating technique (sample thickness below 30 μm) and material casting (sample thickness below 100 μm). Copper concentrations in the PU matrix varied from 0.5 to 20% by volume. The dispersion of the micro-copper particles in the PU matrix was characterised using optical and scanning electron microscopy. The electrical conductivity measurements were carried out using a home-made multimeter set-up under pressures from 1 to 20 kPa, in the through-thickness and in-plane directions. The samples made by casting were not conductive, whereas the samples made by spin coating showed through-thickness conductivity when under pressure. The results showed that spin-coated films with copper concentrations above 2 vol.% displayed a significant increase in conductivity, marking the percolation threshold. The maximum conductivity of 7.2 × 10⁻¹ S∙m⁻¹ was reached at a filler concentration of 20 vol.% under 20 kPa. A semi-empirical model with adjustable coefficients was used to fit and predict the electrical behaviour of the composites. For the first time, the finite element method based on the representative volume element (FE-RVE) was successfully used to predict their electrical behaviour under applied pressures.
Keywords: electrical conductivity, micro copper, numerical simulation, percolation threshold, polyurethane, RVE model
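The abstract does not state the form of the semi-empirical model; a common choice for such data is the classical percolation power law sigma = sigma0 * (phi - phi_c)^t above the threshold. A hedged fitting sketch, with made-up data points and phi_c taken as the 2 vol.% threshold quoted in the text:

```python
import numpy as np
from scipy.optimize import curve_fit

# Classical percolation power law fitted above the threshold phi_c.
def percolation(phi, sigma0, t, phi_c=0.02):     # phi_c ~ 2 vol% from the study
    return sigma0 * np.maximum(phi - phi_c, 1e-12) ** t

phi = np.array([0.03, 0.05, 0.10, 0.15, 0.20])       # filler volume fraction
sigma = np.array([1e-4, 2e-3, 4e-2, 2e-1, 7.2e-1])   # S/m (illustrative values)

(sigma0, t), _ = curve_fit(percolation, phi, sigma, p0=(100.0, 3.0))
print(f"sigma0 = {sigma0:.1f} S/m, critical exponent t = {t:.2f}")
```

The fitted exponent t is the "adjustable coefficient" of interest: values near 2 are typical of 3-D percolating networks, and deviations hint at pressure-dependent contact formation.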
Procedia PDF Downloads 202

9160 Electronic Payment Recording with Payment History Retrieval Module: A System Software
Authors: Adrian Forca, Simeon Cainday III
Abstract:
The Electronic Payment Recording with Payment History Retrieval Module was developed specifically for the College of Science and Technology. This system software replaces the department's manual process of recording payments, shifting from a slow and time-consuming procedure to a quick yet reliable and accurate way of recording payments, since it immediately generates receipts for every transaction. As an added feature, generation of recorded-payment reports is integrated, replacing manual reporting with an easier, consolidated report. In addition, all recorded student payments can be retrieved immediately, making the system transparent and reliable payment recording software. Viewed as a whole, the system shifts from a manual process to an organized software technology, because the information is stored in a logically correct and normalized database. Further, the software will be developed using a modern programming language and will implement strict programming methods to validate all users accessing the system and to evaluate all data passed into the system and information retrieved, to ensure data accuracy and reliability. In addition, the system will identify each user and limit access privileges, establishing boundaries on the specific access allowed for storing, modifying, and updating information, thereby securing it against unauthorized data manipulation. As a result, the system software will eliminate the manual procedure and replace it with modern information technology, making the whole payment recording process fast, secure, accurate, and reliable.
Keywords: collection, information system, manual procedure, payment
Procedia PDF Downloads 172

9159 Architectural Adaptation for Road Humps Detection in Adverse Light Scenario
Authors: Padmini S. Navalgund, Manasi Naik, Ujwala Patil
Abstract:
A road hump is a semi-cylindrical elevation made across the road at specific locations. A vehicle needs to maneuver over the hump at reduced speed to avoid damage and pass over it safely. Road humps, if identified in advance, help to maintain the safety and stability of vehicles, especially in adverse visibility conditions such as night scenarios. We propose a deep learning architectural adaptation, implementing the Mish activation function and developing a new classification loss function called "Effective Focal Loss", for detecting Indian road humps in adverse light scenarios. We captured images comprising marked and unmarked road humps from two different types of cameras across South India to build a heterogeneous dataset, which enabled the algorithm to train on varied data and improve detection accuracy. The images were pre-processed and annotated for two classes, viz. marked hump and unmarked hump, and the resulting dataset was used to train a single-stage object detection algorithm. We also used an algorithm to synthetically generate reduced-visibility road hump scenarios. We observed that the proposed framework effectively detected marked and unmarked humps in images in both clear and adverse light environments. This architectural adaptation enables early detection of Indian road humps in reduced visibility conditions, thereby helping autonomous driving technology handle a wider range of real-world scenarios.
Keywords: Indian road hump, reduced visibility condition, low light condition, adverse light condition, marked hump, unmarked hump, YOLOv9
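The exact formulation of the "Effective Focal Loss" is not given in the abstract; as a hedged sketch of the two named building blocks, here are the Mish activation and a standard binary focal loss (used as a stand-in for the proposed variant) in PyTorch:

```python
import torch
import torch.nn.functional as F

def mish(x):
    """Mish activation: x * tanh(softplus(x))."""
    return x * torch.tanh(F.softplus(x))

def focal_loss(logits, targets, alpha=0.25, gamma=2.0):
    """Standard binary focal loss; the paper's 'Effective Focal Loss'
    variant is not specified in the abstract, so this is a stand-in."""
    bce = F.binary_cross_entropy_with_logits(logits, targets, reduction="none")
    p_t = torch.exp(-bce)                       # probability of the true class
    return (alpha * (1 - p_t) ** gamma * bce).mean()

logits = torch.tensor([2.1, -0.7, 0.3])         # hump-class scores
targets = torch.tensor([1.0, 0.0, 1.0])         # 1 = hump present
print(focal_loss(logits, targets).item())
print(mish(logits))
```

Focal-style losses down-weight easy examples via the (1 - p_t)^gamma factor, which suits a task where most low-light frames contain no hump.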
Procedia PDF Downloads 32

9158 Time-Dependent Reliability Analysis of Corrosion Affected Cast Iron Pipes with Mixed Mode Fracture
Authors: Chun-Qing Li, Guoyang Fu, Wei Yang
Abstract:
A significant portion of current water networks is made of cast iron pipes. Due to aging and deterioration, with corrosion being the most predominant mechanism, the failure rate of cast iron pipes is very high. Although considerable research has been carried out in the past few decades, most of it concerns the effect of corrosion on the structural capacity of pipes using strength theory as the failure criterion. This paper presents a reliability-based methodology for the assessment of cracking failures in corrosion-affected cast iron pipes. A nonlinear limit state function taking into account all three fracture modes is proposed for brittle metal pipes with mixed-mode fracture. A stochastic model of the load effect is developed, and a time-dependent reliability method is employed to quantify the probability of failure and predict the remaining service life. A case study is carried out using the proposed methodology, followed by a sensitivity analysis to investigate the effects of the random variables on the probability of failure. It has been found that the larger the inclination angle or the Mode I fracture toughness, the smaller the probability of pipe failure. It has also been found that the multiplying and exponential coefficients k and n in the power-law corrosion model and the internal pressure have the most influence on the probability of failure for cast iron pipes. The methodology presented in this paper can assist pipe engineers and asset managers in developing a risk-informed and cost-effective strategy for better management of corrosion-affected pipelines.
Keywords: corrosion, inclined surface cracks, pressurized cast iron pipes, stress intensity
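A minimal Monte Carlo sketch of time-dependent reliability with the power-law corrosion model d(t) = k·t^n mentioned above; the distributions, parameter values, and the simplified wall-loss limit state (standing in for the paper's mixed-mode fracture criterion) are all illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(42)
n_samples = 100_000

# Power-law corrosion model d(t) = k * t^n (all distributions assumed)
k = rng.lognormal(mean=np.log(0.5), sigma=0.3, size=n_samples)  # mm / yr^n
n = rng.normal(0.6, 0.05, size=n_samples)
wall = rng.normal(10.0, 0.5, size=n_samples)                    # wall, mm

# Toy limit state: "failure" when corrosion consumes 70% of the wall.
for t in (10, 25, 50):
    depth = k * t ** n
    pf = np.mean(depth > 0.7 * wall)
    print(f"t = {t:2d} yr  Pf ~ {pf:.4f}")
```

The growth of Pf with t is the time-dependent reliability curve; repeating the run while perturbing one variable at a time reproduces the kind of sensitivity ranking (k, n, internal pressure) the paper reports.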
Procedia PDF Downloads 326

9157 The Use of Polar Substituent Groups for Promoting Azo Disperse Dye Solubility and Reactivity for More Economic and Environmental Benign Applications: A Computational Study
Authors: Olaide O. Wahab, Lukman O. Olasunkanmi, Krishna K. Govender, Penny P. Govender
Abstract:
The economic and environmental challenges associated with azo disperse dye applications are due to poor aqueous solubility and low degradation tendency, the latter stemming from low chemical reactivity. The poor aqueous solubility of this group of dyes necessitates the use of dispersing agents, which increase operational costs and also release toxic chemical components into the environment, while their low degradation tendency is due to the high stability of the azo functional group (-N=N-) in their chemical structures. To address these problems, this study theoretically investigated the effects of some polar substituents on the aqueous solubility and reactivity of disperse yellow (DY) 119 dye, with a view to theoretically developing new azo disperse dyes with improved water solubility and higher degradation tendency in the environment, using the DMol³ computational code. All calculations were carried out at the Becke and Perdew version of the Vosko-Wilk-Nusair (VWN-BP) level of density functional theory, in conjunction with the double numerical basis set containing polarization functions (DNP). The aqueous solubility determination was achieved with the conductor-like screening model for realistic solvation (COSMO-RS) in conjunction with a known empirical solubility model, while the reactivity was predicted using frontier molecular orbital calculations. Most of the new derivatives studied showed evidence of higher aqueous solubility and degradation tendency compared to the parent dye. We conclude that these derivatives are promising alternative dyes for more economical and environmentally benign dyeing practice and therefore recommend them for synthesis.
Keywords: aqueous solubility, azo disperse dye, degradation, disperse yellow 119, DMol³, reactivity
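The abstract does not spell out which frontier-orbital descriptors were used; the quantities conventionally derived from the HOMO and LUMO energies in such reactivity screenings (listed here as the standard conceptual-DFT definitions, not the paper's confirmed choices) are:

```latex
\Delta E_{\text{gap}} = E_{\text{LUMO}} - E_{\text{HOMO}}, \qquad
\mu = \tfrac{1}{2}\,\bigl(E_{\text{HOMO}} + E_{\text{LUMO}}\bigr), \qquad
\eta = \tfrac{1}{2}\,\bigl(E_{\text{LUMO}} - E_{\text{HOMO}}\bigr), \qquad
\omega = \frac{\mu^{2}}{2\eta}
```

A smaller gap (lower hardness eta) signals higher chemical reactivity, which in this context translates into a higher expected degradation tendency of the substituted dye.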
Procedia PDF Downloads 208

9156 Modelling of Reactive Methodologies in Auto-Scaling Time-Sensitive Services With a MAPE-K Architecture
Authors: Óscar Muñoz Garrigós, José Manuel Bernabeu Aubán
Abstract:
Time-sensitive services are the base of the cloud services industry, and keeping service saturation low is essential for controlling response time. All auto-scalable services make use of reactive auto-scaling; however, reactive auto-scaling has few in-depth studies. This presentation shows a model for reactive auto-scaling methodologies with a MAPE-K architecture. Queueing theory can compute different properties of static services but lacks some parameters related to the transition between models; our model uses queueing theory parameters to relate these transitions. It associates the MAPE-K-related times, the sampling frequency, the cooldown period, the number of requests that an instance can handle per unit of time, the number of incoming requests at a time instant, and a function that describes the acceleration in the service's ability to handle more requests. This model is later used as a solution to horizontally auto-scale time-sensitive services composed of microservices, re-evaluating the model's parameters periodically to allocate resources. The solution requires limiting the acceleration of growth in the number of incoming requests to keep response time constrained; business benefits determine such limits. The solution can add a dynamic number of instances and remains valid under different system sizes. The study includes performance recommendations to improve results according to the incoming load shape and business benefits. The exposed methodology is tested in a simulation. The simulator contains a load generator and a service composed of two microservices, where the frontend microservice depends on a backend microservice with a 1:1 request relation ratio. A common request takes 2.3 seconds to be computed by the service and is discarded if it takes more than 7 seconds. Both microservices contain a load balancer that assigns requests to the least loaded instance and preemptively discards requests if they cannot finish in time, to prevent resource saturation. When load decreases, instances with lower load are kept in a backlog where no more requests are assigned. If the load grows and an instance in the backlog is required, it returns to the running state; if it finishes computing all its requests and is no longer required, it is permanently deallocated. A few load patterns are required to represent the worst-case scenarios for reactive systems; the following scenarios test response times, resource consumption, and business costs. The first is a burst-load scenario: all methodologies will discard requests if the burst is rapid enough, so this scenario focuses on the number of discarded requests and the variance of the response time. The second scenario contains sudden load drops followed by bursts, to observe how the methodology behaves when releasing resources that are later required. The third scenario contains diverse growth accelerations in the number of incoming requests, to observe how approaches that add different numbers of instances can handle the load at lower business cost. The exposed methodology is compared against a multiple-threshold CPU methodology allocating/deallocating 10 or 20 instances, outperforming the competitor in all studied metrics.
Keywords: reactive auto-scaling, auto-scaling, microservices, cloud computing
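As a hedged sketch of the reactive decision step (the paper's model relates many more parameters, such as cooldown, sampling frequency, and acceleration limits, so the rule, target utilization, and function names below are assumptions), a queueing-parameter-based scaling rule inside a MAPE-K-style loop could look like:

```python
import math

def desired_instances(arrival_rate, service_rate, target_util=0.7):
    """Provision enough instances so per-instance utilization stays below
    target_util. arrival_rate: incoming requests/s; service_rate: requests/s
    one instance can handle."""
    return max(1, math.ceil(arrival_rate / (service_rate * target_util)))

def autoscale_step(current, arrival_rate, service_rate, cooldown_ok):
    """MAPE-K skeleton: Monitor (inputs) -> Analyze/Plan -> Execute."""
    planned = desired_instances(arrival_rate, service_rate)
    if not cooldown_ok:            # respect the cooldown period
        return current
    return planned

# service_rate = 1 / 2.3 matches the 2.3 s per-request compute time
# used in the paper's simulator.
print(autoscale_step(current=3, arrival_rate=40.0,
                     service_rate=1 / 2.3, cooldown_ok=True))
```

The headroom implied by target_util is what absorbs load growth between two evaluations of the loop; tightening it lowers cost but raises the risk of discarded requests during bursts.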
Procedia PDF Downloads 97

9155 Synthesis and Study of Properties of Polyaniline/Nickel Sulphide Nanocomposites
Authors: Okpaneje Onyinye Theresa, Ugwu Laeticia Udodiri, Okereke Ngozi Agatha, Okoli Nonso Livinus
Abstract:
This work concerns the synthesis and optical characterization of polyaniline/nickel sulphide nanocomposites. Polyaniline (PANI) and nickel sulphide (NiS) nanoparticles were synthesized by oxidative chemical polymerization and the sol-gel method, respectively. Polyaniline/nickel sulphide nanocomposites with various concentrations of NiS were synthesized by in-situ polymerization of the aniline monomer. In each case, the nickel sulphide nanoparticles were uniformly dispersed in the aniline hydrochloride before the initiation of oxidative chemical polymerization using ammonium persulphate. The samples formed were subjected to optical characterization using an ultraviolet (UV)-visible light (VIS) spectrophotometer (model: 756S UV-VIS). Optical analysis of the synthesized nanoparticles and nanocomposites showed absorption of radiation within the VIS region. The Tauc model was used to obtain the optical band gap. The energy band gap values of PANI and NiS were found to be 2.50 eV and 1.95 eV, respectively. The PANI/NiS nanocomposites have an energy band gap that decreased from 2.25 eV to 1.90 eV as the amount of NiS increased (from 0.5 g to 2.0 g). These optical results show that the nanocomposites are potential materials for solar cells and optoelectronic devices. The structural analysis confirmed the formation of polyaniline and hexagonal nickel sulphide with an average crystallite size of 25.521 nm, while the average crystallite sizes of the PANI/NiS nanocomposites ranged from 19.458 nm to 25.108 nm. Average particle sizes obtained from the SEM images ranged from 23.24 nm to 51.88 nm. Compositional results confirmed the presence of the expected elements making up the nanoparticles and nanocomposites.
Keywords: polyaniline, nickel sulphide, polyaniline-nickel sulphide nanocomposite, optical characterization, structural analysis, morphological properties, compositional properties
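The Tauc analysis referred to above extracts the band gap from the absorption edge; assuming a direct allowed transition (the exponent is an assumption, as the abstract does not state the transition type), the working relation is:

```latex
(\alpha h\nu)^{2} = A\,(h\nu - E_g)
```

where alpha is the absorption coefficient, h-nu the photon energy, and A a transition-dependent constant; E_g is read off by extrapolating the linear region of the (alpha h-nu)^2 versus h-nu plot to zero absorption.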
Procedia PDF Downloads 121

9154 Electrochemical APEX for Genotyping MYH7 Gene: A Low Cost Strategy for Minisequencing of Disease Causing Mutations
Authors: Ahmed M. Debela, Mayreli Ortiz , Ciara K. O´Sullivan
Abstract:
The completion of the Human Genome Project (HGP) has paved the way for mapping the diversity in the overall genome sequence, which helps to understand the genetic causes of inherited diseases and susceptibility to drugs or environmental toxins. Arrayed primer extension (APEX) is a microarray-based minisequencing strategy for screening disease-causing mutations. It is derived from Sanger DNA sequencing and uses fluorescently labelled dideoxynucleotides (ddNTPs) for termination of a growing DNA strand from a primer whose 3'-end is designed immediately upstream of a site where a single nucleotide polymorphism (SNP) occurs. The use of DNA polymerase gives APEX very high accuracy and specificity, which in turn makes it a method of choice for multiplex SNP detection. Coupling the high specificity of this method with the high sensitivity, low cost, and compatibility for miniaturization of electrochemical techniques would offer an excellent platform for detection of mutations as well as sequencing of DNA templates. We are developing an electrochemical APEX for the analysis of SNPs found in the MYH7 gene in a group of cardiomyopathy patients. The ddNTPs were labelled with four different redox-active compounds with four distinct potentials. Thiolated oligonucleotide probes were immobilised on gold and glassy carbon substrates, followed by hybridisation with complementary target DNA just adjacent to the base to be extended by the polymerase. Electrochemical interrogation was performed after the incorporation of the redox-labelled dideoxynucleotide. The work involved the synthesis and characterisation of the redox-labelled ddNTPs, the optimisation and characterisation of surface functionalisation strategies, and the nucleotide incorporation assays.
Keywords: array-based primer extension, labelled ddNTPs, electrochemical, mutations
Procedia PDF Downloads 252

9153 Cost Benefit Analysis: Evaluation among the Millimetre Wavebands and SHF Bands of Small Cell 5G Networks
Authors: Emanuel Teixeira, Anderson Ramos, Marisa Lourenço, Fernando J. Velez, Jon M. Peha
Abstract:
This article discusses cost-benefit analysis aspects of the millimetre wavebands (mmWaves) and the Super High Frequency (SHF) band. The decay of the carrier-to-noise-plus-interference ratio with coverage distance is assessed by considering two different path loss models: the two-slope urban micro Line-of-Sight (UMiLoS) model for the SHF band and the modified Friis propagation model for frequencies above 24 GHz. The equivalent supported throughput is estimated at the 5.62, 28, 38, 60, and 73 GHz frequency bands, and the influence of the carrier-to-noise-plus-interference ratio on the radio and network optimization process is explored. Mostly owing to the attenuation behaviour of the two-slope propagation model for the SHF band, the supported throughput in this band is higher than in the millimetre wavebands only for the longest cell lengths. The cost-benefit analysis of these pico-cellular networks was performed for regular cellular topologies, considering unlicensed spectrum. For the shortest distances, an optimum of the revenue in percentage terms can be distinguished at cell lengths R ≈ 10 m for the millimetre wavebands, while for the longest distances an optimum of the revenue can be observed at R ≈ 550 m for 5.62 GHz. For the 5.62 GHz band, the profit is slightly lower than for the millimetre wavebands at the shortest values of R; it starts to increase for cell lengths approximately equal to the ratio between the break-point distance and the co-channel reuse factor, achieving a maximum at R approximately equal to 550 m.
Keywords: millimetre wavebands, SHF band, SINR, cost benefit analysis, 5G
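To make the propagation comparison concrete, here is a hedged sketch of free-space (Friis) path loss versus frequency; the paper's modified-Friis parameters and the UMiLoS break-point constants are not given in the abstract, so the exponent and distance used below are illustrative assumptions:

```python
import math

def friis_path_loss_db(d_m, f_hz, n=2.0):
    """Free-space (Friis) path loss in dB at distance d_m and frequency
    f_hz, generalized with a path-loss exponent n (n = 2 is the classic
    Friis case). The paper's modified model is not specified, so these
    are illustrative assumptions."""
    c = 3.0e8
    return 20 * math.log10(4 * math.pi * f_hz / c) + 10 * n * math.log10(d_m)

for f_ghz in (5.62, 28, 38, 60, 73):
    pl = friis_path_loss_db(100, f_ghz * 1e9)
    print(f"{f_ghz:5.2f} GHz at 100 m: {pl:6.1f} dB")
```

The 20·log10(f) term is what penalizes the mmWave bands at a given distance, while the two-slope UMiLoS model additionally steepens the SHF loss beyond its break-point distance, which is why the throughput ranking flips only at the longest cell lengths.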
Procedia PDF Downloads 148

9152 Development and Validation Method for Quantitative Determination of Rifampicin in Human Plasma and Its Application in Bioequivalence Test
Authors: Endang Lukitaningsih, Fathul Jannah, Arief R. Hakim, Ratna D. Puspita, Zullies Ikawati
Abstract:
Rifampicin (RIF) is a semisynthetic antibiotic derivative of rifamycin B produced by Streptomyces mediterranei, and it has been used worldwide as a first-line drug prescribed throughout tuberculosis therapy. This study aims to develop and validate an HPLC method coupled with UV detection for the determination of rifampicin in spiked human plasma and its application in a bioequivalence study. The chromatographic separation was achieved on an RP-C18 column (LachromHitachi, 250 x 4.6 mm, 5 μm), utilizing a mobile phase of phosphate buffer/acetonitrile (55:45, v/v, pH 6.8 ± 0.1) at a flow rate of 1.5 mL/min. Detection was carried out at 337 nm using a spectrophotometer. The developed method was statistically validated for linearity, accuracy, limit of detection, limit of quantitation, precision, and specificity. The specificity of the method was ascertained by comparing chromatograms of blank plasma and plasma containing rifampicin; the matrix and rifampicin were well separated. The limit of detection and limit of quantification were 0.7 µg/mL and 2.3 µg/mL, respectively. The regression curve of the standard was linear (r > 0.999) over a concentration range of 20.0-100.0 µg/mL. The mean recovery of the method was 96.68 ± 8.06%. Both intraday and interday precision data showed reproducibility (R.S.D. 2.98% and 1.13%, respectively). Therefore, the method can be used for routine analysis of rifampicin in human plasma and in bioequivalence studies. The validated method was successfully applied in a pharmacokinetic and bioequivalence study of rifampicin tablets in a limited number of subjects (under Ethical Clearance No. KE/FK/6201/EC/2015). The mean values of Cmax, Tmax, AUC(0-24), and AUC(0-∞) for the test formulation of rifampicin were 5.81 ± 0.88 µg/mL, 1.25 hours, 29.16 ± 4.05 µg∙h/mL, and 29.41 ± 4.07 µg∙h/mL, respectively. Meanwhile, for the reference formulation, the values were 5.04 ± 0.54 µg/mL, 1.31 hours, 27.20 ± 3.98 µg∙h/mL, and 27.49 ± 4.01 µg∙h/mL. From the bioequivalence study, the 90% CIs for the test/reference formulation ratios of the logarithmically transformed Cmax and AUC(0-24) were 97.96-129.48% and 99.13-120.02%, respectively. According to the bioequivalence test guidelines of the European Commission-European Medicines Agency, it can be concluded that the test formulation of rifampicin is bioequivalent to the reference formulation.
Keywords: validation, HPLC, plasma, bioequivalence
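The 90% confidence intervals quoted above come from the standard analysis on log-transformed pharmacokinetic parameters. A hedged sketch of that computation, simplified to a paired analysis with made-up Cmax values (the study would actually use a crossover-design ANOVA, which is omitted here):

```python
import numpy as np
from scipy import stats

# Illustrative Cmax values (ug/mL), NOT the study's data
test = np.array([5.9, 5.3, 6.2, 5.6, 5.8])    # test formulation
ref  = np.array([5.1, 4.8, 5.6, 5.0, 5.2])    # reference formulation

diff = np.log(test) - np.log(ref)             # paired log-differences
mean, sem = diff.mean(), stats.sem(diff)
lo, hi = stats.t.interval(0.90, df=len(diff) - 1, loc=mean, scale=sem)
print(f"90% CI for geometric mean ratio: "
      f"{np.exp(lo) * 100:.2f}% - {np.exp(hi) * 100:.2f}%")
```

Bioequivalence is typically concluded when this interval falls within the regulatory acceptance range for the log-transformed parameters.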
Procedia PDF Downloads 295

9151 Time Pressure and Its Effect at Tactical Level of Disaster Management
Authors: Agoston Restas
Abstract:
Introduction: When managing disasters, decision makers can often face special situations in which any pre-sign of a drastic change is missing, and improvised decision making can therefore be required. The complexity, ambiguity, uncertainty, or volatility of the situation can often require improvisation in decision making. This can occur at any level of management (strategic, operational, and tactical), but at the tactical level the main reason for improvisation is surely time pressure, which is certainly the biggest problem during management. Methods: The author used different tools and methods to achieve his goals: one of them was the study of the relevant literature, another was his own experience as a firefighting manager. Other results come from two surveys referred to here: one was an essay analysis, the second a word association test specially created for the research. Results and discussion: This article shows that, in certain situations, multi-criteria evaluative decision-making processes simply cannot be used, or can be used only in a limited manner. However, managers, directors, or commanders often find themselves in situations that simply cannot be ignored, in which decisions must be made in a short time. The functional background of decisions made in a short time, and their mechanism, which differs from the conventional one, has been studied recently, and this special decision procedure was given the name recognition-primed decision. In the article, the author illustrates the limits of analytical decision-making, presents the general operating mechanism of recognition-primed decision-making, elaborates on its special model relevant to managers at the tactical level, and explores and systemizes the factors that facilitate (catalyze) the processes, with an example involving fire managers.
Keywords: decision making, disaster managers, recognition primed decision, model for making decisions in emergencies
Procedia PDF Downloads 262

9150 Process Development of pVAX1/lacZ Plasmid DNA Purification Using Design of Experiment
Authors: Asavasereerat K., Teacharsripaitoon T., Tungyingyong P., Charupongrat S., Noppiboon S., Hochareon L., Kitsuban P.
Abstract:
The third generation of vaccines is based on gene therapy, where DNA is introduced into patients. The antigenic or therapeutic proteins encoded by the transgene DNA trigger an immune response to counteract various diseases. Moreover, DNA vaccines offer customizable protection and treatment with high stability, so the production of DNA vaccines has become of interest. According to the USFDA guidance for industry, the recommended limit for host-cell impurities is lower than 1%, and the homogeneity of the active conformation, supercoiled DNA, should be more than 80%. Thus, a purification strategy using two-step chromatography has been established and verified for its robustness. Herein, pVAX1/lacZ, a USFDA pre-approved DNA vaccine backbone, was used and transformed into E. coli strain DH5α. Three purification process parameters, namely the sample-loading flow rate and the salt concentrations in the washing and eluting buffers, were studied, and the experiment was designed using the response surface method with a central composite face-centered (CCF) design as a model. The designed range of the selected parameters was a 10% variation from the optimized set point, as a safety factor. The purity, expressed as the percentage of supercoiled conformation obtained from each chromatography step (AIEX and HIC), was analyzed by HPLC. The response data were used to establish a regression model and were statistically analyzed, followed by Monte Carlo simulation using SAS JMP. The purity of the product obtained from AIEX and HIC was between 89.4-92.5% and 88.3-100.0%, respectively. Monte Carlo simulation showed that the pVAX1/lacZ purification process is robust, with 0.90 confidence intervals in the ranges of 90.18-91.00% and 95.88-100.00% for AIEX and HIC, respectively.
Keywords: AIEX, DNA vaccine, HIC, purification, response surface method, robustness
Procedia PDF Downloads 211

9149 Design and Implementation of Collaborative Editing System Based on Physical Simulation Engine Running State
Authors: Zhang Songning, Guan Zheng, Ci Yan, Ding Gangyi
Abstract:
The application of physical simulation engines in collaborative editing systems has an important background and role. Firstly, physical simulation engines can provide real-world physical simulation, enabling users to interact and collaborate in real time in virtual environments; this provides a more intuitive and immersive experience for collaborative editing systems, allowing users to perceive and understand the elements and operations in collaborative editing more accurately. Secondly, through physical simulation engines, different users can share a virtual space and perform real-time collaborative editing within it. This real-time sharing and collaborative editing helps to synchronize information among team members and improve the efficiency of collaborative work. In experiments, the average single-user model transmission speed in the collaborative editing system increased by 141.91%; the average single-user model processing speed increased by 134.2%; the average single-user processing flow rate increased by 175.19%; and the overall single-user efficiency improved by 150.43%. As the number of users increases, the overall efficiency remains stable, and the collaborative editing system based on the physical simulation engine's running state also scales horizontally. It is not difficult to see that the design and implementation of a collaborative editing system based on physical simulation engines not only enriches the user experience but also optimizes the effectiveness of team collaboration, providing new possibilities for collaborative work.
Keywords: physics engine, simulation technology, collaborative editing, system design, data transmission
Procedia PDF Downloads 91

9148 Mutual Fund Anchoring Bias with its Parent Firm Performance: Evidence from Mutual Fund Industry of Pakistan
Authors: Muhammad Tahir
Abstract:
Purpose: The purpose of the study is to find anchoring bias behavior in mutual fund returns relative to parent firm performance in Pakistan. Research Methodology: The paper used monthly returns of equity funds whose parent firms existed from 2011 to 2021, along with the parent firm returns. Proximity to the 52-week highest return is calculated by dividing the fund return by the parent firm's 52-week highest return. Control variables are also included, and a panel regression model is used to estimate the results. For robustness, a feasible generalized least squares (FGLS) model is also used. Findings: The results showed that anchoring bias exists in mutual fund returns with respect to parent firm performance. The FGLS results reaffirm those obtained from the panel regression. Proximity to the 52-week high (Xc) is significant in both models. Research Implication: Since most mutual funds have a parent firm, anchoring bias is found in mutual fund returns with respect to parent firm performance. Practical Implication: Mutual fund investors in Pakistan invest in equity funds in which behavioral biases exist, although there might be better opportunities in the market. Originality/Value Addition: This research is a pioneering study investigating anchoring bias in mutual fund returns with respect to parent firm performance. Research Limitations: The sample is limited to only 23 equity funds, which have a parent firm and for which data were available from 2011 to 2021.
Keywords: mutual fund, anchoring bias, 52-week high return, proximity to 52-week high, parent firm performance, panel regression, FGLS
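A hedged sketch of the anchoring variable described in the methodology; the column names and the toy monthly data are illustrative, not the study's dataset, and a 12-month rolling window stands in for the 52-week horizon:

```python
import pandas as pd

# Toy monthly returns (illustrative values only)
df = pd.DataFrame({
    "fund_return":   [0.021, 0.015, -0.004, 0.030, 0.012],
    "parent_return": [0.035, 0.028,  0.010, 0.042, 0.025],
})

# Rolling high of the parent firm's return serves as the anchor
df["parent_52wk_high"] = df["parent_return"].rolling(12, min_periods=1).max()
df["proximity"] = df["fund_return"] / df["parent_52wk_high"]
print(df)
```

The proximity series would then enter the panel regression (alongside the control variables) as the anchoring regressor Xc.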
Procedia PDF Downloads 122

9147 Application of Hydrological Model in Support of Streamflow Allocation in Arid Watersheds in Northwestern China
Authors: Chansheng He, Lanhui Zhang, Baoqing Zhang
Abstract:
Spatial heterogeneity of the landscape significantly affects watershed hydrological processes, particularly in high-elevation, cold mountainous watersheds such as the inland river (terminal lake) basins in Northwest China, where the mountainous upper reaches are the main source of streamflow for the downstream agricultural oases and desert ecosystems. Thus, it is essential to take spatial variations of hydrological processes into account in streamflow allocation at the watershed scale. This paper adapts the Distributed Large Basin Runoff Model (DLBRM) to the Heihe River Watershed, the second largest inland river basin in Northwest China with a drainage area of about 128,000 km², to understand the transfer and partitioning mechanisms among glacier and snowmelt, surface runoff, evapotranspiration, and groundwater recharge across the upper, middle, and lower reaches of the study area. Results indicate that the upper-reach Qilian Mountain area is the main source of streamflow for the middle-reach agricultural oasis and the downstream desert areas. Large withdrawals for agricultural irrigation in the middle reach have significantly depleted river flow for the lower-reach desert ecosystems. Innovative conservation and enforcement programs need to be undertaken to ensure the successful implementation of the State Council's water allocation plan of delivering 0.95 × 10⁹ m³ of water downstream annually in the Heihe River Watershed.
Keywords: DLBRM, Northwestern China, spatial variation, water allocation
Procedia PDF Downloads 3069146 Mammographic Multi-View Cancer Identification Using Siamese Neural Networks
Authors: Alisher Ibragimov, Sofya Senotrusova, Aleksandra Beliaeva, Egor Ushakov, Yuri Markin
Abstract:
Mammography plays a critical role in screening for breast cancer in women, and artificial intelligence has enabled the automatic detection of diseases in medical images. Many current techniques for mammogram analysis focus on a single view (mediolateral oblique or craniocaudal), while in clinical practice, radiologists consider multiple views of mammograms from both breasts to reach a correct decision. Consequently, computer-aided diagnosis (CAD) systems could benefit from incorporating information gathered from multiple views. In this study, we introduce a method based on a Siamese neural network (SNN) model that simultaneously analyzes three mammographic views: bilateral and ipsilateral. In this way, when a decision is made on a single image of one breast, attention is also paid to two other images: a view of the same breast in a different projection and an image of the other breast. The algorithm thus closely mimics the radiologist's practice of attending to the entire examination of a patient rather than to a single image. Additionally, to the best of our knowledge, this research represents the first experiments conducted on the recently released Vietnamese dataset of digital mammography (VinDr-Mammo). On an independent test set of images from this dataset, the best model achieved an AUC of 0.87 per image. This suggests that the approach can provide a valuable automated second opinion in the interpretation of mammograms and breast cancer diagnosis, which in the future may help alleviate the burden on radiologists and serve as an additional layer of verification.Keywords: breast cancer, computer-aided diagnosis, deep learning, multi-view mammogram, siamese neural network
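A minimal PyTorch sketch of the tri-view idea described above: one weight-shared encoder embeds the examined view plus the ipsilateral view (same breast, other projection) and the contralateral view, and a small head fuses the three embeddings. The encoder depth, embedding size, and concatenation-based fusion are our assumptions, not the authors' exact architecture.

```python
# Sketch of a tri-view Siamese classifier: the same encoder (shared weights)
# processes all three mammographic views; a fusion head predicts malignancy.
import torch
import torch.nn as nn

class TriViewSiamese(nn.Module):
    def __init__(self, embed_dim: int = 64):
        super().__init__()
        # Shared CNN encoder: identical weights for every view.
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(32, embed_dim),
        )
        # Fusion head: concatenate the three embeddings, output one logit.
        self.head = nn.Sequential(
            nn.Linear(3 * embed_dim, 64), nn.ReLU(),
            nn.Linear(64, 1),
        )

    def forward(self, main, ipsilateral, contralateral):
        views = (main, ipsilateral, contralateral)
        z = torch.cat([self.encoder(v) for v in views], dim=1)
        return self.head(z)  # logit; apply sigmoid for a probability

model = TriViewSiamese()
views = [torch.randn(2, 1, 128, 128) for _ in range(3)]  # 2 grayscale images per view
print(model(*views).shape)  # torch.Size([2, 1])
```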
Procedia PDF Downloads 1419145 Evaluating the Implementation of a Quality Management System in the COVID-19 Diagnostic Laboratory of a Tertiary Care Hospital in Delhi
Authors: Sukriti Sabharwal, Sonali Bhattar, Shikhar Saxena
Abstract:
Introduction: The COVID-19 molecular diagnostic laboratory is the cornerstone of COVID-19 diagnosis, as the patient's treatment and management protocol depend on the molecular results. It is therefore extremely important that the laboratory generating these results adheres to quality management processes to increase the accuracy and validity of the reports produced. We started our own molecular diagnostic setup at the onset of the pandemic and conducted this study to generate quality management data that would help us improve on our weak points. Materials and Methods: A total of 14561 samples were evaluated by the retrospective observational method. The quality variables analysed were classified into pre-analytical, analytical, and post-analytical variables, and the results are presented as percentages. Results: Among the pre-analytical variables, sample leakage was the most common cause of sample rejection (134/14561, 0.92%), followed by non-generation of SRF ID (76/14561, 0.52%) and non-compliance with triple packaging (44/14561, 0.3%). The other pre-analytical aspects assessed were incomplete patient identification (17/14561, 0.11%), insufficient sample quantity (12/14561, 0.08%), missing forms/samples (7/14561, 0.04%), samples in the wrong vials or empty VTM tubes (5/14561, 0.03%), and LIMS entry not done (2/14561, 0.01%). Internal quality control could not be obtained for 0.37% of samples (55/14561). We also experienced two incidents of cross-contamination among the samples, resulting in false-positive results. Among the post-analytical factors, 0.07% of samples (11/14561) could not be dispatched within the stipulated time frame. Conclusion: Adherence to quality control processes is paramount for the smooth running of any diagnostic laboratory, especially those involved in critical reporting. The indicators not only help keep laboratory parameters in check but also allow comparison with other laboratories.Keywords: laboratory quality management, COVID-19, molecular diagnostics, healthcare
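For concreteness, the reported indicators reduce to simple rate computations against the total sample volume. The sketch below reproduces the pre-analytical percentages from the counts quoted in the abstract; the dictionary layout and category labels are our own.

```python
# Quality indicators as rates: each rejection category is counted against
# the total sample volume and reported as a percentage. Counts are taken
# from the abstract.
TOTAL_SAMPLES = 14561

preanalytical_rejections = {
    "sample leakage": 134,
    "SRF ID not generated": 76,
    "triple packaging not followed": 44,
    "incomplete patient identification": 17,
    "insufficient sample quantity": 12,
    "missing forms/samples": 7,
    "wrong vials / empty VTM tubes": 5,
    "LIMS entry not done": 2,
}

for cause, n in preanalytical_rejections.items():
    print(f"{cause}: {n}/{TOTAL_SAMPLES} = {n / TOTAL_SAMPLES:.2%}")
```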
Procedia PDF Downloads 1689144 To Design an Architectural Model for On-Shore Oil Monitoring Using Wireless Sensor Network System
Authors: Saurabh Shukla, G. N. Pandey
Abstract:
In recent times, oil exploration and monitoring in on-shore areas have gained much importance, considering that oil imports account for 62 percent of India's total imports. An architectural model based on a wireless sensor network for monitoring on-shore deep oil wells is therefore being developed to obtain better estimates of oil prospects. Very few unexplored oil-bearing areas remain today: countries like India have limited land area and oil resources, a problem shared by most countries, and rising oil prices have further aggravated it, making on-shore exploration all the more pressing. The relative simplicity, small size, and affordable cost of wireless sensor nodes permit dense deployment in on-shore locations for monitoring oil wells, and deploying a wireless sensor network over large areas is highly cost-effective. The objective of this system is to send real-time oil-monitoring information to the regulatory and welfare authorities so that suitable action can be taken. The system architecture is composed of a sensor network, a processing/transmission unit, and a server, and can remotely monitor real-time oil exploration and monitoring conditions in the identified areas. Wireless sensor network systems are wireless, have scarce power, operate in real time, use sensors and actuators as interfaces, and have dynamically changing sets of resources; their aggregate behaviour is important, and location is critical. In this system, communication takes place between the server and the remotely placed sensors, and the server provides the real-time oil exploration and monitoring conditions to the welfare authorities.Keywords: sensor, wireless sensor network, oil, on-shore level
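A minimal sketch of the three-tier flow described in the abstract, with sensor nodes feeding a processing/transmission unit that forwards batched readings to a server. Field names, batch size, and the in-memory transport are illustrative assumptions; a real deployment would use radio links and a network protocol.

```python
# Three-tier flow: sensor nodes sample well-site readings, a
# processing/transmission unit batches them, and a server ingests the
# real-time records. All names and values are illustrative.
import time
from dataclasses import dataclass, asdict

@dataclass
class WellReading:
    node_id: str
    pressure_kpa: float
    temperature_c: float
    timestamp: float

class MonitoringServer:
    def ingest(self, batch: list) -> None:
        for r in batch:
            print("server stored:", asdict(r))

class ProcessingUnit:
    """Aggregates node readings before forwarding them to the server."""
    def __init__(self, server: MonitoringServer):
        self.server = server
        self.buffer = []

    def receive(self, reading: WellReading) -> None:
        self.buffer.append(reading)
        if len(self.buffer) >= 2:  # forward in small batches
            self.server.ingest(self.buffer)
            self.buffer = []

server = MonitoringServer()
gateway = ProcessingUnit(server)
gateway.receive(WellReading("node-01", 4820.5, 41.2, time.time()))
gateway.receive(WellReading("node-02", 4675.0, 39.8, time.time()))
```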
Procedia PDF Downloads 453