Search results for: signal generator-fault indicator
1661 Target-Triggered DNA Motors and their Applications to Biosensing
Authors: Hongquan Zhang
Abstract:
Inspired by endogenous protein motors, researchers have constructed various synthetic DNA motors based on the specificity and predictability of Watson-Crick base pairing. However, the application of DNA motors to signal amplification and biosensing is limited by low mobility and the difficulty of real-time monitoring of the walking process. The objective of our work was to construct a new type of DNA motor, termed target-triggered DNA motors, that can walk for hundreds of steps in response to a single target binding event. To improve the mobility and processivity of DNA motors, we used gold nanoparticles (AuNPs) as scaffolds to build high-density, three-dimensional tracks. Hundreds of track strands are conjugated to a single AuNP. To enable DNA motors to respond to specific protein and nucleic acid targets, we adapted binding-induced DNA assembly into the design of the target-triggered DNA motors. In response to the binding of specific target molecules, DNA motors are activated to walk autonomously along the AuNP, powered by a nicking endonuclease or DNAzyme-catalyzed cleavage of track strands. Each moving step restores the fluorescence of a dye molecule, enabling real-time monitoring of the operation of the DNA motors. The motors can translate a single binding event into the generation of hundreds of oligonucleotides from a single nanoparticle. The motors have been applied to amplify the detection of proteins and nucleic acids in test tubes and live cells. The motors were able to detect low pM concentrations of specific protein and nucleic acid targets in homogeneous solutions without the need for separation. Target-triggered DNA motors are significant for broadening the applications of DNA motors to molecular sensing, cell imaging, molecular interaction monitoring, and controlled delivery and release of therapeutics.
Keywords: biosensing, DNA motors, gold nanoparticles, signal amplification
Procedia PDF Downloads 89
1660 Power Quality Modeling Using Recognition Learning Methods for Waveform Disturbances
Authors: Sang-Keun Moon, Hong-Rok Lim, Jin-O Kim
Abstract:
This paper presents Power Quality (PQ) modeling and filtering processes for distribution system disturbances using recognition learning methods. Typical PQ waveforms with mathematical applications and gathered field data are applied to the proposed models. The objective of this paper is to analyze PQ data with respect to monitoring, discriminating, and evaluating the waveforms of power disturbances, in order to support preventive protection against system failures and the estimation of complex system problems. The examined signal filtering techniques are used for field waveform noise removal and feature extraction. Using extraction and learning classification techniques, the efficiency of recognizing PQ disturbances was verified, with a focus on interactive modeling methods. The waveforms of eight selected disturbances are modeled with randomized parameters within the IEEE 1159 PQ ranges. The ranges, parameters, and weights are updated according to the field waveforms obtained. Currents undergo the same process as voltages to obtain waveform features, apart from some of the ratings and filters. Changing loads cause distortion in the voltage waveform by drawing different patterns of current variation. In conclusion, PQ disturbances in the voltage and current waveforms show different patterns of variation and disturbance, and a modified technique based on the symmetrical components in the time domain is proposed in this paper for PQ disturbance detection and subsequent classification. Our method is based on the fact that waveforms obtained from the suggested trigger conditions contain potential information for abnormality detection. The extracted features are sequentially applied to estimation and recognition learning modules for further studies.
Keywords: power quality recognition, PQ modeling, waveform feature extraction, disturbance trigger condition, PQ signal filtering
Procedia PDF Downloads 191
1659 A Straightforward Approach for Determining the Weights of Decision Makers Based on Angle Cosine and Projection Method
Authors: Qiang Yang, Ping-An Du
Abstract:
Group decision making with multiple attributes has attracted intensive attention in the decision analysis area. This paper assumes that the contributions of the decision makers (DMs) to the decision process are not equal, owing to their different knowledge and experience in the group setting. The aim of this paper is to develop a novel approach to determine the weights of DMs in group decision making problems. In this paper, the weights of DMs are determined in the group decision environment via the angle cosine and projection method. First of all, the average of all individual decisions is defined as the ideal decision. After that, we define the weight of each decision maker (DM) by aggregating the angle cosine and the projection between the individual decision and the ideal decision with an associated direction indicator μ. By using the weights of DMs, all individual decisions are aggregated into a collective decision. Further, the preference order of alternatives is ranked in accordance with the overall row values of the collective decision. Finally, an example in a chemical company is provided to illustrate the developed approach.
Keywords: angle cosine, ideal decision, projection method, weights of decision makers
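The weighting scheme just described can be sketched in a few lines of Python. Note that this is an illustrative reading of the abstract, not the paper's exact formulas: the aggregation rule combining cosine and projection via μ, and the normalization of the projection, are assumptions.

```python
import numpy as np

def dm_weights(decisions, mu=0.5):
    """Weight each DM by the similarity of their decision matrix to the
    ideal (average) decision, mixing angle cosine and projection with an
    assumed direction indicator mu; weights are normalized to sum to 1."""
    D = np.asarray(decisions, dtype=float)      # shape: (n_DMs, m, n)
    ideal = D.mean(axis=0)                      # ideal decision = average
    scores = []
    for Dk in D:
        a, b = Dk.ravel(), ideal.ravel()
        cos = a @ b / (np.linalg.norm(a) * np.linalg.norm(b))
        proj = (a @ b) / np.linalg.norm(b)      # projection of Dk onto ideal
        proj_rel = proj / np.linalg.norm(b)     # ~1 when Dk equals the ideal
        scores.append(mu * cos + (1 - mu) * proj_rel)
    w = np.array(scores)
    return w / w.sum()

# Example: three DMs rating 4 alternatives on 3 attributes
rng = np.random.default_rng(0)
decisions = rng.uniform(0, 1, size=(3, 4, 3))
w = dm_weights(decisions)
collective = np.tensordot(w, decisions, axes=1)   # weighted collective decision
ranking = collective.sum(axis=1).argsort()[::-1]  # rank by overall row value
print(w, ranking)
```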
Procedia PDF Downloads 381
1658 New Methodology for Monitoring Alcoholic Fermentation Processes Using Refractometry
Authors: Boukhiar Aissa, Iguergaziz Nadia, Halladj Fatima, Lamrani Yasmina, Benamara Salem
Abstract:
Determining the alcohol content in an alcoholic fermentation bioprocess is of great importance; in fact, it is a key indicator for monitoring this bioprocess. Several methodologies (chemical, spectrophotometric, chromatographic...) are used for the determination of this parameter. However, these techniques are time-consuming and require rigorous preparations, sometimes dangerous chemical reagents, and/or expensive equipment. In the present study, date juice is used as the substrate of alcoholic fermentation. The extracted juice undergoes an alcoholic fermentation by Saccharomyces cerevisiae. The study of the possible use of refractometry as the sole means for the in situ control of this process revealed a good correlation (R² = 0.98) between initial and final °Brix: °Brix_f = 0.377 × °Brix_i. In addition, we verified the relationship between the variation between final and initial °Brix (Δ°Brix) and the alcohol content produced (A_exp): Δ°Brix / A_exp = 1.1. This allows iso-response charts to be traced that permit the determination of the alcohol and residual sugar contents with a mean relative error (MRE) of 5.35%.
Keywords: refractometry, alcohol, residual sugar, fermentation, brix, date, juice
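The two empirical relations reported above translate into a trivial calculation; a minimal sketch in Python, using only the coefficients reported in the abstract (0.377 and 1.1), follows. The alcohol content is expressed in whatever units the paper's A_exp uses.

```python
def predict_final_brix(brix_initial):
    """Expected final degree Brix from the reported correlation (R^2 = 0.98)."""
    return 0.377 * brix_initial

def alcohol_from_brix(brix_initial, brix_final):
    """Alcohol content from the reported ratio: delta_Brix / A_exp = 1.1."""
    delta = brix_initial - brix_final
    return delta / 1.1

brix_i = 20.0                        # example initial reading of date juice
brix_f = predict_final_brix(brix_i)  # ~7.5 degrees Brix expected
print(f"final Brix ~ {brix_f:.1f}, "
      f"alcohol ~ {alcohol_from_brix(brix_i, brix_f):.1f}")
```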
Procedia PDF Downloads 484
1657 EIS Study of the Corrosion Behavior of an Organic Coating Applied on Algerian Oil Tanker in Sea Water
Authors: Nadia Hammouda, Kamel Belmokre
Abstract:
Organic coatings are widely employed in the corrosion protection of most metal surfaces, particularly steel. They provide a barrier against corrosive species present in the environment, owing to their high resistance to the transport of oxygen, water, and ions. This study focuses on the evaluation of the corrosion protection performance of an epoxy paint on a carbon steel surface in sea water by Electrochemical Impedance Spectroscopy (EIS). The electrochemical behavior of the painted surface was estimated through EIS parameters comprising the paint film resistance, the paint film capacitance, and the double layer capacitance. On the basis of calculations using the EIS spectra, it was observed that the pore resistance (Rpore) decreased with the appearance of the double layer capacitance (Cdl), due to electrolyte penetration through the film. This was further confirmed by the decrease of the diffusion resistance (Rd), which was also an indicator of the deterioration of the paint film's protectiveness.
Keywords: epoxy paints, carbon steel, electrochemical impedance spectroscopy, corrosion mechanisms, sea water
Procedia PDF Downloads 376
1656 Formulating a Flexible-Spread Fuzzy Regression Model Based on Dissemblance Index
Authors: Shih-Pin Chen, Shih-Syuan You
Abstract:
This study proposes a regression model with flexible spreads for fuzzy input-output data to cope with situations in which existing measures cannot reflect the actual estimation error. The main idea is that a dissemblance index (DI) is carefully identified and defined for precisely measuring the actual estimation error. Moreover, the graded mean integration (GMI) representation is adopted to determine more representative numeric regression coefficients. Notably, to comprehensively compare the performance of the proposed model with that of other models, three different criteria are adopted. The results from commonly used numerical test examples and an application to Taiwan's business monitoring indicator illustrate that the proposed dissemblance index method not only produces valid fuzzy regression models for fuzzy input-output data, but also has satisfactory and stable performance in terms of the total estimation error based on these three criteria.
Keywords: dissemblance index, forecasting, fuzzy sets, linear regression
Procedia PDF Downloads 366
1655 An Accurate Computer-Aided Diagnosis: CAD System for Diagnosis of Aortic Enlargement by Using Convolutional Neural Networks
Authors: Mahdi Bazarganigilani
Abstract:
Aortic enlargement, also known as an aortic aneurysm, can occur when the walls of the aorta become weak. This disease can become deadly if overlooked and undiagnosed. In this paper, a computer-aided diagnosis (CAD) system is introduced to accurately diagnose aortic enlargement from chest x-ray images. An enhanced convolutional neural network (CNN) was employed and trained by transfer learning using three main areas cropped from the original images: the left lung, the heart, and the right lung. The accuracy of the system was then evaluated on 1001 samples using 4-fold cross-validation. A promising accuracy of 90% was achieved in terms of the F-measure indicator. The results showed that using different areas from the original image in the training phase of the CNN could increase the accuracy of predictions. This encouraged the author to evaluate the method on a larger dataset, and even on different CAD systems, for further enhancement of the methodology.
Keywords: computer-aided diagnosis systems, aortic enlargement, chest X-ray, image processing, convolutional neural networks
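A minimal sketch of the described setup — a pretrained CNN fine-tuned per cropped region and scored by 4-fold cross-validation — is given below in Python/Keras. The backbone, crop coordinates, image size, and training schedule are all placeholders, since the abstract does not specify them.

```python
import numpy as np
import tensorflow as tf
from sklearn.model_selection import KFold

def build_model(input_shape=(224, 224, 3)):
    # Pretrained backbone for transfer learning (ResNet50 is an assumption)
    base = tf.keras.applications.ResNet50(include_top=False, weights="imagenet",
                                          input_shape=input_shape, pooling="avg")
    base.trainable = False
    out = tf.keras.layers.Dense(1, activation="sigmoid")(base.output)
    model = tf.keras.Model(base.input, out)
    model.compile(optimizer="adam", loss="binary_crossentropy")
    return model

def crop(images, region):
    # Hypothetical pixel boxes for the left lung / heart / right lung areas
    y0, y1, x0, x1 = region
    return tf.image.resize(images[:, y0:y1, x0:x1, :], (224, 224)).numpy()

def f_measure_cv(X, y, region, folds=4):
    """4-fold cross-validated F-measure for one cropped area."""
    Xc, scores = crop(X, region), []
    for tr, te in KFold(folds, shuffle=True, random_state=0).split(Xc):
        model = build_model()
        model.fit(Xc[tr], y[tr], epochs=5, verbose=0)
        pred = (model.predict(Xc[te]) > 0.5).astype(int).ravel()
        tp = np.sum((pred == 1) & (y[te] == 1))
        fp = np.sum((pred == 1) & (y[te] == 0))
        fn = np.sum((pred == 0) & (y[te] == 1))
        scores.append(2 * tp / (2 * tp + fp + fn))
    return float(np.mean(scores))

# Usage: f_measure_cv(X, y, region=(100, 800, 50, 500)) on the 1001-image set.
```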
Procedia PDF Downloads 167
1654 Vibration Based Structural Health Monitoring of Connections in Offshore Wind Turbines
Authors: Cristobal García
Abstract:
The visual inspection of bolted joints in wind turbines is dangerous, expensive, and impractical, because the platform cannot be reached by workboat in certain sea states and because of the high costs of transporting maintenance technicians to offshore platforms located far from the coast, especially if helicopters are involved. Consequently, wind turbine operators need simpler and less demanding techniques for analyzing bolt tightening. Vibration-based structural health monitoring is one of the oldest and most widely used means of monitoring the health of onshore and offshore wind turbines. The core of this work is to find out whether the modal parameters can be used efficiently as a key performance indicator (KPI) for the assessment of joint bolts in a 1:50 scale tower of a floating offshore wind turbine (12 MW). A non-destructive vibration test is used to extract the vibration signals of the tower in different damage states. The procedure can be summarized in three consecutive steps. First, an artificial excitation is introduced by means of a commercial shaker mounted on the top of the tower. Second, the vibration signals of the tower are recorded for 8 s at a sampling rate of 20 kHz using an array of commercial accelerometers (Endevco, 44A16-1032). Third, the natural frequencies, damping, and overall vibration mode shapes are calculated using the software Siemens LMS 16A. Experiments show that the natural frequencies, damping, and mode shapes of the tower depend directly on the fixing conditions of the tower, and therefore the variations of these parameters are a good indicator for estimating the static axial force acting in the bolt. Thus, the proposed vibration-based structural method can potentially be used as a diagnostic tool to evaluate the tightening torques of bolted joints, with the advantage of being an economical, straightforward, and multidisciplinary approach that can be applied to different typologies of connections by operation and maintenance technicians. In conclusion, TSI, in collaboration with the consortium of the FIBREGY project, is conducting innovative research in which vibrations are utilized for the estimation of the tightening torque of a 1:50 scale steel-based tower prototype. The findings of this research carried out in the context of FIBREGY have multiple implications for the assessment of bolted joint integrity in several types of connections, such as tower-to-nacelle, modular, tower-to-column, and tube-to-tube. This research is contextualized in the framework of the FIBREGY project. The EU-funded FIBREGY project (H2020, grant number 952966) will evaluate the feasibility of the design and construction of a new generation of marine renewable energy platforms using lightweight FRP materials in certain structural elements (e.g., tower, floating platform). The FIBREGY consortium is composed of 11 partners specialized in the offshore renewable energy sector and is funded partially by the H2020 programme of the European Commission with an overall budget of 8 million Euros.
Keywords: SHM, vibrations, connections, floating offshore platform
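A minimal sketch of the modal-extraction step — picking natural frequencies as peaks of the acceleration spectrum — is shown below in Python. The simulated signal and peak-picking settings are illustrative assumptions; the study itself used Siemens LMS software for the modal analysis.

```python
import numpy as np
from scipy.signal import find_peaks
from scipy.fft import rfft, rfftfreq

fs = 20_000          # sampling rate (Hz), as in the test setup
T = 8.0              # record length (s), as in the test setup
t = np.arange(0, T, 1 / fs)

# Placeholder accelerometer signal: two decaying modes plus noise
accel = (np.exp(-0.5 * t) * np.sin(2 * np.pi * 12.0 * t)
         + 0.4 * np.exp(-0.8 * t) * np.sin(2 * np.pi * 47.0 * t)
         + 0.05 * np.random.default_rng(0).standard_normal(t.size))

spec = np.abs(rfft(accel))
freqs = rfftfreq(t.size, 1 / fs)
peaks, _ = find_peaks(spec, height=spec.max() * 0.1, distance=100)
print("estimated natural frequencies (Hz):", freqs[peaks][:5])
# A shift of these frequencies between damage states is the kind of KPI
# used to infer changes in bolt tightening torque.
```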
Procedia PDF Downloads 130
1653 X-Ray Detector Technology Optimization in CT Imaging
Authors: Aziz Ikhlef
Abstract:
Most multi-slice CT scanners are built with detectors composed of scintillator-photodiode arrays. The photodiode arrays are mainly based on front-illuminated technology for detectors under 64 slices and on back-illuminated photodiodes for systems of 64 slices or more. Designs based on back-illuminated photodiodes were investigated for CT machines to overcome the challenge of the higher number of runs and connections required by front-illuminated diodes. In backlit diodes, the electronic noise has already been improved because of the reduction of the load capacitance due to the reduced routing. This translates into better image quality in low-signal applications, improving low-dose imaging in large patient populations. With the fast development of multi-detector-row CT (MDCT) scanners and the increasing number of examinations, both the medical and regulatory communities have raised significant concerns about the radiation dose received by the patient. In order to reduce individual exposure, and in response to the recommendations of the International Commission on Radiological Protection (ICRP), which suggests that all exposures should be kept as low as reasonably achievable (ALARA), every manufacturer is trying to implement strategies and solutions to optimize dose efficiency and image quality based on x-ray emission and scanning parameters. Added demands on CT detector performance also come from the increased utilization of spectral CT, or dual-energy CT, in which projection data at two different tube potentials are collected. One approach utilizes a technology called fast-kVp switching, in which the tube voltage is switched between 80 kVp and 140 kVp in a fraction of a millisecond. To reduce the cross-contamination of signals, the temporal response of the scintillator-based detector has to be extremely fast to minimize the residual signal from previous samples. In addition, this paper will present an overview of detector technologies and image chain improvements which have been investigated in the last few years to improve the signal-to-noise ratio and the dose efficiency of CT scanners in regular examinations and in energy discrimination techniques. Several parameters of the image chain in general, and of the detector technology in particular, contribute to the optimization of the final image quality. We will go through the properties of the post-patient collimation to improve the scatter-to-primary ratio; the scintillator material properties such as light output, afterglow, primary speed, and crosstalk to improve spectral imaging; the photodiode design characteristics; and the data acquisition system (DAS), optimized for crosstalk, noise, and temporal/spatial resolution.
Keywords: computed tomography, X-ray detector, medical imaging, image quality, artifacts
Procedia PDF Downloads 279
1652 Qualitative Measurement of Literacy
Authors: Indrajit Ghosh, Jaydip Roy
Abstract:
The literacy rate is an important indicator for the measurement of human development, but it is not a good one for capturing the qualitative dimension of the educational attainment of an individual or a society. The overall educational level of an area is an important issue beyond the literacy rate. The overall educational level can be thought of as an outcome of the educational levels of individuals, but there is no well-defined algorithm or mathematical model available to measure the overall educational level of an area. A heuristic approach based on the accumulated experience of experts is an effective one. It is evident that fuzzy logic offers a natural and convenient framework for modeling various concepts in the social science domain. This work suggests the implementation of fuzzy logic to develop a mathematical model for measuring the educational attainment of an area in terms of an Education Index. The contribution of the study is twofold: the conceptualization of an "Education Profile" and the proposal of a new mathematical model to measure educational attainment in terms of an "Education Index".
Keywords: education index, education profile, fuzzy logic, literacy
Procedia PDF Downloads 321
1651 Development of an Electrochemical Aptasensor for the Detection of Human Osteopontin Protein
Authors: Sofia G. Meirinho, Luis G. Dias, António M. Peres, Lígia R. Rodrigues
Abstract:
The emerging development of electrochemical aptasensors has enabled the easy and fast detection of protein biomarkers in standard and real samples. Biomarkers are produced by body organs or tumours and provide a measure of antigens on cell surfaces. When detected in high amounts in blood, they can be suggestive of tumour activity. These biomarkers are most often used to evaluate treatment effects or to assess the potential for metastatic disease in patients with established disease. Osteopontin (OPN) is a protein found in all body fluids and constitutes a possible biomarker, because its overexpression has been related to breast cancer evolution and metastasis. Currently, biomarkers are commonly used for the development of diagnostic methods, allowing the detection of the disease in its initial stages. A previously described RNA aptamer was used in the current work to develop a simple and sensitive electrochemical aptasensor with high affinity for human OPN. The RNA aptamer was biotinylated and immobilized on a gold electrode by avidin-biotin interaction. The electrochemical signal generated from the aptamer-target molecule interaction was monitored electrochemically using cyclic voltammetry in the presence of [Fe(CN)6]3−/4− as a redox probe. The observed signal showed a current decrease due to the binding of OPN. The preliminary results showed that this aptasensor enables the detection of OPN in standard solutions, showing good selectivity towards the target in the presence of other interfering proteins such as bovine OPN and bovine serum albumin. The results gathered in the current work suggest that the proposed electrochemical aptasensor is a simple and sensitive detection tool for human OPN and may thus have future applications in cancer disease monitoring.
Keywords: osteopontin, aptamer, aptasensor, screen-printed electrode, cyclic voltammetry
Procedia PDF Downloads 436
1650 Brain Stem Posterior Reversible Encephalopathy Syndrome in Nephrotic Syndrome
Authors: S. H. Jang
Abstract:
Posterior reversible encephalopathy syndrome (PRES) is characterized by acute neurologic symptoms (visual loss, headache, altered mental status, and seizures) and by typical imaging findings (bilateral subcortical and cortical edema with a predominantly posterior distribution). Nephrotic syndrome comprises proteinuria, hypoalbuminemia, and edema. It is well known that hypertension predisposes patients with nephrotic syndrome to PRES. A 45-year-old male was referred for sudden-onset vertigo and disequilibrium. He had a previous history of nephrotic syndrome, and his medical history included diabetes controlled with medication. He had been hospitalized because of generalized edema a few days earlier. His vital signs were stable. On neurologic examination, his mental state was alert. Horizontal nystagmus to the right side on return to the primary position was observed. He showed good-grade motor weakness and ataxia in the right upper and lower limbs without other sensory abnormalities. Brain MRI showed increased signal intensity on FLAIR images, decreased signal intensity on T1 images, and a focal enhancing lesion on contrast-enhanced T1 images throughout the midbrain, pons, and cerebellar peduncles symmetrically, compatible with vasogenic edema. Laboratory findings showed severe proteinuria and hypoalbuminemia. He was given intravenous dexamethasone and diuretics to reduce the vasogenic edema and raise the intravascular osmotic pressure. The nystagmus, motor weakness, and limb ataxia improved gradually over 2 weeks; he recovered without any neurologic symptoms or signs. Follow-up MRI showed fairly decreased vasogenic edema. We report a case of brain stem PRES in a normotensive nephrotic syndrome patient.
Keywords: posterior reversible encephalopathy syndrome, MRI, nephrotic syndrome, vasogenic brain edema
Procedia PDF Downloads 278
1649 A Novel Stress Instability Workability Criteria for Internal Ductile Failure in Steel Cold Heading Process
Authors: Amar Sabih, James Nemes
Abstract:
The occurrence of internal ductile failure within the adiabatic shear band (ASB) in cold-headed products presents a significant barrier in the fast-expanding cold-heading (CH) industry. The presence of internal ductile failure in cold-headed products may lead to catastrophic fracture under tensile loads despite the ductile nature of the material, causing expensive industrial recalls. Therefore, this paper presents a workability criterion that uses stress instability as an indicator to accurately reveal the locus of initiation of internal ductile failures. The concept of the instability criterion is to use the stress ratio at failure as a weighting function to indicate the initiation of ductile failure inside the ASBs. This paper presents a comprehensive experimental, metallurgical, and finite element simulation study to calculate the material constants used in this criterion.
Keywords: adiabatic shear band, workability criterion, ductile failure, stress instability
Procedia PDF Downloads 94
1648 Signal Processing of the Blood Pressure and Characterization
Authors: Hadj Abd El Kader Benghenia, Fethi Bereksi Reguig
Abstract:
In clinical medicine, the blood pressure signal obtained through hemodynamic monitoring carries rich pathophysiological information about the cardiovascular system, described through factors such as blood volume, arterial compliance, and peripheral resistance. In this work, we are interested in analyzing these signals and propose a detection algorithm to delineate the different sequences, in particular the systolic blood pressure (SBP), the diastolic blood pressure (DBP), and the dicrotic wave, and to analyze them in order to extract the cardiovascular parameters.
Keywords: blood pressure, SBP, DBP, detection algorithm
Procedia PDF Downloads 442
1647 Performance Evaluation of a Very High-Resolution Satellite Telescope
Authors: Walid A. Attia, Taher M. Bazan, Fawzy Eltohamy, Mahmoud Fathy
Abstract:
System performance evaluation is an essential stage in the design of high-resolution satellite telescopes prior to the development process. In this paper, a system performance evaluation of a very high-resolution satellite telescope is investigated. The evaluated system has a Korsch optical design. This design has been discussed in another paper in comparison with the three-mirror anastigmat (TMA) design, and the former configuration showed better results. The investigated system is based on the Korsch optical design integrated with a time-delay integration charge-coupled device (TDI-CCD) sensor to achieve a ground sampling distance (GSD) of 25 cm. The key performance metrics considered are the spatial resolution, the signal-to-noise ratio (SNR), and the total modulation transfer function (MTF) of the system. In addition, the national image interpretability rating scale (NIIRS) metric is assessed to predict the image quality according to the modified general image quality equation (GIQE). Based on the orbital, optical, and detector parameters, the estimated GSD is found to be 25 cm. The SNR has been analyzed under different illumination conditions of target albedo and sun and sensor angles. The system MTF has been computed including diffraction, aberration, optical manufacturing, smear, and detector sampling as the main contributors to the MTF evaluation. Finally, the system performance evaluation results show that the computed MTF value is around 0.08 at the Nyquist frequency, the SNR value is 130 at an albedo of 0.2 with a nadir viewing angle, and the predicted NIIRS is on the order of 6.5, which implies very good system image quality.
Keywords: modulation transfer function, national image interpretability rating scale, signal to noise ratio, satellite telescope performance evaluation
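For reference, the widely published GIQE 4 form of the general image quality equation links NIIRS to GSD, RER, overshoot H, noise gain G, and SNR; whether the authors' "modified GIQE" uses these exact coefficients is not stated, so treat the sketch below as illustrative. With GSD = 25 cm and SNR = 130 (and assumed RER, H, G), it lands near the reported NIIRS of 6.5.

```python
import math

def giqe4_niirs(gsd_m, rer, snr, h=1.0, g=1.0):
    """NIIRS per the published GIQE 4 (the paper's modified GIQE may differ)."""
    gsd_in = gsd_m * 100 / 2.54                  # GIQE 4 takes GSD in inches
    a, b = (3.32, 1.559) if rer >= 0.9 else (3.16, 2.817)
    return (10.251 - a * math.log10(gsd_in) + b * math.log10(rer)
            - 0.656 * h - 0.344 * g / snr)

# Paper's figures: GSD = 0.25 m, SNR = 130; RER, H, G are assumed values
print(giqe4_niirs(gsd_m=0.25, rer=0.9, snr=130))   # ~6.2, near the reported 6.5
```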
Procedia PDF Downloads 387
1646 A New Method to Estimate the Low Income Proportion: Monte Carlo Simulations
Authors: Encarnación Álvarez, Rosa M. García-Fernández, Juan F. Muñoz
Abstract:
The estimation of a proportion has many applications in economics and social studies. A common application is the estimation of the low income proportion, which gives the proportion of people classified as poor within a population. In this paper, we present this poverty indicator and propose to use the logistic regression estimator for the problem of estimating the low income proportion. Various sampling designs are presented. Using a real data set obtained from the European Survey on Income and Living Conditions, Monte Carlo simulation studies are carried out to analyze the empirical performance of the logistic regression estimator under the various sampling designs considered in this paper. The results derived from the Monte Carlo simulation studies indicate that the logistic regression estimator can be more accurate than the customary estimator under these sampling designs. The stratified sampling design can also provide more accurate results.
Keywords: poverty line, risk of poverty, auxiliary variable, ratio method
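A minimal Monte Carlo sketch of the comparison described above — the customary sample proportion versus a model-assisted logistic regression estimator using an auxiliary variable — is given below on synthetic data; the exact estimator form and sampling designs used in the paper may differ.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
N, n, runs = 20_000, 500, 200

# Synthetic population: auxiliary variable x correlated with income
x = rng.normal(0, 1, N)
income = np.exp(2.0 + 0.6 * x + rng.normal(0, 0.5, N))
poor = (income < 0.6 * np.median(income)).astype(int)  # low income indicator
true_p = poor.mean()

err_simple, err_logit = [], []
for _ in range(runs):
    s = rng.choice(N, n, replace=False)               # simple random sample
    err_simple.append(poor[s].mean() - true_p)        # customary estimator
    model = LogisticRegression().fit(x[s, None], poor[s])
    # Model-assisted estimate: mean predicted probability over the population
    err_logit.append(model.predict_proba(x[:, None])[:, 1].mean() - true_p)

print("RMSE simple  :", np.sqrt(np.mean(np.square(err_simple))))
print("RMSE logistic:", np.sqrt(np.mean(np.square(err_logit))))
```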
Procedia PDF Downloads 460
1645 X-Ray Detector Technology Optimization in Computed Tomography
Authors: Aziz Ikhlef
Abstract:
Most multi-slice Computed Tomography (CT) scanners are built with detectors composed of scintillator-photodiode arrays. The photodiode arrays are mainly based on front-illuminated technology for detectors under 64 slices and on back-illuminated photodiodes for systems of 64 slices or more. Designs based on back-illuminated photodiodes were investigated for CT machines to overcome the challenge of the higher number of runs and connections required by front-illuminated diodes. In backlit diodes, the electronic noise has already been improved because of the reduction of the load capacitance due to the reduced routing. This is translated into better image quality in low-signal applications, improving low-dose imaging in large patient populations. With the fast development of multi-detector-row CT (MDCT) scanners and the increasing number of examinations, both the medical and regulatory communities have raised significant concerns about the radiation dose received by the patient. In order to reduce individual exposure, and in response to the recommendations of the International Commission on Radiological Protection (ICRP), which suggests that all exposures should be kept as low as reasonably achievable (ALARA), every manufacturer is trying to implement strategies and solutions to optimize dose efficiency and image quality based on x-ray emission and scanning parameters. Added demands on CT detector performance also come from the increased utilization of spectral CT, or dual-energy CT, in which projection data at two different tube potentials are collected. One approach utilizes a technology called fast-kVp switching, in which the tube voltage is switched between 80 kVp and 140 kVp in a fraction of a millisecond. To reduce the cross-contamination of signals, the temporal response of the scintillator-based detector has to be extremely fast to minimize the residual signal from previous samples. In addition, this paper will present an overview of detector technologies and image chain improvements which have been investigated in the last few years to improve the signal-to-noise ratio and the dose efficiency of CT scanners in regular examinations and in energy discrimination techniques. Several parameters of the image chain in general, and of the detector technology in particular, contribute to the optimization of the final image quality. We will go through the properties of the post-patient collimation to improve the scatter-to-primary ratio; the scintillator material properties such as light output, afterglow, primary speed, and crosstalk to improve spectral imaging; the photodiode design characteristics; and the data acquisition system (DAS), optimized for crosstalk, noise, and temporal/spatial resolution.
Keywords: computed tomography, X-ray detector, medical imaging, image quality, artifacts
Procedia PDF Downloads 196
1644 Discrete State Prediction Algorithm Design with Self Performance Enhancement Capacity
Authors: Smail Tigani, Mohamed Ouzzif
Abstract:
This work presents a discrete quantitative state prediction algorithm with intelligent behavior that makes it able to self-improve some performance aspects. The specificity of this algorithm is its capacity for self-rectification of the prediction strategy before the final decision. The auto-rectification mechanism is based on two parallel mathematical models. On one hand, the algorithm predicts the next state based on an event transition matrix updated after each observation. On the other hand, the algorithm extracts the trend of its residuals with a linear regression over the historical residual data points, in order to rectify the first decision if needed. For a normal distribution, the interaction between the two models allows the algorithm to self-optimize its performance and thus make better predictions. A designed key performance indicator, computed during a Monte Carlo simulation, shows the advantages of the proposed approach compared with a traditional one.
Keywords: discrete state, Markov Chains, linear regression, auto-adaptive systems, decision making, Monte Carlo Simulation
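The two-model idea can be sketched as follows in Python; the specific rectification rule (extrapolating the residual trend one step ahead) and the window sizes are assumptions, since the abstract does not give them.

```python
import numpy as np

class SelfRectifyingPredictor:
    """Sketch of the two parallel models: a transition matrix gives a first
    prediction; a linear regression over historical residuals rectifies it
    when a trend is detected. Details are assumptions, not the paper's."""

    def __init__(self, n_states):
        self.counts = np.ones((n_states, n_states))   # Laplace-smoothed counts
        self.residuals = []

    def predict(self, state):
        probs = self.counts[state] / self.counts[state].sum()
        guess = int(np.argmax(probs))                 # first decision
        if len(self.residuals) >= 5:                  # rectification step
            y = np.array(self.residuals[-20:], dtype=float)
            slope, intercept = np.polyfit(np.arange(y.size), y, 1)
            correction = slope * y.size + intercept   # extrapolated residual
            guess = int(np.clip(round(guess + correction),
                                0, self.counts.shape[0] - 1))
        return guess

    def observe(self, prev_state, new_state, predicted):
        self.counts[prev_state, new_state] += 1       # update transitions
        self.residuals.append(new_state - predicted)  # store the residual

# Usage on a toy state sequence
seq = [0, 1, 2, 1, 0, 1, 2, 2, 1, 0, 1, 2]
p = SelfRectifyingPredictor(n_states=3)
for a, b in zip(seq, seq[1:]):
    guess = p.predict(a)
    p.observe(a, b, guess)
```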
Procedia PDF Downloads 500
1643 Is Materiality Determination the Key to Integrating Corporate Sustainability and Maximising Value?
Authors: Ruth Hegarty, Noel Connaughton
Abstract:
Sustainability reporting has become a priority for many global multinational companies. This is associated with ever-increasing expectations from key stakeholders for companies to be transparent about their strategies, activities, and management with regard to sustainability issues. The Global Reporting Initiative (GRI) encourages reporters to provide information only on the issues that are really critical in order to achieve the organisation's goals for sustainability and manage its impact on environment and society. A key challenge for most reporting organisations is how to identify relevant issues for sustainability reporting and prioritise those material issues in accordance with company and stakeholder needs. A recent study indicates that most of the largest companies listed on the world's stock exchanges are failing to provide data on key sustainability indicators such as employee turnover, energy, greenhouse gas emissions (GHGs), injury rate, pay equity, waste, and water. This paper takes an in-depth look at the approaches used by a selection of international corporate sustainability leaders to identify key sustainability issues. The research methodology involves a detailed analysis of the sustainability report content of up to 50 companies listed on the 2014 Dow Jones Sustainability Indices (DJSI). The most recent sustainability report content found on the GRI Sustainability Disclosure Database is then compared with 91 GRI Specific Standard Disclosures and a small number of GRI Standard Disclosures. Preliminary research indicates significant gaps between the information disclosed in corporate sustainability reports and the indicator content specified in the GRI Content Index. The following outlines some of the key findings to date. Most companies made a partial disclosure with regard to the Economic indicators of climate change risks and infrastructure investments, but did not focus on the associated negative impacts. The top Environmental indicators disclosed were energy consumption and reductions, GHG emissions, water withdrawals, waste, and compliance. The lowest rates of indicator disclosure included biodiversity, water discharge, mitigation of environmental impacts of products and services, transport, environmental investments, screening of new suppliers, and supply chain impacts. The top Social indicators disclosed were new employee hires, rates of injury, freedom of association in operations, child labour, and forced labour. Lower disclosure rates were reported for employee training, composition of governance bodies and employees, political contributions, corruption, and fines for non-compliance. The reporting on most other Social indicators was found to be poor. In addition, most companies give only a brief explanation of how material issues are defined, identified, and ranked. Data on the identification of key stakeholders and on the degree and nature of engagement for determining issues and their weightings are also lacking. Generally, little to no data is provided on the algorithms used to score an issue. The research indicates that most companies lack a rigorous and thorough methodology to systematically determine the material issues of sustainability reporting in accordance with company and stakeholder needs.
Keywords: identification of key stakeholders, material issues, sustainability reporting, transparency
Procedia PDF Downloads 309
1642 Developing an HSE-Financial Indicator Model in the Oil Industry
Authors: Reza Safari, Ali Rajabzadeh Ghatari, Raheleh Hossseinzadeh Mahabadi
Abstract:
In the present world, firms face different pressures, such as competition, legislation, and social expectations. These pressures force firms to pursue "survival" as their primary goal, and then growth. One of the main factors that helps firms reach their goals is proper financial performance, and to understand it, a firm should monitor its financial performance. Financial performance is affected by many factors. This research seeks to clarify which financial performance indicators are most important according to the environmental situation of a firm, and what their priorities are. To do so, environmental indicators were specified as presented in the OECD Key Environmental Indicators 2008, together with financial performance indicators such as profitability, liquidity, gearing, investor ratios, etc. At this stage, the effects were investigated through questionnaires. After obtaining the results, the data were analyzed using the PROMETHEE technique. Using the decision matrices extracted from this technique, an expert system was designed. This expert system suggests the suitable financial performance indicators and their ranking upon receiving the environmental situation, given the weights of the environmental indicators.
Keywords: environment indicators, financial performance indicators, promethee, expert system
Procedia PDF Downloads 448
1641 Non-thermal Plasma Promotes Boar Sperm Quality Through Increasing AMPK Methylation
Authors: Jiaojiao Zhang
Abstract:
Boar sperm quality, as an important indicator of reproductive efficiency, directly affects the efficiency of livestock production. This study was conducted to improve boar sperm quality by using non-thermal dielectric barrier discharge (DBD) plasma. Our results showed that DBD plasma exposure at 2.1 W for 15 s could improve boar sperm quality by increasing the exon methylation level of adenosine monophosphate-activated protein kinase (AMPK) and thus improving the glycolytic flux, mitochondrial function, and antioxidant capacity, without damaging the integrity of the sperm DNA and acrosome. In addition, DBD plasma could rescue the low sperm quality caused by the DNA methyltransferase inhibitor decitabine by reducing oxidative stress and mitochondrial damage. Therefore, the application of non-thermal plasma provides a new strategy for reducing sperm oxidative damage and improving sperm quality, which shows great potential in assisted reproduction for solving the problem of male infertility.
Keywords: non-thermal DBD plasma, sperm quality, AMPK methylation, energy metabolism, antioxidant capacity
Procedia PDF Downloads 19
1640 The Determination of Operating Reserve in Small Power Systems Based on Reliability Criteria
Authors: H. Falsafi Falsafizadeh, R. Zeinali Zeinali
Abstract:
This paper focuses on the determination of the total Operating Reserve (OR) level, consisting of spinning and non-spinning reserves, in two small real power systems, in such a way that the system reliability indicator complies with typical industry standards. For this purpose, the standard used by the North American Electric Reliability Corporation (NERC), i.e., 1 day of outage in 10 years, or 0.1 days/year, is relied upon. The simulation of system operation used to determine the total operating reserve level for these systems was performed with PLEXOS, an industry-standard production simulation software in this field. In this paper, the operating reserve which meets an annual Loss of Load Expectation (LOLE) of approximately 0.1 days per year is determined for the study year. This reserve is the minimum amount of reserve required in a power system and is generally defined as a percentage of the annual peak.
Keywords: frequency control, LOLE, operating reserve, system reliability
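For illustration, the classical analytic LOLE computation that underlies this kind of study — a capacity outage probability table combined with a daily peak-load curve — can be sketched as below. The fleet, outage rates, and load curve are invented toy values; the actual study used PLEXOS.

```python
import numpy as np
from itertools import product

# Toy fleet: unit capacities (MW) and forced outage rates (all assumptions)
caps = [50, 50, 40, 30, 30, 20]
fors = [0.04, 0.04, 0.05, 0.06, 0.06, 0.08]
installed = sum(caps)

# Capacity outage probability table, built by enumerating unit states
outage_prob = {}
for states in product([0, 1], repeat=len(caps)):   # 1 = unit on forced outage
    out = sum(c for c, s in zip(caps, states) if s)
    p = np.prod([f if s else 1 - f for f, s in zip(fors, states)])
    outage_prob[out] = outage_prob.get(out, 0.0) + p

def lole_days_per_year(peak_load):
    """LOLE = sum over days of P(capacity on outage exceeds that day's margin)."""
    daily_peaks = peak_load * np.linspace(0.65, 1.0, 365)  # toy load curve
    lole = 0.0
    for load in daily_peaks:
        margin = installed - load
        lole += sum(p for out, p in outage_prob.items() if out > margin)
    return lole

# Find the peak the fleet can serve at LOLE ~ 0.1 days/year; the reserve
# is then (installed - peak), expressible as a percentage of the peak.
for peak in (150, 160, 170, 180):
    print(f"peak {peak} MW: LOLE = {lole_days_per_year(peak):.3f} days/year")
```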
Procedia PDF Downloads 347
1639 CdS Quantum Dots as Fluorescent Probes for Detection of Naphthalene
Authors: Zhengyu Yan, Yan Yu, Jianqiu Chen
Abstract:
A novel sensing system has been designed for naphthalene detection based on the quenching of the fluorescence signal of CdS quantum dots. The fluorescence intensity of the system decreased significantly after the CdS quantum dots were added to the water pollution model, because of the static fluorescence quenching mechanism. Herein, we have demonstrated that this facile methodology offers convenience and a low analysis cost, with recovery rates of 97.43%-103.2%, and has promising application prospects.
Keywords: CdS quantum dots, modification, detection, naphthalene
Procedia PDF Downloads 495
1638 Ibrutinib and the Potential Risk of Cardiac Failure: A Review of Pharmacovigilance Data
Authors: Abdulaziz Alakeel, Roaa Alamri, Abdulrahman Alomair, Mohammed Fouda
Abstract:
Introduction: Ibrutinib is a selective, potent, and irreversible small-molecule inhibitor of Bruton's tyrosine kinase (BTK). It forms a covalent bond with a cysteine residue (CYS-481) at the active site of BTK, leading to inhibition of BTK enzymatic activity. The drug is indicated to treat certain types of cancer, such as mantle cell lymphoma (MCL), chronic lymphocytic leukaemia, and Waldenström's macroglobulinaemia (WM). Cardiac failure is a condition in which the heart muscle is unable to pump adequate blood to the body's organs. There are multiple types of cardiac failure, including left- and right-sided heart failure and systolic and diastolic heart failure. The aim of this review is to evaluate the risk of cardiac failure associated with the use of ibrutinib and to suggest regulatory recommendations if required. Methodology: The Signal Detection team at the National Pharmacovigilance Center (NPC) of the Saudi Food and Drug Authority (SFDA) performed a comprehensive signal review using its national database as well as the World Health Organization (WHO) database (VigiBase) to retrieve related information for assessing the causality between cardiac failure and ibrutinib. We used the WHO-Uppsala Monitoring Centre (UMC) criteria as the standard for assessing the causality of the reported cases. Results: Case review: there were 212 global ICSRs for the drug/adverse drug reaction combination as of July 2020. The reviewers selected and assessed the causality for the well-documented ICSRs with completeness scores of 0.9 and above (35 ICSRs), the value 1.0 representing the highest score for the best-documented ICSRs. Among the reviewed cases, more than half provide a supportive association (four probable and 15 possible cases). Data mining: the disproportionality between the observed and the expected reporting rates for the drug/adverse drug reaction pair is estimated using the information component (IC), a tool developed by the WHO-UMC to measure the reporting ratio. A positive IC reflects a higher statistical association, while negative values indicate a lower statistical association, with the null value equal to zero. The result (IC = 1.5) revealed a positive statistical association for the drug/ADR combination, meaning that "Ibrutinib" with "Cardiac Failure" has been observed more often than expected when compared to other medications available in the WHO database. Conclusion: Health regulators and health care professionals must be aware of the potential risk of cardiac failure associated with ibrutinib, and the monitoring of any signs or symptoms in treated patients is essential. The weight of cumulative evidence identified from the causality assessment of the reported cases and from data mining is sufficient to support a causal association between ibrutinib and cardiac failure.
Keywords: cardiac failure, drug safety, ibrutinib, pharmacovigilance, signal detection
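The information component has a standard shrinkage form used by the WHO-UMC; a minimal sketch follows, with invented counts for illustration only (not the actual VigiBase figures).

```python
import math

def information_component(n_observed, n_drug, n_reaction, n_total):
    """WHO-UMC style IC with the usual +0.5 shrinkage:
    IC = log2((O + 0.5) / (E + 0.5)), with E = n_drug * n_reaction / n_total."""
    expected = n_drug * n_reaction / n_total
    return math.log2((n_observed + 0.5) / (expected + 0.5))

# Invented counts for illustration only (not the actual VigiBase figures)
ic = information_component(n_observed=212, n_drug=30_000,
                           n_reaction=120_000, n_total=50_000_000)
print(f"IC = {ic:.2f}")   # positive IC => reported more often than expected
```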
Procedia PDF Downloads 135
1637 An Odyssey to Sustainability: The Urban Archipelago of India
Authors: B. Sudhakara Reddy
Abstract:
This study provides a snapshot of the sustainability of selected Indian cities by employing 70 indicators in four dimensions to develop an overall city sustainability index. In recent years, the concept of "urban sustainability" has become prominent due to its complexity. Urban areas propel growth and at the same time pose many ecological, social, and infrastructural problems and risks. In developing countries, high population density and continuous in-migration expose cities to the highest risk from natural and man-made disasters. These issues, combined with the inability of policy makers to provide basic services, make cities unsustainable. To assess whether any given policy is moving towards or against urban sustainability, it is necessary to consider the relationships among its various dimensions. Hence, in recent years, an integral approach involving indicators of different dimensions such as "economic", "environmental", and "social" has been used in preparing sustainability indices. It is also important for urban planners, social analysts, and other related institutions to identify and understand the relationships in this complex system. The objective of the paper is to develop a city performance index (CPI) to measure and evaluate urban regions in terms of sustainable performance. The objectives include: i) objective assessment of a city's performance, ii) setting achievable goals, iii) prioritising relevant indicators for improvement, iv) learning from leaders, v) assessing the effectiveness of programmes that result in high indicator values, and vi) strengthening stakeholder participation. Using the benchmark approach, a conceptual framework is developed for evaluating 25 Indian cities. We develop a City Sustainability Index (CSI) in order to rank cities according to their level of sustainability. The CSI is composed of four dimensions: economic, environmental, social, and institutional. Each dimension is further composed of multiple indicators: (1) economic, which considers growth, access to electricity, and telephone availability; (2) environmental, which includes waste water treatment and carbon emissions; (3) social, which includes equity and infant mortality; and (4) institutional, which includes the voting share of the population and urban regeneration policies. The CSI, consisting of four dimensions, disaggregates into 12 categories and ultimately into 70 indicators. The data are obtained from public and non-governmental organizations, and also from city officials and experts. By ranking a sample of diverse cities on a set of specific dimensions, the study can serve as a baseline of current conditions and a marker for referencing future results. The benchmarks and indices presented in the study provide a unique resource for the government and the city authorities to learn about the positive and negative attributes of a city and prepare plans for sustainable urban development. As a result of our conceptual framework, the set of criteria we suggest is somewhat different from any already in the literature. The scope of our analysis is intended to be broad. Although illustrated with specific examples, it should be apparent that the principles identified are relevant to any monitoring that is used to inform decisions involving decision variables. These indicators are policy-relevant and hence a useful tool for decision-makers and researchers.
Keywords: benchmark, city, indicator, performance, sustainability
Procedia PDF Downloads 272
1636 C2N2 Adsorption on the Surface of a BN Nanosheet: A DFT Study
Authors: Maziar Noei
Abstract:
Calculations showed that when the nanosheet is doped with Si, the adsorption energy is about -85.62 to -87.43 kcal/mol, and the HOMO/LUMO energy gap (Eg) is reduced significantly. The boron nitride nanosheet is a suitable adsorbent for cyanogen and can be used in cyanogen separation processes. It appears that the boron nitride nanosheet (BNNS) is a suitable semiconductor after doping. In the presence of cyanogen (C2N2), the doped BNNS directly generates an electrical signal and can therefore potentially be used for cyanogen sensors.
Keywords: nanosheet, DFT, cyanogen, sensors
Procedia PDF Downloads 286
1635 Assessing Relationships between Glandularity and Gray Level by Using Breast Phantoms
Authors: Yun-Xuan Tang, Pei-Yuan Liu, Kun-Mu Lu, Min-Tsung Tseng, Liang-Kuang Chen, Yuh-Feng Tsai, Ching-Wen Lee, Jay Wu
Abstract:
Breast cancer is the predominant malignant tumor in females. An increase in glandular density increases the risk of breast cancer. BI-RADS is a frequently used density indicator in mammography; however, it significantly overestimates the glandularity. Therefore, it is very important to assess the glandularity accurately and quantitatively by mammography. In this study, 20%, 30%, and 50% glandularity phantoms were exposed using a mammography machine at 28, 30, and 31 kVp, and 30, 55, 80, and 105 mAs, respectively. Regions of interest (ROIs) were drawn to assess the gray level. The relationship between the glandularity and the gray level under the various compression thicknesses, kVp, and mAs settings was established by multivariable linear regression. A phantom verification was performed with automatic exposure control (AEC). The regression equation was obtained with an R-squared value of 0.928. The average gray levels of the verification phantom were 8708, 8660, and 8434 for 0.952, 0.963, and 0.985 g/cm³, respectively. The percent differences of the glandularity from the regression equation were 3.24%, 2.75%, and 13.7%. We concluded that the proposed method could be applied clinically in mammography to improve the glandularity estimation and further increase the importance of breast cancer screening.
Keywords: mammography, glandularity, gray value, BI-RADS
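The fit-and-invert procedure can be sketched as follows in Python; the synthetic coefficients below are invented stand-ins for the phantom measurements, which the abstract summarizes only through R² = 0.928 and the verification values.

```python
import numpy as np

# Synthetic stand-in for the phantom data: gray level as a linear function
# of glandularity, kVp, mAs and thickness (coefficients are invented)
rng = np.random.default_rng(3)
n = 36
glandularity = rng.choice([0.20, 0.30, 0.50], n)
kvp = rng.choice([28, 30, 31], n)
mas = rng.choice([30, 55, 80, 105], n)
thickness = rng.uniform(4.0, 6.0, n)               # compression thickness (cm)
gray = (9000 - 1500 * glandularity + 40 * (kvp - 28)
        + 2.0 * mas - 120 * thickness + rng.normal(0, 20, n))

# Multivariable linear regression: gray ~ glandularity + kVp + mAs + thickness
X = np.column_stack([np.ones(n), glandularity, kvp, mas, thickness])
beta, *_ = np.linalg.lstsq(X, gray, rcond=None)

def estimate_glandularity(gray_level, kvp_, mas_, thickness_):
    """Invert the fitted model to estimate glandularity from a new exposure."""
    rest = beta[0] + beta[2] * kvp_ + beta[3] * mas_ + beta[4] * thickness_
    return (gray_level - rest) / beta[1]

print(estimate_glandularity(8700, 30, 55, 4.5))
```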
Procedia PDF Downloads 497
1634 Effect of Chain Length on Skeletonema pseudocostatum as Probed by THz Spectroscopy
Authors: Ruqyyah Mushtaq, Chiacar Gamberdella, Roberta Miroglio, Fabio Novelli, Domenica Papro, M. Paturzo, A. Rubano, Angela Sardo
Abstract:
Microalgae, particularly diatoms, are well suited for monitoring environmental health, especially in assessing the quality of seas and rivers in terms of organic matter, nutrients, and heavy metal pollution, since they respond rapidly to changes in habitat quality. In this study, we focused on Skeletonema pseudocostatum, a unicellular alga that forms chains depending on environmental conditions. Specifically, we explored whether metal toxicants could affect the growth of these algal chains, potentially serving as an ecotoxicological indicator of heavy metal pollution. We utilized THz spectroscopy in conjunction with standard optical microscopy to observe the formation of these chains and their response to toxicants. Despite the strong absorption of terahertz radiation in water, we demonstrate that changes in water absorption in the terahertz range due to the water-diatom interaction can provide insights into diatom chain length.
Keywords: THz-TDS spectroscopy, diatoms, marine ecotoxicology, marine pollution
Procedia PDF Downloads 36
1633 Normalized P-Laplacian: From Stochastic Game to Image Processing
Authors: Abderrahim Elmoataz
Abstract:
More and more contemporary applications involve data in the form of functions defined on irregular and topologically complicated domains (images, meshes, point clouds, networks, etc.). Such data are not organized as familiar digital signals and images sampled on regular lattices. However, they can be conveniently represented as graphs, where each vertex represents measured data and each edge represents a relationship (connectivity, or certain affinities or interactions) between two vertices. Processing and analyzing these types of data is a major challenge for both the image and machine learning communities. Hence, it is very important to transfer to graphs and networks many of the mathematical tools that were initially developed on usual Euclidean spaces and proven to be efficient for many inverse problems and applications dealing with usual image and signal domains. Historically, the main tools for the study of graphs or networks come from combinatorics and graph theory. In recent years, there has been increasing interest in the investigation of one of the major mathematical tools for signal and image analysis: partial differential equation (PDE) and variational methods on graphs. The normalized p-Laplacian operator has recently been introduced to model a stochastic game called the tug-of-war game with noise. Part of the interest in this class of operators arises from the fact that it includes, as particular cases, the infinity Laplacian, the mean curvature operator, and the traditional Laplacian operator, which have been used extensively to model and solve problems in image processing. The purpose of this paper is to introduce and study a new class of normalized p-Laplacians on graphs. The introduction is based on the extension of p-harmonious functions, introduced in the literature as discrete approximations for both the infinity Laplacian and p-Laplacian equations. Finally, we propose to use these operators as a framework for solving many inverse problems in image processing.
Keywords: normalized p-laplacian, image processing, stochastic game, inverse problems
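As a concrete illustration of the p-harmonious discretization mentioned above, the sketch below iterates the well-known update mixing the midrange (tug-of-war / infinity-Laplacian part) with the neighborhood average (noise / Laplacian part) on a grid graph. The continuum weights α = (p-2)/(p+n) and β = (n+2)/(p+n) are one standard choice; the paper's own graph normalization may differ.

```python
import numpy as np

def p_harmonious(boundary, p=4.0, dim=2, iters=2000):
    """Value iteration for a p-harmonious function on a grid graph.
    Update: u <- alpha * (max + min)/2 + beta * average over neighbors,
    with alpha = (p-2)/(p+dim), beta = (dim+2)/(p+dim)."""
    alpha = (p - 2) / (p + dim)
    beta = (dim + 2) / (p + dim)
    u = boundary.copy()
    interior = np.isnan(boundary)          # NaN marks unknown interior values
    u[interior] = 0.0
    for _ in range(iters):
        # 4-neighborhood on the grid (the graph edges)
        nbrs = np.stack([np.roll(u, 1, 0), np.roll(u, -1, 0),
                         np.roll(u, 1, 1), np.roll(u, -1, 1)])
        new = alpha * 0.5 * (nbrs.max(0) + nbrs.min(0)) + beta * nbrs.mean(0)
        u[interior] = new[interior]        # boundary values stay fixed
    return u

# Dirichlet problem on a 32x32 grid: fixed values on the border
g = np.full((32, 32), np.nan)
g[0, :], g[-1, :], g[:, 0], g[:, -1] = 0.0, 1.0, 0.0, 1.0
u = p_harmonious(g)
print(u[16, 16])
```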
Procedia PDF Downloads 515
1632 Taguchi Robust Design for Optimal Setting of Process Wastes Parameters in an Automotive Parts Manufacturing Company
Authors: Charles Chikwendu Okpala, Christopher Chukwutoo Ihueze
Abstract:
As a technique that reduces variation in a product by lessening the sensitivity of the design to sources of variation, rather than by controlling those sources, Taguchi Robust Design entails the design of ideal goods: developing a product that has minimal variance in its characteristics and also meets the desired exact performance. This paper examines the concept of this manufacturing approach and its application to a brake pad product of an automotive parts manufacturing company. Although the firm claimed that only defects, excess inventory, and over-production were the wastes that grossly affect its productivity and profitability, a careful study and analysis of its manufacturing processes with the application of the Single Minute Exchange of Dies (SMED) tool showed that the waste of waiting is a fourth waste that bedevils the firm. The selection of the Taguchi L9 orthogonal array, based on the four parameters and the three levels of variation for each parameter, revealed with a range of 2.17 that waiting is the major waste the company must reduce in order to remain viable. Also, to enhance the company's throughput and profitability, the wastes of over-production, excess inventory, and defects, with ranges of 2.01, 1.46, and 0.82, ranking second, third, and fourth respectively, must also be reduced to the barest minimum. After proposing -33.84 as the highest optimum signal-to-noise ratio to be maintained for the waste of waiting, the paper advocates the adoption of all the tools and techniques of the Lean Production System (LPS) and Continuous Improvement (CI), and concludes by recommending SMED in order to drastically reduce the setup time that leads to unnecessary waiting.
Keywords: lean production system, single minute exchange of dies, signal to noise ratio, Taguchi robust design, waste
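Since waste measures are smaller-the-better responses, the signal-to-noise ratio quoted above is presumably the smaller-the-better form, S/N = -10·log10(mean(y²)); a quick sketch with invented response values follows.

```python
import numpy as np

def sn_smaller_the_better(y):
    """Taguchi smaller-the-better S/N ratio: -10 * log10(mean(y^2))."""
    y = np.asarray(y, dtype=float)
    return -10 * np.log10(np.mean(y ** 2))

# Invented waiting-time responses (minutes) for the 9 runs of an L9 array;
# each row holds the observations for one factor-level combination
l9_runs = [
    [44, 52], [61, 58], [39, 47],
    [55, 49], [70, 66], [42, 40],
    [48, 51], [63, 59], [37, 43],
]
sn = np.array([sn_smaller_the_better(r) for r in l9_runs])
best = int(np.argmax(sn))            # the larger the S/N, the better
print(f"best run: {best + 1}, S/N = {sn[best]:.2f} dB")
```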
Procedia PDF Downloads 129