Search results for: cumulative scaling
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 712

82 Evidence-Based Policy Making to Improve Human Security in Pakistan

Authors: Ayesha Akbar

Abstract:

Pakistan is moving from a security state to a welfare state despite several security challenges, both internal and external. Human security takes a different form in different regions, depending on leadership and policy priorities. The link between human development and economic growth is not automatic; it has to be created consciously through forward-looking policies and strategies by national governments. Human security comprises seven components: economic security, personal security, health security, environmental security, food security, community security, and political security. The growing interest of the international community in clearly understanding the dimensions of human security has prompted Pakistani scholars to ponder the issue and delineate the lines of human security. A great deal of work has been done, or is in process, to evaluate human security indicators in Pakistan. Notwithstanding this work, the state of human security in Pakistan remains unsatisfactory. A range of deteriorating human development indicators that lie within the domain of human security leaves certain inquiries to be answered: What are the dimensions of human security in Pakistan? How are they being addressed, in terms of policy and institutions, in their operationalization in Pakistan? Does the human security discourse reflect evidence-based policy changes? The methodology is broadly qualitative, comprising interviews and content analysis of policy documents. Pakistan is among the most populous countries in the world and faces high vulnerability to climate change. The literacy rate has gone down, while a surge in the youth bulge must be accommodated in the job market. The increasing population is creating food problems, as resources have not kept pace with the rising demands for food and other social amenities of life. The majority of the people face acute poverty.
Health outcomes are also unsatisfactory, with high infant and maternal mortality rates. Pakistan is on the verge of a water crisis, as water resources are depleting fast under high demand from the agriculture and energy sectors. Pakistan is striving hard to address the declining state of human security, but the dilemma is a lack of resources that hinders it from meeting emerging demands. The government needs to bring about further change by scaling up avenues of economic growth and enhancing the capacity of its human resources. A modern, performance-driven culture integrated with technology is required for efficient and effective service delivery. On an already fast-tracked process of reforms, e-governance and evidence-based policy mechanisms are being instilled in government processes for better governance and evidence-based decisions.

Keywords: governance, human development index, human security, Pakistan, policy

Procedia PDF Downloads 226
81 Clinical Validation of C-PDR Methodology for Accurate Non-Invasive Detection of Helicobacter pylori Infection

Authors: Suman Som, Abhijit Maity, Sunil B. Daschakraborty, Sujit Chaudhuri, Manik Pradhan

Abstract:

Background: Helicobacter pylori is a common and important human pathogen and the primary cause of peptic ulcer disease and gastric cancer. Currently, H. pylori infection is detected both invasively and non-invasively, but the diagnostic accuracy is not up to the mark. Aim: To set an optimal diagnostic cut-off value for the 13C-Urea Breath Test (13C-UBT) to detect H. pylori infection, and to evaluate a novel c-PDR methodology to overcome the inconclusive grey zone. Materials and Methods: All 83 subjects first underwent upper-gastrointestinal endoscopy followed by rapid urease test and histopathology; depending on these results, we classified 49 subjects as H. pylori positive and 34 as negative. After an overnight fast, patients were given 4 g of citric acid in 200 ml of water, and 10 minutes after ingestion of this test meal, a baseline exhaled breath sample was collected. Thereafter, an oral dose of 75 mg of 13C-urea dissolved in 50 ml of water was given, and breath samples were collected at 15-minute intervals up to 90 minutes and analysed by laser-based, high-precision cavity-enhanced spectroscopy. Results: We studied the excretion kinetics of 13C isotope enrichment (expressed as δDOB13C ‰) in the exhaled breath samples and found maximum enrichment around 30 minutes in H. pylori positive patients; this is due to acid-stimulated urease enzyme activity, with maximum acidification occurring within 30 minutes, whereas no such significant isotopic enrichment was observed in H. pylori negative individuals. Using a Receiver Operating Characteristic (ROC) curve, an optimal diagnostic cut-off value of δDOB13C ‰ = 3.14 was determined at 30 minutes, exhibiting 89.16% accuracy. To overcome the grey-zone problem, we then explored the percentage dose of 13C recovered per hour, i.e. 13C-PDR (%/hr), and the cumulative percentage dose of 13C recovered, i.e. c-PDR (%), in exhaled breath samples for the present 13C-UBT.
We further explored the diagnostic accuracy of the 13C-UBT by constructing a ROC curve using c-PDR (%) values; an optimal cut-off value was estimated to be c-PDR = 1.47% at 60 minutes, exhibiting 100% diagnostic sensitivity, 100% specificity, and 100% accuracy of the 13C-UBT for detection of H. pylori infection. We also elucidated the gastric emptying process of the present 13C-UBT for H. pylori positive patients: the maximal emptying rate was found at 36 minutes, and the half-emptying time at 45 minutes. Conclusions: The present study demonstrates the importance of the c-PDR methodology in overcoming the grey-zone problem in the 13C-UBT for accurate determination of infection without risk of diagnostic error, making it a robust and novel method for accurate, fast, non-invasive diagnosis of H. pylori infection for large-scale screening purposes.
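The two quantities in the abstract, cumulative dose recovered and an accuracy-maximizing cut-off, can be sketched in a few lines. This is a minimal illustration on invented subject data, not the study's measurements; the time grid, PDR curves, and cut-off selection rule are all assumptions.

```python
# Sketch of the c-PDR idea on synthetic data: the time points, PDR curves,
# and the two example subject groups are illustrative assumptions.

def cpdr(times_min, pdr_per_hr):
    """Cumulative percentage dose recovered: trapezoidal integration
    of 13C-PDR (%/hr) over time (minutes converted to hours)."""
    total = 0.0
    for i in range(1, len(times_min)):
        dt_hr = (times_min[i] - times_min[i - 1]) / 60.0
        total += 0.5 * (pdr_per_hr[i] + pdr_per_hr[i - 1]) * dt_hr
    return total

def best_cutoff(positives, negatives):
    """Scan candidate cut-offs and return the one maximizing accuracy,
    mimicking how an optimal ROC operating point might be chosen."""
    candidates = sorted(positives + negatives)
    best = (0.0, candidates[0])
    n = len(positives) + len(negatives)
    for c in candidates:
        tp = sum(1 for v in positives if v >= c)
        tn = sum(1 for v in negatives if v < c)
        acc = (tp + tn) / n
        if acc > best[0]:
            best = (acc, c)
    return best  # (accuracy, cutoff)

times = [0, 15, 30, 45, 60]
# hypothetical PDR curves: infected subjects recover more 13C early on
pos_subjects = [cpdr(times, [0.0, 1.2, 2.4, 1.8, 1.0]),
                cpdr(times, [0.0, 1.0, 2.0, 1.6, 0.9])]
neg_subjects = [cpdr(times, [0.0, 0.2, 0.3, 0.3, 0.2]),
                cpdr(times, [0.0, 0.1, 0.2, 0.2, 0.1])]
acc, cut = best_cutoff(pos_subjects, neg_subjects)
```

With well-separated groups, as in the study's reported result, the chosen cut-off classifies every synthetic subject correctly.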

Keywords: 13C-Urea breath test, c-PDR methodology, grey zone, Helicobacter pylori

Procedia PDF Downloads 280
80 Queer Anti-Urbanism: An Exploration of Queer Space Through Design

Authors: William Creighton, Jan Smitheram

Abstract:

Queer discourse has been tied to a middle-class, urban-centric, white approach to the discussion of queerness. In doing so, the multilayeredness of queer existence has been washed away in favour of palatable queer occupation. This paper uses design to explore a queer anti-urbanist approach to facilitate a more egalitarian architectural occupancy. Scott Herring's work on queer anti-urbanism is key to this approach. Herring redeploys anti-urbanism from its historical understanding of open hostility, rejection, and the desire to destroy the city towards a mode of queer critique that counters the normative ideals of homonormative, metronormative gay lifestyles. He questions how queer identity has been closed down into a more diminutive frame in which those who do not fit are subjected to persecution or silenced through their absence. We extend these ideas through design to ask how a queer anti-urbanist approach facilitates a more egalitarian architectural occupancy. Following a "design as research" methodology, the design outputs become a vehicle for asking how we might live, otherwise, in architectural space. Design as research, a non-linear, iterative process of questioning, designing, and reflecting, establishes itself here through three projects, each increasing in scale and complexity. Each of the three scales tackled a different body relationship: the project explored, in turn, the relations between body and body, body and known others, and body and unknown others. Moving through increasing scales was not meant to privilege the objective, the public, and the large scale; instead, 'intra-scaling' acts as a tool to rethink how scale reproduces normative ideas of the identity of space. There was a queering of scale.
Through this approach, the first result was an installation that brings two people together to co-author space: the installation distorts the sensory experience and forces a more intimate and interconnected encounter that challenges our socialized proxemics (knees might touch). To queer the home, the installation was then used as a drawing device, a tool to study and challenge spatial perception and drawing convention, and a way to process practical information about the site and the existing house; the device became a tool to embrace the spontaneous. The final design proposal operates as a multi-scalar boundary-crossing through "private" and "public" to support kinship through communal labour, queer relationality, and mooring. The resulting design works to set bodies adrift in a sea of sensations through a mix of pleasure programmes. To conclude, through three design proposals, this design research creates a relationship between queer anti-urbanism and design. It asserts that queering the design process and its outcomes allows a more inclusive way to consider place, space, and belonging. The projects lend themselves to queer relationality and interdependence by making spaces that support the unsettled and out-of-place, but is it queer enough?

Keywords: queer, queer anti-urbanism, design as research, design

Procedia PDF Downloads 139
79 Application of a Submerged Anaerobic Osmotic Membrane Bioreactor Hybrid System for High-Strength Wastewater Treatment and Phosphorus Recovery

Authors: Ming-Yeh Lu, Shiao-Shing Chen, Saikat Sinha Ray, Hung-Te Hsu

Abstract:

Recently, anaerobic membrane bioreactors (AnMBRs), which combine anaerobic biological treatment with membrane filtration, have been widely utilized and present an attractive option for wastewater treatment and water reuse. Conventional AnMBRs have several advantages, such as improved effluent quality, compact footprint, lower sludge yield, no aeration requirement, and energy production. However, the removal of nitrogen and phosphorus in the AnMBR permeate is negligible, which is their biggest disadvantage. In recent years, forward osmosis (FO) has emerged as a technology that utilizes osmotic pressure as the driving force to extract clean water without additional external pressure. The small pore size of the FO membrane can effectively improve the removal of nitrogen and phosphorus, so an anaerobic bioreactor with an FO membrane (AnOMBR) can retain concentrated organic matter and nutrients. Moreover, phosphorus is a non-renewable resource, and owing to the high rejection of the FO membrane, a large amount of phosphorus can be recovered from the combination of AnMBR and FO. In this study, a novel submerged anaerobic osmotic membrane bioreactor integrated with periodic microfiltration (MF) extraction was developed and evaluated for simultaneous phosphorus and clean water recovery from wastewater. A laboratory-scale AnOMBR utilizing cellulose triacetate (CTA) membranes with an effective membrane area of 130 cm² was fully submerged in a 5.5 L bioreactor at 30-35 °C. The active-layer-facing-feed-stream orientation was used to minimize fouling and scaling. Additionally, a peristaltic pump was used to circulate the draw solution (DS), a magnesium sulphate (MgSO₄) solution, at a cross-flow velocity of 0.7 cm/s. A microfiltration membrane periodically extracted about 1 L of solution whenever the TDS reached 5 g/L, to recover phosphorus and simultaneously control salt accumulation in the bioreactor.
As the experiment progressed, the average water flux was around 1.6 LMH. The AnOMBR process showed greater than 95% removal of soluble chemical oxygen demand (sCOD) and nearly 100% removal of total phosphorus, but only partial removal of ammonia, and an average methane production of 0.22 L/g sCOD was obtained. The AnOMBR system thus uses periodic MF extraction for phosphorus recovery with simultaneous pH adjustment. The overall performance demonstrates that the novel submerged AnOMBR system has potential for simultaneous wastewater treatment and resource recovery, and hence this new concept could replace conventional AnMBRs in the future.
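As a quick sanity check on the reported flux, LMH is litres of permeate per square metre of membrane per hour. The sketch below uses the abstract's 130 cm² membrane area; the permeate volume and duration are illustrative assumptions chosen to reproduce a ~1.6 LMH figure, not measured values.

```python
# Back-of-envelope water-flux calculation. The 130 cm² area comes from the
# abstract; the permeate volume and collection time below are invented.

def water_flux_lmh(permeate_volume_l, area_cm2, hours):
    """Flux in LMH = litres per square metre of membrane per hour."""
    area_m2 = area_cm2 / 10_000.0  # 1 m² = 10,000 cm²
    return permeate_volume_l / (area_m2 * hours)

# e.g. ~0.5 L of permeate collected over 24 h across 130 cm²:
flux = water_flux_lmh(0.5, 130.0, 24.0)
```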

Keywords: anaerobic treatment, forward osmosis, phosphorus recovery, membrane bioreactor

Procedia PDF Downloads 237
78 Machine Learning Techniques in Seismic Risk Assessment of Structures

Authors: Farid Khosravikia, Patricia Clayton

Abstract:

The main objective of this work is to evaluate the advantages and disadvantages of various machine learning techniques in two key steps of seismic hazard and risk assessment for different types of structures. The first step is the development of ground-motion models, which are used for forecasting ground-motion intensity measures (IMs), given source characteristics, source-to-site distance, and local site conditions, for future events. IMs such as peak ground acceleration and velocity (PGA and PGV, respectively), as well as 5% damped elastic pseudospectral accelerations at different periods (PSA), are indicators of the strength of shaking at the ground surface. Typically, linear regression-based models, with pre-defined equations and coefficients, are used in ground motion prediction. However, due to the restrictions of linear regression methods, such models may not capture the more complex nonlinear behaviors that exist in the data. Thus, this study comparatively investigates the potential benefits of employing other machine learning techniques as statistical methods in ground motion prediction, such as Artificial Neural Networks, Random Forests, and Support Vector Machines. The results indicate that these algorithms satisfy physically sound characteristics, such as magnitude scaling and distance dependency, without requiring pre-defined equations or coefficients. Moreover, it is shown that, when sufficient data are available, all the alternative algorithms tend to provide more accurate estimates than the conventional linear regression-based method; in particular, Random Forest outperforms the other algorithms. The conventional method, however, is a better tool when limited data are available. Second, it is investigated how machine learning techniques could be beneficial for developing probabilistic seismic demand models (PSDMs), which provide the relationship between structural demand responses (e.g., component deformations, accelerations, internal forces, etc.)
and the ground motion IMs. In the risk framework, such models are used to develop fragility curves estimating the probability of exceeding damage for pre-defined limit states, and they therefore control the reliability of the predictions in the risk assessment. In this study, machine learning algorithms such as artificial neural network, random forest, and support vector machine are adopted and trained on the demand parameters to derive PSDMs. It is observed that such models can provide more accurate estimates in a relatively shorter amount of time compared to conventional methods. Moreover, they can be used for sensitivity analysis of fragility curves with respect to many modeling parameters without necessarily requiring additional computationally intensive numerical response-history analyses.
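The "conventional" baseline in the comparison above can be made concrete. The sketch below fits a fixed-form linear ground-motion model of the kind the abstract describes, ln(IM) = c0 + c1·M + c2·ln(R), by ordinary least squares. The functional form is a common simplification and the data are synthetic, generated from assumed coefficients; real GMMs use recorded motions and richer terms (site condition, anelastic attenuation, etc.).

```python
# Minimal sketch of a fixed-form linear regression ground-motion model.
# Data are synthetic, generated from assumed coefficients c = (-4.0, 1.2, -1.5).
import math

def fit_linear(rows):
    """Ordinary least squares for y = c0 + c1*x1 + c2*x2 via normal equations."""
    xtx = [[0.0] * 3 for _ in range(3)]
    xty = [0.0] * 3
    for (x1, x2, y) in rows:
        x = (1.0, x1, x2)
        for i in range(3):
            xty[i] += x[i] * y
            for j in range(3):
                xtx[i][j] += x[i] * x[j]
    # Solve the 3x3 normal equations by Gauss-Jordan elimination with pivoting.
    a = [xtx[i] + [xty[i]] for i in range(3)]
    for col in range(3):
        piv = max(range(col, 3), key=lambda r: abs(a[r][col]))
        a[col], a[piv] = a[piv], a[col]
        for r in range(3):
            if r != col:
                f = a[r][col] / a[col][col]
                a[r] = [a[r][k] - f * a[col][k] for k in range(4)]
    return [a[i][3] / a[i][i] for i in range(3)]

# Synthetic "records": magnitude M, distance R (km), noiseless ln(PGA).
data = [(m, math.log(r), -4.0 + 1.2 * m - 1.5 * math.log(r))
        for m in (4.5, 5.0, 5.5, 6.0, 6.5, 7.0)
        for r in (10.0, 30.0, 100.0)]
c0, c1, c2 = fit_linear(data)
```

Because the coefficients and form are fixed in advance, anything the data contain beyond this shape is lost, which is exactly the limitation the tree- and network-based alternatives aim to remove.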

Keywords: artificial neural network, machine learning, random forest, seismic risk analysis, seismic hazard analysis, support vector machine

Procedia PDF Downloads 83
77 Advancing Food System Resilience by Pseudocereals Utilization

Authors: Yevheniia Varyvoda, Douglas Taren

Abstract:

At the aggregate level, climate variability, the rising number of active violent conflicts, globalization and industrialization of agriculture, the loss in diversity of crop species, the increase in demand for agricultural production, and the adoption of healthy and sustainable dietary patterns are exacerbating factors of food system destabilization. The importance of pseudocereals to fuel and sustain resilient food systems is recognized by leading organizations working to end hunger, particularly for their critical capability to diversify livelihood portfolios and provide plant-sourced healthy nutrition in the face of systemic shocks and stresses. Amaranth, buckwheat, and quinoa are the most promising and used pseudocereals for ensuring food system resilience in the reality of climate change due to their high nutritional profile, good digestibility, palatability, medicinal value, abiotic stress tolerance, pest and disease resistance, rapid growth rate, adaptability to marginal and degraded lands, high genetic variability, low input requirements, and income generation capacity. The study provides the rationale and examples of advancing local and regional food systems' resilience by scaling up the utilization of amaranth, buckwheat, and quinoa along all components of food systems to architect indirect nutrition interventions and climate-smart approaches. Thus, this study aims to explore the drivers for ancient pseudocereal utilization, the potential resilience benefits that can be derived from using them, and the challenges and opportunities for pseudocereal utilization within the food system components. The PSALSAR framework regarding the method for conducting systematic review and meta-analysis for environmental science research was used to answer these research questions. 
Nevertheless, the utilization of pseudocereals has been slow for a number of reasons, namely the increased production of commercial and major staples such as maize, rice, wheat, soybean, and potato; displacement due to pressure from imported crops; lack of knowledge about value-adding practices in the food supply chain; limited technical knowledge and awareness of nutritional and health benefits; the absence of marketing channels; and limited access to extension services and information about resilient crops. The success of climate-resilient pathways based on pseudocereal utilization underlines the importance of co-designed activities that use modern technologies, the high-value traditional knowledge of underutilized crops, and a strong acknowledgment of cultural norms to increase community-level economic and food system resilience.

Keywords: resilience, pseudocereals, food system, climate change

Procedia PDF Downloads 57
76 Mechanical Properties and Antibiotic Release Characteristics of Poly(methyl methacrylate)-based Bone Cement Formulated with Mesoporous Silica Nanoparticles

Authors: Kumaran Letchmanan, Shou-Cang Shen, Wai Kiong Ng

Abstract:

Postoperative implant-associated infections in soft tissues and bones remain a serious complication in orthopaedic surgery, leading to impaired healing, re-implantation, prolonged hospital stays, and increased costs. Drug-loaded implants with sustained release of antibiotics at the local site are of current research interest to reduce the risk of post-operative infections and osteomyelitis, and thus to minimize the need for follow-up care and increase patient comfort. However, the improved drug release of drug-loaded bone cements is usually accompanied by a loss of mechanical strength, which is critical for weight-bearing bone cement. Recently, more attempts have been undertaken to develop techniques that enhance antibiotic elution while preserving the mechanical properties of the bone cement. The present study investigates the influence of adding mesoporous silica nanoparticles (MSN) on the in vitro drug release kinetics of gentamicin (GTMC), along with the mechanical properties of the bone cements. Simplex P was formulated with MSN and loaded with GTMC by direct impregnation; Simplex P with a water-soluble poragen (xylitol), Simplex P with a high loading of GTMC, and the commercial bone cement CMW Smartset GHV were used as controls. MSN-formulated bone cements increase the release of GTMC three-fold, with a cumulative release of more than 46%, compared with the control groups, and a sustained release could be achieved for two months. The loaded nano-sized MSN, with its uniform pore channels, builds up an effective nano-network path in the bone cement that facilitates the diffusion and extended release of GTMC. Compared with the formulations using xylitol and high GTMC loading, the incorporation of MSN shows no detrimental effect on biomechanical properties: there are no significant changes in the mechanical properties relative to the original bone cement.
After drug release for two months, the bending modulus of the MSN-formulated bone cements is 4.49 ± 0.75 GPa and the compression strength is 92.7 ± 2.1 MPa (similar to the compression strength of Simplex P: 93.0 ± 1.2 MPa). The unaffected mechanical properties of the MSN-formulated bone cements are due to the unchanged microstructure of the cement: more than 98% of the MSN remains in the matrix and supports the bone cement structure. In contrast, large portions of extra voids can be observed in the formulations using xylitol and high drug loading after the drug release study, causing compressive strengths below the ASTM F451 and ISO 5833 minimum of 70 MPa. These results demonstrate the potential applicability of MSN-functionalized poly(methyl methacrylate)-based bone cement as a highly efficient, sustained, local drug delivery system with good mechanical properties.

Keywords: antibiotics, biomechanical properties, bone cement, sustained release

Procedia PDF Downloads 236
75 Distribution and Ecological Risk Assessment of Trace Elements in Sediments along the Ganges River Estuary, India

Authors: Priyanka Mondal, Santosh K. Sarkar

Abstract:

The present study investigated the spatiotemporal distribution and ecological risk of trace elements in surface sediments (top 0-5 cm; grain size ≤ 0.63 µm) in relation to sediment quality characteristics along the Ganges River Estuary, India. Sediment samples were collected during ebb tide from intertidal regions covering seven sampling sites under diverse environmental stresses, and the elements were analyzed by ICP-AES. This positive, mixohaline, macro-tidal estuary has global significance, contributing ecological and economic services. The presence of fine clay particles (47.03%) enhances the adsorption as well as the transportation of trace elements. There is a remarkable inter-metallic variation (mg kg⁻¹ dry weight) in the distribution pattern, in the following order: Al (31801 ± 15943) > Fe (23337 ± 7584) > Mn (461 ± 147) > S (381 ± 235) > Zn (54 ± 18) > V (43 ± 14) > Cr (39 ± 15) > As (34 ± 15) > Cu (27 ± 11) > Ni (24 ± 9) > Se (17 ± 8) > Co (11 ± 3) > Mo (10 ± 2) > Hg (0.02 ± 0.01). An overall enrichment of the majority of trace elements was pronounced at the site Lot 8, ~35 km upstream of the estuarine mouth. In contrast, minimum concentrations were recorded at the high-energy site Gangasagar, at the mouth of the estuary. The prevalent variations in trace element distribution are attributable to a set of cumulative factors: hydrodynamic conditions, sediment dispersion patterns, and textural variations, as well as non-homogeneous inputs of contaminants from point and non-point sources. To gain insight into trace element distribution, accumulation, and pollution status, the geoaccumulation index (Igeo) and enrichment factor (EF) were used. The Igeo indicated that surface sediments were moderately polluted with As (0.60) and Mo (1.30) and strongly contaminated with Se (4.0).
The EF indicated severe pollution by Se (53.82) and significant pollution by As (4.05) and Mo (6.0), pointing to an influx of As, Mo, and Se into the sediments from anthropogenic sources (such as industrial and municipal sewage, atmospheric deposition, and agricultural run-off). The significant role of the megacity Calcutta, through untreated sewage discharge, atmospheric inputs, and other anthropogenic activities, is worth mentioning. The ecological risk for the trace elements was evaluated using the sediment quality guidelines effects range low (ERL) and effects range median (ERM). The concentrations of As, Cu, and Ni at 100%, 43%, and 86% of the sampling sites, respectively, exceeded the ERL value, while no element concentration exceeded the ERM. The potential ecological risk index values revealed that As at 14.3% of the sampling sites would pose a relatively moderate risk to benthic organisms. The effective role of finer clay particles in trace element distribution was revealed by multivariate analysis. The authors strongly recommend regular monitoring, emphasizing accurate appraisal of the potential risk of trace elements, for effective and sustainable management of this estuarine environment.
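The two indices used above have standard definitions that are easy to state in code: Igeo = log2(Cn / (1.5·Bn)), and EF compares the sample's element-to-reference ratio with the background's. The sketch below uses the study's Se and Al concentrations but invented background values, so the numbers it produces are illustrative only.

```python
# Sketch of the two pollution indices. The background values below are
# placeholders; the study's own baselines should be substituted.
import math

def igeo(c_sample, c_background):
    """Geoaccumulation index: Igeo = log2(Cn / (1.5 * Bn)).
    The factor 1.5 compensates for natural background fluctuations."""
    return math.log2(c_sample / (1.5 * c_background))

def enrichment_factor(c_sample, ref_sample, c_background, ref_background):
    """EF = (Cn/Ref)_sample / (Bn/Ref)_background, normalized to a
    conservative reference element (commonly Al or Fe)."""
    return (c_sample / ref_sample) / (c_background / ref_background)

# Illustrative numbers only: Se at 17 mg/kg against an assumed background
# of 0.66 mg/kg, normalized to Al (sample 31801 mg/kg, assumed background).
se_igeo = igeo(17.0, 0.66)
se_ef = enrichment_factor(17.0, 31801.0, 0.66, 80000.0)
```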

Keywords: pollution assessment, sediment contamination, sediment quality, trace elements

Procedia PDF Downloads 239
74 Fructose-Aided Cross-Linked Enzyme Aggregates of Laccase: An Insight on Its Chemical and Physical Properties

Authors: Bipasa Dey, Varsha Panwar, Tanmay Dutta

Abstract:

Laccase, a multicopper oxidase (EC 1.10.3.2), has been at the forefront as a superior industrial biocatalyst. Laccases are versatile in conferring sustainable and ecological catalytic reactions such as polymerisation, xenobiotic degradation, and the bioremediation of phenolic and non-phenolic compounds. Regardless of their wide biotechnological applications, the critical limiting factors, viz. reusability, retrieval, and storage stability, still prevail and can impede their applicability. Cross-linked enzyme aggregates (CLEAs) have emerged as a promising technique that restores these essential facets, albeit at the expense of some enzymatic activity. The carrier-free crosslinking method prevails over carrier-bound immobilisation in conferring high productivity and low production cost, owing to the absence of an additional carrier, and it circumvents any non-catalytic ballast that could dilute the volumetric activity. To the best of our knowledge, the ε-amino group of lysyl residues is the best choice for forming a Schiff's base with glutaraldehyde. Although it is the most preferable crosslinker, excess glutaraldehyde can bring about disproportionate and undesirable crosslinking within the catalytic site and hence cause catalytic losses. Moreover, the surface distribution of lysine residues in Trametes versicolor laccase is significantly sparse. Thus, to mitigate the adverse effects of glutaraldehyde and to scale down degradation and catalytic loss of the enzyme, crosslinking with inert substances like gelatine, collagen, bovine serum albumin (BSA), or excess lysine is practiced. Analogous to these molecules, sugars are well known as protein stabilisers: they help retain the structural integrity, specifically the secondary structure, of a protein during aggregation by changing the solvent properties, and they are understood to avert protein denaturation and enzyme deactivation during precipitation.
We prepared cross-linked enzyme aggregates (CLEAs) of laccase from T. versicolor with the aid of sugars and compared these sugar CLEAs with the classic BSA and glutaraldehyde laccase CLEAs with respect to their physico-chemical properties. The activity recovery of the fructose CLEAs was found to be ~20% higher than that of the non-sugar CLEA. Moreover, the kcat/Km values of these CLEAs were two- and three-fold higher than those of BSA-CLEA and GA-CLEA, respectively. The half-life (t1/2) of the sugar CLEA was higher than the t1/2 of the GA-CLEAs and the free enzyme, indicating greater thermal stability. Besides, it demonstrated extraordinarily high pH stability, analogous to that of BSA-CLEA. The promising attributes of increased storage stability and recyclability (>80%) give the sugar CLEAs an edge over conventional CLEAs of the corresponding free enzyme. Thus, sugar CLEAs furnish the rudimentary properties required of a biocatalyst and hold many prospects.

Keywords: cross-linked enzyme aggregates, laccase immobilization, enzyme reusability, enzyme stability

Procedia PDF Downloads 63
73 Separating Landform from Noise in High-Resolution Digital Elevation Models through Scale-Adaptive Window-Based Regression

Authors: Anne M. Denton, Rahul Gomes, David W. Franzen

Abstract:

High-resolution elevation data are becoming increasingly available, but typical approaches for computing topographic features, like slope and curvature, still assume small sliding windows, for example, of size 3x3. That means that the digital elevation model (DEM) has to be resampled to the scale of the landform features that are of interest. Any higher resolution is lost in this resampling. When the topographic features are computed through regression that is performed at the resolution of the original data, the accuracy can be much higher, and the reported result can be adjusted to the length scale that is relevant locally. Slope and variance are calculated for overlapping windows, meaning that one regression result is computed per raster point. The number of window centers per area is the same for the output as for the original DEM. Slope and variance are computed by performing regression on the points in the surrounding window. Such an approach is computationally feasible because of the additive nature of regression parameters and variance. Any doubling of window size in each direction only takes a single pass over the data, corresponding to a logarithmic scaling of the resulting algorithm as a function of the window size. Slope and variance are stored for each aggregation step, allowing the reported slope to be selected to minimize variance. The approach thereby adjusts the effective window size to the landform features that are characteristic to the area within the DEM. Starting with a window size of 2x2, each iteration aggregates 2x2 non-overlapping windows from the previous iteration. Regression results are stored for each iteration, and the slope at minimal variance is reported in the final result. As such, the reported slope is adjusted to the length scale that is characteristic of the landform locally. The length scale itself and the variance at that length scale are also visualized to aid in interpreting the results for slope. 
The relevant length scale is taken to be half of the window size over which the minimum variance was achieved. The resulting process was evaluated on 1-meter DEM data and on artificial data constructed to have defined length scales plus added noise. A comparison with ESRI ArcMap was performed and showed the potential of the proposed algorithm: the resolution of the resulting output is much higher, and the slope and aspect are much less affected by noise. Additionally, the algorithm adjusts to the scale of interest within each region of the image. These benefits are gained without additional computational cost in comparison with resampling the DEM and computing the slope over 3x3 windows in ESRI ArcMap at each resolution. In summary, the proposed approach extracts the slope and aspect of DEMs at the length scales that are characteristic locally. The result is of higher resolution and less affected by noise than existing techniques.
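The "additive nature of regression parameters and variance" is the core trick above: each window keeps sufficient statistics, and 2x2 non-overlapping windows merge by simple addition, so each doubling of window size costs one pass over the data. The sketch below, on an invented 4x4 toy DEM, tracks only the variance bookkeeping (count, sum, sum of squares); the full method additionally carries the cross-terms (sums of x·z, y·z, etc.) needed for the plane fit.

```python
# Sketch of additive 2x2 window aggregation, variance bookkeeping only.

def stats_grid(dem):
    """Start at 1x1 windows: (count, sum, sum of squares) per cell."""
    return [[(1, z, z * z) for z in row] for row in dem]

def aggregate(grid):
    """Merge 2x2 non-overlapping windows by adding their statistics."""
    out = []
    for i in range(0, len(grid), 2):
        row = []
        for j in range(0, len(grid[0]), 2):
            cells = [grid[i][j], grid[i][j + 1],
                     grid[i + 1][j], grid[i + 1][j + 1]]
            n = sum(c[0] for c in cells)
            s = sum(c[1] for c in cells)
            ss = sum(c[2] for c in cells)
            row.append((n, s, ss))
        out.append(row)
    return out

def variance(cell):
    n, s, ss = cell
    return ss / n - (s / n) ** 2

# 4x4 toy DEM: two aggregation steps collapse it to a single window.
dem = [[1.0, 1.0, 5.0, 5.0],
       [1.0, 1.0, 5.0, 5.0],
       [2.0, 2.0, 6.0, 6.0],
       [2.0, 2.0, 6.0, 6.0]]
g1 = aggregate(stats_grid(dem))     # 2x2 windows
g2 = aggregate(g1)                  # one 4x4 window
```

Storing the per-scale statistics at every step is what lets the final pass report, per location, the slope at the scale of minimum variance.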

Keywords: high resolution digital elevation models, multi-scale analysis, slope calculation, window-based regression

Procedia PDF Downloads 104
72 Inhibition of Mild Steel Corrosion in Hydrochloric Acid Medium Using an Aromatic Hydrazide Derivative

Authors: Preethi Kumari P., Shetty Prakasha, Rao Suma A.

Abstract:

Mild steel has been widely employed as a construction material for pipework in oil and gas production, such as downhole tubulars, flow lines, and transmission pipelines, and in the chemical and allied industries for handling acids, alkalis, and salt solutions, owing to its excellent mechanical properties and low cost. Acid solutions are widely used for the removal of undesirable scale and rust in many industrial processes. Among the commercially available acids, hydrochloric acid is widely used for pickling, cleaning, de-scaling, and acidization in oil processing. Mild steel exhibits poor corrosion resistance in hydrochloric acid: its high reactivity is due to the soluble nature of the ferrous chloride formed, and the cementite phase (Fe3C) normally present in the steel is also readily soluble in hydrochloric acid. Pitting attack is reported to be a major form of corrosion of mild steel at high acid concentrations, causing the complete destruction of the metal; hydrogen from the acid reacts with the metal surface, making it brittle and causing cracks, which leads to pitting corrosion. The use of chemical inhibitors to minimize the rate of corrosion has been considered the first line of defense against corrosion. In spite of the long history of corrosion inhibition, a highly efficient and durable inhibitor that can completely protect mild steel in aggressive environments is yet to be realized. It is clear from the literature that there is ample scope for the development of new organic inhibitors that can be conveniently synthesized from relatively cheap raw materials and provide good inhibition efficiency with the least risk of environmental pollution.
The aim of the present work is to evaluate the electrochemical parameters for the corrosion inhibition behavior of an aromatic hydrazide derivative, 4-hydroxy-N'-[(E)-(1H-indol-2-ylmethylidene)]benzohydrazide (HIBH), on mild steel in 2 M hydrochloric acid using Tafel polarization and electrochemical impedance spectroscopy (EIS) techniques at 30–60 °C. The results showed that inhibition efficiency increased with increasing inhibitor concentration and decreased marginally with increasing temperature. HIBH showed a maximum inhibition efficiency of 95% at a concentration of 8×10⁻⁴ M at 30 °C. Polarization curves showed that HIBH acts as a mixed-type inhibitor. The adsorption of HIBH on the mild steel surface obeys the Langmuir adsorption isotherm. The adsorption of HIBH at the mild steel/hydrochloric acid solution interface is mixed, with predominantly physisorption at lower temperatures and chemisorption at higher temperatures. Thermodynamic parameters for the adsorption process and kinetic parameters for the metal dissolution reaction were determined.
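The Langmuir fit mentioned above can be sketched numerically. The concentrations and efficiencies below are hypothetical stand-ins (the paper's raw data are not reproduced here); the linearized form C/θ = 1/K_ads + C is fitted, and a slope close to unity indicates Langmuir behaviour:

```python
import numpy as np

# Hypothetical inhibitor concentrations (M) and inhibition efficiencies (%);
# the paper's raw data are not reproduced here.
C = np.array([1e-4, 2e-4, 4e-4, 8e-4])
IE = np.array([78.0, 86.0, 91.0, 95.0])

theta = IE / 100.0                              # surface coverage
# Linearized Langmuir isotherm: C/theta = 1/K_ads + C
slope, intercept = np.polyfit(C, C / theta, 1)
K_ads = 1.0 / intercept                         # adsorption constant (M^-1)
```

For data obeying the isotherm, the fitted slope is close to 1, and K_ads can then feed the standard free energy of adsorption via ΔG°ads = −RT ln(55.5·K_ads).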

Keywords: electrochemical parameters, EIS, mild steel, Tafel polarization

Procedia PDF Downloads 310
71 Ibrutinib and the Potential Risk of Cardiac Failure: A Review of Pharmacovigilance Data

Authors: Abdulaziz Alakeel, Roaa Alamri, Abdulrahman Alomair, Mohammed Fouda

Abstract:

Introduction: Ibrutinib is a selective, potent, and irreversible small-molecule inhibitor of Bruton's tyrosine kinase (BTK). It forms a covalent bond with a cysteine residue (CYS-481) at the active site of BTK, leading to inhibition of BTK enzymatic activity. The drug is indicated to treat certain types of cancer, such as mantle cell lymphoma (MCL), chronic lymphocytic leukaemia and Waldenström's macroglobulinaemia (WM). Cardiac failure refers to the inability of the heart muscle to pump adequate blood to the body's organs. There are multiple types of cardiac failure, including left- and right-sided heart failure and systolic and diastolic heart failure. The aim of this review is to evaluate the risk of cardiac failure associated with the use of ibrutinib and to suggest regulatory recommendations if required. Methodology: The Signal Detection team at the National Pharmacovigilance Center (NPC) of the Saudi Food and Drug Authority (SFDA) performed a comprehensive signal review using its national database as well as the World Health Organization (WHO) database (VigiBase) to retrieve information for assessing the causality between cardiac failure and ibrutinib. We used the WHO-Uppsala Monitoring Centre (UMC) criteria as the standard for assessing the causality of the reported cases. Results: Case review: the search for the combined drug/adverse drug reaction retrieved 212 global individual case safety reports (ICSRs) as of July 2020. The reviewers selected and assessed causality for the well-documented ICSRs with completeness scores of 0.9 and above (35 ICSRs); the value 1.0 represents the highest completeness score. Among the reviewed cases, more than half provide a supportive association (four probable and 15 possible cases). Data mining: the disproportionality between the observed and the expected reporting rate for the drug/adverse drug reaction pair is estimated using the information component (IC), a tool developed by the WHO-UMC to measure the reporting ratio.
A positive IC reflects a stronger statistical association, while negative values indicate a weaker one, with the null value equal to zero. The result (IC = 1.5) revealed a positive statistical association for the drug/ADR combination, meaning that “Ibrutinib” with “Cardiac Failure” has been observed more often than expected when compared with other medications in the WHO database. Conclusion: Health regulators and health care professionals must be aware of the potential risk of cardiac failure associated with ibrutinib, and monitoring treated patients for any signs or symptoms is essential. The weighted cumulative evidence identified from the causality assessment of the reported cases and from data mining is sufficient to support a causal association between ibrutinib and cardiac failure.
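The disproportionality measure described above can be illustrated with a minimal sketch. The counts below are hypothetical, and this simplified formula (observed versus expected on a log2 scale, with a small shrinkage term added as an assumption) omits the credibility-interval machinery of the full WHO-UMC IC:

```python
import math

def information_component(n_pair, n_drug, n_adr, n_total):
    """Simplified information component (IC): log2 ratio of the observed to
    the expected number of reports for a drug/ADR pair. The +0.5 shrinkage
    term (an assumption here) stabilises estimates for rarely reported pairs."""
    expected = n_drug * n_adr / n_total   # expected count under independence
    return math.log2((n_pair + 0.5) / (expected + 0.5))
```

A pair reported more often than expected yields a positive IC; one reported less often yields a negative IC.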

Keywords: cardiac failure, drug safety, ibrutinib, pharmacovigilance, signal detection

Procedia PDF Downloads 105
70 Spatial Variability of Soil Metal Contamination to Detect Cancer Risk Zones in Coimbatore Region of India

Authors: Aarthi Mariappan, Janani Selvaraj, P. B. Harathi, M. Prashanthi Devi

Abstract:

Anthropogenic modification of the urban environment has increased greatly in recent years in order to sustain the growing human population. Intense industrial activity, permanent heavy road traffic, a developed subterranean infrastructure network and distinctive land use patterns are just some of its characteristics. Every day, the urban environment is polluted by more or less toxic emissions and by organic or metal wastes discharged from industrial, commercial and municipal activities. When these eventually deposit into the soil, the physical and chemical properties of the surrounding soil are changed, transforming it into an indicator of human exposure. Metals are non-degradable and accumulate in soil through the regular deposition that results from continuous human activity. They therefore contaminate soil when they persist over long periods of time and pose a danger to inhabitants' health on prolonged exposure. Metals accumulated in contaminated soil may be transferred to humans directly, by inhalation of dust raised from topsoil, by ingestion or by dermal contact, and indirectly, through plants and animals grown on contaminated soil and used for food. Some metals, like Cu, Mn and Zn, are beneficial for human health and represent a danger only if their concentrations exceed permissible levels, but other metals, like Pb, As, Cd and Hg, are toxic even at trace levels, causing gastrointestinal and lung cancers. In urban areas, metals can be emitted from a wide variety of sources, including industrial, residential and commercial activities. Our study examines the spatial distribution of heavy metals in soil in relation to their permissible levels and their association with health risk to the urban population of Coimbatore, India. The Coimbatore region is a high cancer risk zone, and case records of gastrointestinal and respiratory cancer patients were collected from hospitals and geocoded in ArcGIS 10.1.
Records of patients residing within the urban limits were retained and checked for disease history based on diagnosis and treatment. A disease map of cancer was prepared to show the disease distribution. In our study area, Cr, Pb, As, Fe and Mg exceeded their permissible levels in the soil. Using spatial overlay analysis, a relationship between environmental exposure to these potentially toxic elements in soil and the distribution of cancer in Coimbatore district was established to delineate areas of cancer risk. Our study thereby sheds light on the impact of prolonged exposure to soil contamination in urban zones, exploring the possibility of detecting cancer risk zones and of creating awareness of cancer risk among the exposed groups.
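The overlay logic described above can be sketched as follows, assuming (hypothetically) that metal concentrations have been rasterized onto the same grid as the geocoded cases; the concentrations and permissible limits shown are illustrative, not the study's values:

```python
import numpy as np

# Hypothetical 2x2 rasters of soil Pb and Cr concentrations (mg/kg) over the
# study grid; the permissible limits are illustrative placeholders.
pb = np.array([[90.0, 80.0], [70.0, 120.0]])
cr = np.array([[50.0, 60.0], [150.0, 40.0]])
PB_LIMIT, CR_LIMIT = 85.0, 100.0

risk = (pb > PB_LIMIT) | (cr > CR_LIMIT)   # cells exceeding any limit

# Geocoded cancer cases expressed as (row, col) raster indices after projection.
cases = [(0, 0), (0, 1), (1, 1)]
cases_in_risk = sum(bool(risk[r, c]) for r, c in cases)
```

Counting cases that fall in exceedance cells is the simplest form of the overlay; a GIS package would do the same point-in-polygon join on real coordinates.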

Keywords: soil contamination, cancer risk, spatial analysis, India

Procedia PDF Downloads 381
69 Reducing the Computational Cost of a Two-way Coupling CFD-FEA Model via a Multi-scale Approach for Fire Determination

Authors: Daniel Martin Fellows, Sean P. Walton, Jennifer Thompson, Oubay Hassan, Kevin Tinkham, Ella Quigley

Abstract:

Structural integrity for cladding products is a key performance parameter, especially concerning fire performance. Cladding products such as PIR-based sandwich panels are tested rigorously, in line with industrial standards. Physical fire tests are necessary to ensure the customer's safety but give little information about critical behaviours that could help develop new materials. Numerical modelling is a tool that can help investigate a fire's behaviour further by replicating the fire test. However, fire is an interdisciplinary problem: it is a chemical reaction that behaves fluidly and impacts structural integrity. An analysis using Computational Fluid Dynamics (CFD) and Finite Element Analysis (FEA) is therefore needed to capture all aspects of a fire performance test. One method is a two-way coupling analysis that imports the updated thermal data, reflecting the fire's behaviour, into the FEA solver over a series of iterations. In our recent work with Tata Steel U.K. using a two-way coupling methodology to determine fire performance, it was shown that a program called FDS-2-Abaqus can predict a BS 476-22 furnace test with a degree of accuracy. The test demonstrated the fire performance of Tata Steel U.K.'s Trisomet product, a polyisocyanurate (PIR)-based sandwich panel used for cladding. Previous work demonstrated the limitations of the current version of the program, the main one being the computational cost of modelling three Trisomet panels, totalling an area of 9 m². The computational cost increases substantially when scaling up to an LPS 1181-1 test, which involves a total panel surface area of 200 m². The FDS-2-Abaqus program is developed further within this paper to overcome this obstacle and better accommodate Tata Steel U.K. PIR sandwich panels. The new developments aim to reduce the computational cost and the error margin relative to experimental data.
One avenue explored is a multi-scale approach in the form of Reduced Order Modelling (ROM). The approach allows the user to include refined details of the sandwich panels, such as the overlapping joints, without a computationally costly mesh size. Comparative studies will be made between the new implementations and the previous study completed using the original FDS-2-Abaqus program. Validation of the study will come from physical experiments in line with governing-body standards such as BS 476-22 and LPS 1181-1. The physical experimental data include the panels' gas and surface temperatures and mechanical deformation. Conclusions are drawn, noting the impact of the new implementations and discussing the feasibility of scaling up further to a whole warehouse.
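One common reduced-order-modelling ingredient, not necessarily the exact method used in FDS-2-Abaqus, is proper orthogonal decomposition: snapshots of the full-field solution are compressed into a few dominant modes. A minimal sketch with a synthetic three-mode snapshot matrix:

```python
import numpy as np

# Hypothetical snapshot matrix: 200 nodal temperatures at 50 coupling steps,
# built from three underlying spatial modes (a stand-in for CFD-FEA output).
x = np.linspace(0.0, 1.0, 200)
t = np.linspace(0.0, 10.0, 50)
snapshots = (1.00 * np.outer(np.sin(np.pi * x), np.cos(t))
             + 0.50 * np.outer(np.sin(2 * np.pi * x), np.cos(2 * t))
             + 0.25 * np.outer(np.sin(3 * np.pi * x), np.cos(3 * t)))

# Proper orthogonal decomposition: SVD of the snapshots, truncated to the
# modes that capture 99.9% of the energy.
U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
energy = np.cumsum(s**2) / np.sum(s**2)
r = int(np.searchsorted(energy, 0.999)) + 1
basis = U[:, :r]          # reduced basis for projecting the full model
```

Projecting the governing equations onto `basis` replaces the full mesh with only `r` unknowns per step, which is where the computational saving comes from.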

Keywords: fire testing, numerical coupling, sandwich panels, thermofluids

Procedia PDF Downloads 47
68 Preliminary Studies on Poloxamer-Based Hydrogels with Oregano Essential Oil as Potential Topical Treatment of Cutaneous Papillomas

Authors: Ana Maria Muț, Georgeta Coneac, Ioana Olariu, Ștefana Avram, Ioana Zinuca Pavel, Ionela Daliana Minda, Lavinia Vlaia, Cristina Adriana Dehelean, Corina Danciu

Abstract:

Oregano essential oil is obtained from different parts of the plant Origanum vulgare (fam. Lamiaceae); its primary components, carvacrol and thymol, are widely recognized for their antimicrobial activity, as well as their antiviral and antifungal properties. Poloxamers (Pluronic®) are triblock copolymers formed of three non-ionic blocks, with a hydrophobic polyoxypropylene central chain flanked by two hydrophilic polyoxyethylene chains. They are known for their biocompatibility and sensitivity to temperature changes (sol-to-gel transition of their aqueous solutions with increasing temperature), but also for their amphiphilic and surface-active nature, which determines the formation of micelles useful for solubilizing hydrophobic compounds such as the terpenes and terpenoids contained in essential oils. Thus, these polymers, listed in the European and US Pharmacopoeias and approved by the FDA, are widely used as solubilizers and gelling agents for various pharmaceutical preparations, including topical hydrogels. The aim of this study was to investigate the possibility of solubilizing oregano essential oil (OEO) in polymeric micelles using polyoxyethylene (PEO)-polyoxypropylene (PPO)-polyoxyethylene (PEO) triblock polymers to obtain semisolid systems suitable for topical application. A formulation screening was performed, using Pluronic® F-127 at a concentration of 20% and Pluronic® L-31, Pluronic® L-61 and Pluronic® L-62 at concentrations of 0.5%, 0.8% and 1%, respectively, to obtain the polymeric micelle-based systems. Then, to each selected system, with or without 10% absolute ethanol, 5% or 8% OEO was added. The resulting transparent poloxamer-based hydrogels containing solubilized OEO were further evaluated for pH and rheological characteristics (flow behaviour, viscosity, consistency and spreadability), using established techniques such as potentiometric titration, the stationary shear flow test, the penetrometric method and the parallel-plate method.
In vitro release and permeation studies of carvacrol from the hydrogels were also carried out, using vertical diffusion cells with a synthetic hydrophilic membrane and porcine skin, respectively. The pH values and rheological features of all tested formulations were in accordance with official requirements for semisolid cutaneous preparations. The formulation containing 0.8% Pluronic® L-31, 10% absolute ethanol, 8% OEO and water and the formulation with 1% Pluronic® L-31, 5% OEO and water produced the highest cumulative amounts of carvacrol released/permeated through the membrane. The present study demonstrated that oregano essential oil can be successfully solubilized in the investigated poloxamer-based hydrogels. These systems can be further investigated as a potential topical therapy for cutaneous papillomas. Funding: This research was funded by Project PN-III-P1-1.1-TE2019-0130, Contract number TE47, Romania.
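The "cumulative amount released" reported for diffusion-cell experiments is typically computed with a correction for the analyte withdrawn at earlier sampling points. A minimal sketch (the volumes and concentrations are hypothetical, not the study's data):

```python
def cumulative_release(concentrations, cell_volume_ml, sample_volume_ml):
    """Cumulative amount released at each sampling point in a vertical
    diffusion (Franz-type) cell, correcting for the analyte withdrawn with
    each previous sample. Concentrations in mg/mL; returns amounts in mg."""
    released = []
    withdrawn = 0.0
    for c in concentrations:
        released.append(c * cell_volume_ml + withdrawn)
        withdrawn += c * sample_volume_ml
    return released
```

Without the `withdrawn` correction the later time points would systematically underestimate the true cumulative release.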

Keywords: oregano essential oil, carvacrol, poloxamer, topical hydrogels

Procedia PDF Downloads 89
67 Low-Temperature Poly-Si Nanowire Junctionless Thin Film Transistors with Nickel Silicide

Authors: Yu-Hsien Lin, Yu-Ru Lin, Yung-Chun Wu

Abstract:

This work demonstrates ultra-thin poly-Si (polycrystalline silicon) nanowire junctionless thin-film transistors (NW JL-TFTs) with nickel silicide contacts. For the nickel silicide film, a two-step annealing process is designed to form an ultra-thin, uniform, low-sheet-resistance (Rs) Ni silicide film. The NW JL-TFT with nickel silicide contacts exhibits good electrical properties, including a high on/off current ratio (>10⁷), a subthreshold slope of 186 mV/dec, and low parasitic resistance. In addition, this work compares the electrical characteristics of NW JL-TFTs with nickel silicide and non-silicide contacts. Nickel silicide techniques are widely used for high-performance devices as device dimensions scale down, owing to the source/drain sheet resistance issue; the self-aligned silicide (salicide) technique is therefore used to reduce the series resistance of the device. Nickel silicide has several advantages, including a low-temperature process, low silicon consumption, freedom from bridging failure, smaller mechanical stress, and smaller contact resistance. The junctionless thin-film transistor (JL-TFT) is fabricated simply by heavily doping the channel and source/drain (S/D) regions simultaneously. Owing to this special doping profile, the JL-TFT has advantages such as a lower thermal budget, which makes it easier to integrate with high-k/metal-gate processes than conventional MOSFETs (metal-oxide-semiconductor field-effect transistors), a longer effective channel length than conventional MOSFETs, and avoidance of complicated source/drain engineering. To solve its turn-off problem, the JL-TFT needs an ultra-thin-body (UTB) structure so that the channel region is fully depleted in the off-state. On the other hand, the drive current (Iᴅ) declines as transistor features are scaled. Therefore, this work demonstrates ultra-thin poly-Si nanowire junctionless thin-film transistors with nickel silicide contacts.
This work investigates the low-temperature formation of a nickel silicide layer by physical vapor deposition (PVD) of a 15 nm Ni layer on the poly-Si substrate. Notably, a two-step annealing process is designed to form an ultra-thin, uniform, low-sheet-resistance (Rs) Ni silicide film. The first annealing step promoted Ni diffusion through a thin interfacial amorphous layer, after which the unreacted metal was lifted off. The second annealing step lowered the sheet resistance and firmly merged the silicide phase. The resulting ultra-thin poly-Si nanowire junctionless thin-film transistor (NW JL-TFT) with nickel silicide contacts exhibits a high on/off current ratio (>10⁷), a subthreshold slope of 186 mV/dec, and low parasitic resistance. In short, the NW JL-TFT with nickel silicide contacts exhibits competitive short-channel behavior and improved drive current.

Keywords: poly-Si, nanowire, junctionless, thin-film transistors, nickel silicide

Procedia PDF Downloads 216
66 Evaluation of Mixing and Oxygen Transfer Performances for a Stirred Bioreactor Containing P. chrysogenum Broths

Authors: A. C. Blaga, A. Cârlescu, M. Turnea, A. I. Galaction, D. Caşcaval

Abstract:

The performance of an aerobic stirred bioreactor for fungal fermentation was analyzed on the basis of mixing time and the oxygen mass transfer coefficient, by quantifying the influence of specific geometrical and operational parameters of the bioreactor, as well as the rheological behavior of Penicillium chrysogenum broth (free mycelia and mycelial aggregates). The rheological properties of the fungal broth, controlled by the biomass concentration, its growth rate, and its morphology, strongly affect the performance of the bioreactor. Experimental data showed that, for both morphological structures, the accumulation of fungal biomass induces a significant increase in broth viscosity and modifies the rheological behavior. For lower P. chrysogenum concentrations (both morphological conformations), the mixing time initially increases with aeration rate, reaches a maximum value, and then decreases. This variation can be explained by the formation of small bubbles, because the presence of the solid phase hinders bubble coalescence, while the rising velocity of bubbles is reduced by the high apparent viscosity of the fungal broth. With biomass accumulation, the variation of mixing time with aeration rate gradually changes, a continuous reduction of mixing time with increasing air input flow being obtained at 33.5 g/l d.w. P. chrysogenum. Owing to the higher apparent viscosity, which considerably reduces the relative contribution of mechanical agitation to broth mixing, these phenomena are more pronounced for P. chrysogenum free mycelia. Through the increase of broth apparent viscosity, biomass accumulation induces two significant effects on the oxygen transfer rate: a diminution of turbulence and a perturbation of the bubble dispersion-coalescence equilibrium. The increase of P. chrysogenum free mycelia concentration leads to a decrease of kLa values.
Thus, over the considered variation domain of the main parameters, namely air superficial velocity from 8.36×10⁻⁴ to 5.02×10⁻³ m/s and specific power input from 100 to 500 W/m³, kLa was reduced 3.7-fold as the biomass concentration increased from 4 to 36.5 g/l d.w. The broth containing P. chrysogenum mycelial aggregates exhibits a particular behavior from the point of view of oxygen transfer. Regardless of the bioreactor operating conditions, the increase of biomass concentration initially leads to an increase of the oxygen mass transfer rate, a phenomenon that can be explained by the interaction of pellets with bubbles. The results are related to the increase of the apparent viscosity of the broths over the mentioned range of biomass concentration: the apparent viscosity of the suspension of fungal mycelial aggregates increased 44.2-fold, and that of fungal free mycelia 63.9-fold, as CX increased from 4 to 36.5 g/l d.w. By means of the experimental data, mathematical correlations describing the influences of the considered factors on mixing time and kLa have been proposed. The proposed correlations can be used in bioreactor performance evaluation, optimization, and scale-up.
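Correlations of the kind proposed above are often of the power-law form kLa = a·(P/V)^α·(v_s)^β and can be fitted by log-linear least squares. A minimal sketch with synthetic data (the exponents and prefactor below are illustrative assumptions, not the paper's fitted values):

```python
import numpy as np

# Synthetic measurements: specific power input (W/m^3), superficial gas
# velocity (m/s), and kLa generated from an assumed power law.
PV = np.array([100.0, 200.0, 300.0, 400.0, 500.0])
vs = np.array([8.4e-4, 1.7e-3, 2.5e-3, 3.4e-3, 5.0e-3])
kla = 0.02 * PV**0.5 * vs**0.4

# Fit log(kLa) = log(a) + alpha*log(P/V) + beta*log(vs) by least squares.
X = np.column_stack([np.ones_like(PV), np.log(PV), np.log(vs)])
coef, *_ = np.linalg.lstsq(X, np.log(kla), rcond=None)
a, alpha, beta = float(np.exp(coef[0])), float(coef[1]), float(coef[2])
```

With real data, a biomass-concentration (or apparent-viscosity) term would be added to the design matrix in the same way.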

Keywords: biomass concentration, mixing time, oxygen mass transfer, P. chrysogenum broth, stirred bioreactor

Procedia PDF Downloads 308
65 Analysis of the Relationship between Micro-Regional Human Development and Brazil's Greenhouse Gases Emission

Authors: Geanderson Eduardo Ambrósio, Dênis Antônio Da Cunha, Marcel Viana Pires

Abstract:

Historically, human development has been based on economic gains associated with energy-intensive activities, which are often exhaustive in the emission of Greenhouse Gases (GHGs). This requires the establishment of GHG mitigation targets in order to decouple human development from emissions and prevent further climate change. Brazil is one of the largest GHG emitters, and it is critically important to discuss such reductions in an intra-national framework, with the objective of distributional equity, in order to explore the country's full mitigation potential without compromising the development of less developed societies. This research presents some initial considerations about which of Brazil's micro-regions should reduce emissions, when the reductions should begin, and what their magnitude should be. We started with the methodological assumption that human development and GHG emissions will evolve in the future as their behavior was observed in the past. Furthermore, we assume that once a micro-region becomes developed, it is able to maintain gains in human development without the need for continually growing GHG emission rates. The human development index and carbon dioxide equivalent emissions (CO2e) were extrapolated to the year 2050, which allowed us to calculate when each micro-region will become developed and the mass of GHGs emitted. The results indicate that Brazil will emit 300 Gt CO2e into the atmosphere between 2011 and 2050, of which only 50 Gt will be emitted by micro-regions before they develop and 250 Gt will be released after development. We also determined national mitigation targets and structured reduction schemes in which only the developed micro-regions would be required to reduce. The micro-region of São Paulo, the most developed in the country, should also be the one that reduces emissions the most, emitting, in 2050, 90% less than the value observed in 2010.
On the other hand, less developed micro-regions will be responsible for less demanding reductions; e.g., Vale do Ipanema will emit in 2050 only 10% less than the value observed in 2010. Under this methodological assumption, the country would emit, in 2050, 56.5% less than observed in 2010, so that cumulative emissions between 2011 and 2050 would fall by 130 Gt CO2e relative to the initial projection. Linking the magnitude of the reductions to the level of human development of the micro-regions encourages the adoption of policies that favor both variables, as the governmental planner will have to deal both with the increasing demand for higher standards of living and with the increasing magnitude of emission reductions. However, if economic agents do not act proactively at the local and national levels, the country is closer to the scenario in which it emits more than to the one in which it mitigates emissions. The research highlighted the importance of considering heterogeneity when determining individual mitigation targets and also confirmed the theoretical and methodological feasibility of allocating a larger share of the contribution to those who have historically emitted more. The proposals and discussions presented should be considered in the formulation of mitigation policy in Brazil regardless of the adopted reduction target.
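The sliding-scale allocation described above (São Paulo cutting 90%, Vale do Ipanema 10%) can be sketched as a simple linear interpolation in the human development index; the HDI bounds below are hypothetical placeholders, not values from the study:

```python
def reduction_target(hdi, hdi_min=0.60, hdi_max=0.85,
                     cut_min=0.10, cut_max=0.90):
    """Hypothetical sliding-scale rule in the spirit of the paper: the most
    developed micro-region cuts 90% of its 2010 emissions by 2050, the least
    developed only 10%, interpolating linearly in between."""
    if hdi <= hdi_min:
        return cut_min
    if hdi >= hdi_max:
        return cut_max
    frac = (hdi - hdi_min) / (hdi_max - hdi_min)
    return cut_min + frac * (cut_max - cut_min)
```

Any monotone mapping from development level to reduction share would implement the same equity principle; the linear form is just the simplest choice.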

Keywords: greenhouse gases, human development, mitigation, intensive energy activities

Procedia PDF Downloads 297
64 Methodological Deficiencies in Knowledge Representation Conceptual Theories of Artificial Intelligence

Authors: Nasser Salah Eldin Mohammed Salih Shebka

Abstract:

Current problematic issues in AI fields are mainly due to those of knowledge representation conceptual theories, which are in turn reflected across the entire scope of the cognitive sciences. Knowledge representation methods and tools are derived from theoretical concepts regarding the human scientific perception of the conception, nature, and process of knowledge acquisition, knowledge engineering, and knowledge generation. Although these theoretical conceptions were themselves derived from the study of the human knowledge representation process and related theories, some essential factors were overlooked or underestimated, causing critical methodological deficiencies in the conceptual theories of human knowledge and knowledge representation. The evaluation criteria of human cumulative knowledge, from the perspectives of the nature and theoretical aspects of knowledge representation conceptions, are greatly affected by the very materialistic nature of the cognitive sciences. This nature has caused what we define as methodological deficiencies in the theoretical aspects of knowledge representation concepts in AI. These deficiencies are not confined to applications of knowledge representation theories throughout AI fields but also extend to the scientific nature of the cognitive sciences. The methodological deficiencies we investigated in our work are: the segregation between cognitive abilities in knowledge-driven models; the insufficiency of the two-valued logic used to represent knowledge, particularly at the machine-language level, in relation to the problematic issues of semantics and meaning theories; and the deficient consideration of the parameters of existence and time in the structure of knowledge. The latter requires a more detailed introduction of the manner in which the meanings of existence and time are to be considered in the structure of knowledge.
This does not imply that such parameters are easy to apply in knowledge representation systems; rather, outlining a deficiency caused by their absence can be considered an attempt to redefine conceptual approaches to knowledge representation or, if that proves impossible, to construct a perspective on the possibility of simulating human cognition on machines. Furthermore, a redirection of the aforementioned expressions is required in order to formulate the exact meaning under discussion. This redirection of meaning shifts the role of the existence and time factors to the framework environment of the knowledge structure, and therefore to knowledge representation conceptual theories. The findings of our work indicate the necessity of differentiating between two comparative concepts when addressing the relation between the existence and time parameters and the structure of human knowledge. The topics presented throughout the paper can also be viewed as evaluation criteria for determining AI's capability to achieve its ultimate objectives. Ultimately, we argue that our findings do not suggest that scientific progress has reached its peak, or that human scientific evolution has reached a point where it is impossible to discover further facts about the human brain and detailed descriptions of how it represents knowledge; they simply imply that, unless these methodological deficiencies are properly addressed, the future of AI's qualitative progress remains questionable.

Keywords: cognitive sciences, knowledge representation, ontological reasoning, temporal logic

Procedia PDF Downloads 82
63 Air Pollution on Stroke in Shenzhen, China: A Time-Stratified Case Crossover Study Modified by Meteorological Variables

Authors: Lei Li, Ping Yin, Haneen Khreis

Abstract:

Stroke was the second leading cause of death and the third leading cause of death and disability combined worldwide in 2019. Given the significant role of environmental factors in stroke development and progression, it is essential to investigate the effect of air pollution on stroke occurrence while considering the modifying effects of meteorological variables. This study aimed to evaluate the association between short-term exposure to air pollution and the incidence of stroke subtypes in Shenzhen, China, and to explore the potential interactions of meteorological factors with air pollutants. The study analyzed data from January 1, 2006, to December 31, 2014, including 88,214 cases of ischemic stroke and 30,433 cases of hemorrhagic stroke among residents of Shenzhen. Using a time-stratified case-crossover design with conditional quasi-Poisson regression, the study estimated the percentage changes in stroke morbidity associated with short-term exposure to nitrogen dioxide (NO₂), sulfur dioxide (SO₂), particulate matter less than 10 μm in aerodynamic diameter (PM10), carbon monoxide (CO), and ozone (O₃). A five-day moving average of air pollution was applied to capture the cumulative effects of air pollution. The estimates were further stratified by sex, age, education level, and season. The additive and multiplicative interactions between air pollutants and meteorological variables were assessed by the relative excess risk due to interaction (RERI) and by adding an interaction term to the main model, respectively. The study found that NO₂ was positively associated with ischemic stroke occurrence throughout the year and in the cold season (November through April), with a stronger effect observed among men. Each 10 μg/m³ increment in the five-day moving average of NO₂ was associated with a 2.38% (95% confidence interval: 1.36% to 3.41%) increase in the risk of ischemic stroke over the whole year and a 3.36% (2.04% to 4.69%) increase in the cold season.
The harmful effect of CO on ischemic stroke was observed only in the cold season, with each 1 mg/m³ increment in the five-day moving average of CO increasing the risk by 12.34% (3.85% to 21.51%). There was no statistically significant additive interaction between individual air pollutants and temperature or relative humidity, as assessed by the RERI. The interaction term in the model showed a multiplicative antagonistic effect between NO₂ and temperature (p-value = 0.0268). For hemorrhagic stroke, no evidence of an effect of any individual air pollutant was found in the whole population. However, the RERI indicated a statistically significant additive and multiplicative interaction of temperature with the effects of PM10 and O₃ on hemorrhagic stroke onset; this null finding should therefore be interpreted with caution. The study suggests that environmental NO₂ and CO might increase the morbidity of ischemic stroke, particularly during the cold season. These findings could help inform policy decisions aimed at reducing air pollution levels to prevent stroke and other health conditions. Additionally, the study provides valuable insights into the interaction between air pollution and meteorological variables, which underscores the need for further research into the complex relationship between environmental factors and health.
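The percentage changes reported above follow directly from the log-linear (quasi-)Poisson coefficients: a coefficient β per unit of exposure implies a (e^(10β) − 1)×100% change per 10-unit increment. A minimal sketch:

```python
import math

def percent_change(beta, increment=10.0):
    """Percent change in risk per `increment` exposure units implied by a
    log-linear (quasi-)Poisson coefficient `beta` (per unit of exposure)."""
    return (math.exp(beta * increment) - 1.0) * 100.0

# Recover the per-unit coefficient from a reported 2.38% change per 10 ug/m3.
beta_no2 = math.log(1.0238) / 10.0
```

The same transform applied to the coefficient's confidence limits yields the percentage-change interval quoted in the abstract.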

Keywords: air pollution, meteorological variables, interactive effect, seasonal pattern, stroke

Procedia PDF Downloads 58
62 Using Group Concept Mapping to Identify a Pharmacy-Based Trigger Tool to Detect Adverse Drug Events

Authors: Rodchares Hanrinth, Theerapong Srisil, Peeraya Sriphong, Pawich Paktipat

Abstract:

The trigger tool is a low-cost, low-tech method of detecting adverse events through clues called triggers. The Institute for Healthcare Improvement (IHI) has developed the Global Trigger Tool for measuring and preventing adverse events; however, this tool is not specific to adverse drug events (ADEs), so a pharmacy-based trigger tool is needed to detect them. Group concept mapping is an effective method for conceptualizing ideas from diverse stakeholders, and this technique was used to identify a pharmacy-based trigger tool for detecting ADEs. The aim of this study was to involve pharmacists in conceptualizing, developing, and prioritizing a feasible trigger tool to detect adverse drug events in a provincial hospital in the northeastern part of Thailand. The study was conducted during the 6-month period between April 1 and September 30, 2017. Participants were 20 pharmacists (17 hospital pharmacists and 3 pharmacy lecturers) engaging in three concept mapping workshops. In these workshops, the concept mapping technique created by Trochim, a highly structured qualitative group technique for generating and sharing ideas, was used to elicit and structure the participants' views on which triggers had the potential to detect ADEs. During the workshops, the participants (n = 20) were asked to individually rate the feasibility and potentiality of each trigger and to group the triggers into relevant categories to enable multidimensional scaling and hierarchical cluster analysis. The outputs of the analysis included the trigger list, cluster list, point map, point rating map, cluster map, and cluster rating map. The three workshops together resulted in 21 different triggers, structured in a framework forming five clusters: drug allergy, drug-induced diseases, dosage adjustment in renal disease, potassium-related concerns, and drug overdose.
The first cluster is drug allergy, with triggers such as a doctor's order for dexamethasone injection combined with chlorpheniramine injection. The diagnosis of drug-induced hepatitis in a patient taking anti-tuberculosis drugs is one trigger in the 'drug-induced diseases' cluster. For the third cluster, a doctor's order for enalapril combined with ibuprofen in a patient with chronic kidney disease is an example of a trigger. A doctor's order for digoxin in a patient with hypokalemia is a trigger in the 'potassium concerning' cluster, and a doctor's order for naloxone for a narcotic overdose is a trigger in the 'drug overdose' cluster. This study generated triggers that are similar to some in the IHI Global Trigger Tool, especially in the medication module, such as drug allergy and drug overdose. However, this tool has some specific aspects, including drug-induced diseases, dosage adjustment in renal diseases, and potassium concerning, which are not contained in existing trigger tools. The pharmacy-based trigger tool is suitable for hospital pharmacists to detect potential adverse drug events using triggers as clues.
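The multidimensional scaling and hierarchical clustering step described above can be sketched roughly as follows. The co-occurrence matrix here is randomly generated stand-in data, not the study's sorting results; only the 21-trigger and 5-cluster dimensions mirror the abstract.

```python
# Hedged sketch of the concept-mapping analysis: MDS on a trigger
# co-grouping distance matrix, then hierarchical clustering into 5 clusters.
import numpy as np
from sklearn.manifold import MDS
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import pdist

rng = np.random.default_rng(0)
# 21 triggers rated by 20 participants: hypothetical co-grouping counts
co_occurrence = rng.integers(0, 21, size=(21, 21)).astype(float)
co_occurrence = (co_occurrence + co_occurrence.T) / 2  # symmetrize
np.fill_diagonal(co_occurrence, 20)
distance = 20 - co_occurrence  # more co-grouping -> smaller distance

# 2-D "point map" via metric MDS on the precomputed distances
coords = MDS(n_components=2, dissimilarity="precomputed",
             random_state=0).fit_transform(distance)

# Hierarchical clustering of the point map into 5 clusters
clusters = fcluster(linkage(pdist(coords), method="ward"),
                    t=5, criterion="maxclust")
print(coords.shape, sorted(set(clusters.tolist())))
```

With real data, the co-occurrence matrix would come from how often participants sorted each pair of triggers into the same pile.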

Keywords: adverse drug events, concept mapping, hospital, pharmacy-based trigger tool

Procedia PDF Downloads 130
61 Profiling of Bacterial Communities Present in Feces, Milk, and Blood of Lactating Cows Using 16S rRNA Metagenomic Sequencing

Authors: Khethiwe Mtshali, Zamantungwa T. H. Khumalo, Stanford Kwenda, Ismail Arshad, Oriel M. M. Thekisoe

Abstract:

Ecologically, the gut, mammary glands and bloodstream consist of distinct microbial communities of commensals, mutualists and pathogens, forming a complex ecosystem of niches. The by-products derived from these body sites, i.e. faeces, milk and blood, respectively, have many uses in rural communities, where they aid in the facilitation of day-to-day household activities and occasional rituals. Thus, although livestock rearing plays a vital role in sustaining the livelihoods of rural communities, it may serve as a potent reservoir of different pathogenic organisms that could have devastating health and economic implications. This study aimed to simultaneously explore the microbial profiles of corresponding faecal, milk and blood samples from lactating cows using 16S rRNA metagenomic sequencing. Bacterial communities were inferred through the Divisive Amplicon Denoising Algorithm 2 (DADA2) pipeline coupled with the SILVA database v138. All downstream analyses were performed in R v3.6.1. Alpha-diversity metrics showed significant differences between faeces and blood and between faeces and milk, but did not vary significantly between blood and milk (Kruskal-Wallis, P < 0.05). Beta-diversity metrics on Principal Coordinate Analysis (PCoA) and Non-Metric Multidimensional Scaling (NMDS) clustered samples by type, suggesting that the microbial communities of the studied niches are significantly different (PERMANOVA, P < 0.05). A number of taxa were significantly differentially abundant (DA) between groups based on the Wald test implemented in the DESeq2 package (Padj < 0.01). The majority of the DA taxa were significantly more enriched in faeces than in milk and blood, except for the genus Anaplasma, which was significantly enriched in blood and was, in turn, the most abundant taxon overall. A total of 30 phyla, 74 classes, 156 orders, 243 families and 408 genera were obtained from the overall analysis.
The most abundant phyla across the three body sites were Firmicutes, Bacteroidota, and Proteobacteria. A total of 58 genus-level taxa were detected in all three sample groups, while bacterial signatures of at least 8 of these occurred concurrently in corresponding faeces, milk and blood samples from the same pool of animals. The important taxa identified in this study could be categorized into four potentially pathogenic clusters: i) arthropod-borne; ii) food-borne and zoonotic; iii) mastitogenic; and iv) metritic and abortigenic. This study provides insight into the microbial composition of bovine faeces, milk, and blood and the extent of their overlap. It further highlights the potential risk of disease occurrence and transmission between the animals and the inhabitants of the sampled rural community, given the unsanitary practices associated with the use of cattle by-products.
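The beta-diversity ordination step (PCoA on Bray-Curtis dissimilarities) can be illustrated with a minimal sketch. The count table below is hypothetical stand-in data, not the study's sequencing output, and the sample layout (3 faeces, 3 milk, 3 blood) is assumed purely for illustration.

```python
# Hedged sketch: Bray-Curtis distances between samples, then classical
# PCoA via eigendecomposition of the double-centred distance matrix.
import numpy as np
from scipy.spatial.distance import pdist, squareform

rng = np.random.default_rng(1)
# 9 hypothetical samples (3 faeces, 3 milk, 3 blood) x 50 taxa counts
counts = rng.poisson(5, size=(9, 50)).astype(float)

d = squareform(pdist(counts, metric="braycurtis"))  # pairwise dissimilarities

# Classical PCoA: B = -0.5 * J D^2 J, then take the top eigenvectors
n = d.shape[0]
j = np.eye(n) - np.ones((n, n)) / n       # centring matrix
b = -0.5 * j @ (d ** 2) @ j
eigvals, eigvecs = np.linalg.eigh(b)
order = np.argsort(eigvals)[::-1]          # largest eigenvalues first
coords = eigvecs[:, order[:2]] * np.sqrt(np.maximum(eigvals[order[:2]], 0))
print(coords.shape)  # 2-D ordination coordinates per sample
```

In the study itself this analysis was run in R; the sketch only shows the underlying computation.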

Keywords: microbial profiling, 16S rRNA, NGS, feces, milk, blood, lactating cows, small-scale farmers

Procedia PDF Downloads 85
60 Social Factors That Contribute to Promoting and Supporting Resilience in Children and Youth following Environmental Disasters: A Mixed Methods Approach

Authors: Caroline McDonald-Harker, Julie Drolet

Abstract:

In the last six years, Canada has experienced two major catastrophic environmental disasters: the 2013 Southern Alberta flood and the 2016 Fort McMurray, Alberta wildfire. These two disasters resulted in damages exceeding 12 billion dollars, making them the costliest disasters in Canadian history. In the aftermath of these disasters, many families faced the loss of homes, places of employment, schools, and recreational facilities, and also experienced social, emotional, and psychological difficulties. Children and youth are among the most vulnerable to the devastating effects of disasters due to the physical, cognitive, and social factors related to their developmental life stage. Yet children and youth also have the capacity to be resilient and to act as powerful catalysts for change in their own lives and wider communities following disaster. Little is known, particularly from a sociological perspective, about the specific factors that contribute to resilience in children and youth, and about effective ways to support their overall health and well-being. This paper focuses on the voices and experiences of children and youth residing in these two disaster-affected communities in Alberta, Canada and specifically examines: 1) How children and youth’s lives are impacted by the tragedy, devastation, and upheaval of disaster; 2) Ways that children and youth demonstrate resilience when directly faced with the adversarial circumstances of disaster; and 3) The cumulative internal and external factors that contribute to bolstering and supporting resilience among children and youth post-disaster. This paper discusses the characteristics associated with high levels of resilience in 183 children and youth ages 5 to 17, based on quantitative and qualitative data obtained through a mixed methods approach.
Child and youth participants were administered the Children and Youth Resilience Measure (CYRM-28) in order to examine factors that influence resilience processes, including individual, caregiver, and context factors. The CYRM-28 was then supplemented with qualitative interviews with children and youth to contextualize the CYRM-28 resiliency factors and provide further insight into their overall disaster experience. Findings reveal that high levels of resilience among child and youth participants are associated with both individual and caregiver factors, specifically positive outlook, effective communication, peer support, and physical and psychological caregiving. Individual and caregiver factors helped mitigate the negative effects of disaster, thus bolstering resilience in children and youth. This paper discusses the implications that these findings have for understanding the specific mechanisms that support the resiliency processes and overall recovery of children and youth following disaster; the importance of bridging the gap between children and youth’s needs and the services and supports provided to them post-disaster; and the need to develop resiliency processes and practices that empower children and youth as active agents of change in their own lives following disaster. These findings contribute to furthering knowledge about pragmatic and representative changes to resources, programs, and policies surrounding disaster response, recovery, and mitigation.

Keywords: children and youth, disaster, environment, resilience

Procedia PDF Downloads 98
59 A Multilingual App for Studying Children’s Developing Values: Developing a New Arabic Translation of the Picture-based Values Survey and Comparison of Palestinian and Jewish Children in Israel

Authors: Aysheh Maslamani, Ella Daniel, Anna Dӧring, Iyas Nasser, Ariel Knafo-Noam

Abstract:

Over 250 million people globally speak Arabic, one of the most widespread languages in the world, as their first language. Yet only a minuscule fraction of developmental research studies Middle Eastern children. As values are a core component of culture, understanding how values develop is key to understanding development across cultures. Indeed, with the advent of research on value development, particularly since the introduction of the Picture-Based Value Survey for Children (PBVS-C), interest in cross-cultural differences in children's values is increasing. As no such measure existed for Arab children, we developed an Arabic version of the PBVS-C. The online application version of the PBVS-C can be administered on a computer, tablet, or even a smartphone to measure the 10 values whose presence has been repeatedly demonstrated across the world. The application was developed simultaneously in Hebrew and Arabic and can easily be adapted to include additional languages. In this research, we describe the development of the multilingual PBVS-C application version adapted for five-year-olds. We discuss the translation process (including important decisions such as which dialect of Arabic, a diglossic language, is most suitable), adaptations for subgroups (e.g., Muslim, Druze and Christian Arab children), and the use of recorded instructions, value item captions, and touchscreens to enhance applicability with young children. Four hundred Palestinian and Israeli 5-12-year-old children reported their values using the app (50% in Arabic, 50% in Hebrew). Confirmatory multidimensional scaling (MDS) analyses revealed structural patterns that closely correspond to Schwartz's theoretical structure in both languages (e.g., universalism values correlated positively with benevolence and negatively with power, whereas tradition correlated negatively with hedonism and positively with conformity).
Replicating past findings, power values showed lower importance than benevolence values in both cultural groups, and there were gender differences, with girls scoring higher on self-transcendence values and lower on self-enhancement values than boys. Differences in cultural value importance were explored, revealing that Palestinian children score significantly higher on tradition and achievement values than Israeli children, whereas Israeli children score significantly higher on benevolence, hedonism, self-direction, and stimulation values. Age differences in value coherence across the two groups were also studied. Exploring these cultural differences opens a window onto the basic motivations driving populations that have hardly been studied before. This study contributes to developmental value research by considering the role of critical variables such as culture and religion and by testing value coherence across middle childhood. Findings will be discussed, along with the potential and limitations of the computerized PBVS-C with respect to future values research.
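The structural check described above (MDS recovering Schwartz's circular value structure from inter-value correlations) can be sketched as follows. The correlation matrix is hypothetical, chosen only to mimic the sign pattern reported in the abstract, and is not the study's data.

```python
# Hedged sketch: convert a hypothetical inter-value correlation matrix
# into dissimilarities and project it with nonmetric MDS, as is typical
# in Schwartz-style value-structure analyses.
import numpy as np
from sklearn.manifold import MDS

values = ["universalism", "benevolence", "power", "hedonism", "tradition"]
corr = np.array([  # hypothetical correlations, not study data
    [ 1.0,  0.5, -0.4, -0.1,  0.1],
    [ 0.5,  1.0, -0.5, -0.2,  0.2],
    [-0.4, -0.5,  1.0,  0.3, -0.3],
    [-0.1, -0.2,  0.3,  1.0, -0.5],
    [ 0.1,  0.2, -0.3, -0.5,  1.0],
])
dissimilarity = 1.0 - corr  # positively correlated values sit close together

mds = MDS(n_components=2, metric=False, dissimilarity="precomputed",
          random_state=0)
coords = mds.fit_transform(dissimilarity)
print(dict(zip(values, coords.round(2).tolist())))
```

In a real analysis the correlations would be computed from children's value ratings, with one matrix per language group.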

Keywords: Arab-children, culture, multilingual-application, value-development

Procedia PDF Downloads 62
58 Computational Team Dynamics and Interaction Patterns in New Product Development Teams

Authors: Shankaran Sitarama

Abstract:

New Product Development (NPD) is invariably a team effort and involves effective teamwork. An NPD team has members from different disciplines coming together and working through the different phases, all the way from the conceptual design phase to production and product roll-out. Creativity and innovation are key factors in successful NPD. Team members going through the different phases of NPD interact and work closely, yet challenge each other during the design phases to brainstorm ideas and later converge to work together. These two traits require teams to exercise divergent and convergent thinking simultaneously, and a good balance between the two is needed. The team dynamics invariably result in conflicts among team members. While some amount of conflict (ideational conflict) is desirable in NPD teams for group creativity, relational conflicts (discords among members) can be detrimental to teamwork. Team communication truly reflects these tensions and team dynamics. In this research, team communication (emails) between the members of NPD teams is considered for analysis. The email communication is processed through latent semantic analysis (LSA) to analyze the content of communication, and a semantic similarity analysis yields a social network graph that depicts the communication among team members based on the content of that communication. The amount of communication (content, not frequency of communication) defines the interaction strength between members. A social network adjacency matrix is thus obtained for the team. Standard social network analysis techniques based on the Adjacency Matrix (AM) and the Dichotomized Adjacency Matrix (DAM), derived from network density, yield network graphs and network metrics such as centrality.
The social network graphs are then rendered for visual representation using a Metric Multidimensional Scaling (MMDS) algorithm for node placement, and arcs connecting the nodes (representing team members) are drawn. The distance between nodes in the placement represents the tie strength between members: stronger tie strengths render nodes closer together. The overall visual representation of the social network graph provides a clear picture of the team’s interactions. This research reveals four distinct patterns of team interaction that are clearly identifiable in the visual representation of the social network graph and have a clearly defined computational scheme. The four computational patterns of team interaction defined are the Central Member Pattern (CMP), Subgroup and Aloof member Pattern (SAP), Isolate Member Pattern (IMP), and Pendant Member Pattern (PMP). Each of these patterns has a team dynamics implication in terms of the conflict level in the team. For instance, the isolate member pattern clearly points to a near breakdown in communication with the member, and hence a possibly high conflict level, whereas the subgroup or aloof member pattern points to non-uniform information flow in the team and a moderate level of conflict. These pattern classifications of teams are then compared and correlated to the real level of conflict in the teams, as indicated by the team members through an elaborate self-evaluation, team reflection, and feedback form; the results show a good correlation.
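The pipeline above (LSA on the email corpus, cosine similarity as tie strength, then a dichotomized adjacency matrix and centrality) can be sketched in a few lines. The emails, member names, and the mean-similarity cut-off for dichotomization are all illustrative assumptions, not the paper's actual data or thresholds.

```python
# Hedged sketch: LSA over hypothetical per-member email text, cosine
# similarity as the adjacency matrix (AM), density-based dichotomization
# (DAM), and degree centrality per member.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import TruncatedSVD
from sklearn.metrics.pairwise import cosine_similarity

emails = {  # hypothetical: one aggregated document per team member
    "ann": "design review gearbox tolerances prototype",
    "bob": "gearbox tolerances supplier quote prototype",
    "eve": "marketing launch date press release",
    "joe": "design prototype test rig results",
}
names = list(emails)
x = TfidfVectorizer().fit_transform(emails.values())
z = TruncatedSVD(n_components=2, random_state=0).fit_transform(x)  # LSA space

am = cosine_similarity(z)         # adjacency matrix of tie strengths
np.fill_diagonal(am, 0.0)         # no self-ties
cut = float(am.mean())            # assumed density-based cut-off
dam = (am > cut).astype(int)      # dichotomized adjacency matrix
centrality = dam.sum(axis=1)      # degree centrality per member
print(dict(zip(names, centrality.tolist())))
```

A member whose row in the DAM is all zeros would correspond to the isolate member pattern described above.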

Keywords: team dynamics, team communication, team interactions, social network analysis, sna, new product development, latent semantic analysis, LSA, NPD teams

Procedia PDF Downloads 45
57 Sand Production Modelled with Darcy Fluid Flow Using Discrete Element Method

Authors: M. N. Nwodo, Y. P. Cheng, N. H. Minh

Abstract:

In the process of recovering oil from weak sandstone formations, the strength of the sandstone around the wellbore is weakened by the increase in effective stress/load from the completion activities around the cavity. The weakened and de-bonded sandstone may then be eroded away by the produced fluid, which is termed sand production. It is one of the major trending subjects in the petroleum industry because of its significant negative impacts, as well as some observed positive impacts. For efficient sand management, therefore, a reliable study tool is needed to understand the mechanism of sanding. One method of studying sand production is the widely recognized Discrete Element Method (DEM), implemented in Particle Flow Code (PFC3D), which represents sand as individual granular elements bonded together at contact points. However, there is limited knowledge of the particle-scale behavior of weak sandstone and of the parameters that affect sanding. This paper aims to investigate the reliability of using PFC3D with a simple Darcy flow to understand the sand production behavior of a weak sandstone. An isotropic tri-axial test on a weak oil sandstone sample was first simulated at a confining stress of 1 MPa to calibrate and validate the parallel bond model of PFC3D, using a solid cylindrical model 10 m in height and 10 m in diameter. The effect of confining stress on the number of bond failures was studied using this cylindrical model. With the calibrated data and the sample material properties obtained from the tri-axial test, simulations without and with fluid flow were carried out to check the effect of Darcy flow on bond failures using the same model geometry. The fluid flow network comprised groups of four particles connected by tetrahedral flow pipes with a central pore or flow domain. Parametric studies included the effects of confining stress and fluid pressure, as well as validation of the flow rate-permeability relationship against Darcy’s law.
The effect of model size on sanding was also investigated using a model 4 m in height and 2 m in diameter. The parallel bond model successfully reproduced the sample’s strength of 4.4 MPa, showing a sharp peak strength before strain-softening, similar to the behavior of real cemented sandstones. The relationship between confining stress and bond failures appears exponentially increasing for the larger model, but curvilinear for the smaller model. The presence of the Darcy flow induced tensile forces and increased the number of broken bonds. In the parametric studies, flow rate showed a linear relationship with permeability at constant pressure head, and the higher the fluid flow pressure, the higher the number of broken bonds and hence sanding. The DEM code PFC3D is a promising tool for studying the micromechanical behavior of cemented sandstones.
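The Darcy-law check mentioned above, that flow rate scales linearly with permeability at constant pressure head, follows directly from Q = kA Δp / (μL). A minimal sketch with illustrative values (not the study's model parameters):

```python
# Hedged sketch of the flow rate - permeability relationship used to
# verify Darcy's law. All numbers are illustrative, not from the study.
def darcy_flow_rate(k, area, dp, mu, length):
    """Volumetric flow rate Q = k * A * dp / (mu * L) (Darcy's law)."""
    return k * area * dp / (mu * length)

area, dp, mu, length = 3.14, 1.0e6, 1.0e-3, 10.0  # m^2, Pa, Pa.s, m
for k in (1e-13, 2e-13, 4e-13):  # permeability in m^2
    print(k, darcy_flow_rate(k, area, dp, mu, length))
```

Doubling k doubles Q when the pressure head is held fixed, which is the linear relationship the parametric study confirms.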

Keywords: discrete element method, fluid flow, parametric study, sand production/bonds failure

Procedia PDF Downloads 295
56 A Five-Year Experience of Intensity Modulated Radiotherapy in Nasopharyngeal Carcinomas in Tunisia

Authors: Omar Nouri, Wafa Mnejja, Fatma Dhouib, Syrine Zouari, Wicem Siala, Ilhem Charfeddine, Afef Khanfir, Leila Farhat, Nejla Fourati, Jamel Daoud

Abstract:

Purpose and Objective: The intensity modulated radiotherapy (IMRT) technique, associated with induction chemotherapy (IC) and/or concomitant chemotherapy (CC), is currently the recommended treatment modality for nasopharyngeal carcinomas (NPC). The aim of this study was to evaluate the therapeutic results and the patterns of relapse with this treatment protocol. Material and methods: A retrospective monocentric study of 145 patients with NPC treated between June 2016 and July 2021. All patients received IMRT with a simultaneous integrated boost (SIB) of 33 daily fractions at a dose of 69.96 Gy for the high-risk volume, 60 Gy for the intermediate-risk volume and 54 Gy for the low-risk volume. The high-risk volume dose was 66.5 Gy in children. Survival analysis was performed according to the Kaplan-Meier method, and the log-rank test was used to compare factors that may influence survival. Results: Median age was 48 years (11-80) with a sex ratio of 2.9. One hundred twenty tumors (82.7%) were classified as stage III-IV according to the 2017 UICC TNM classification. Ten patients (6.9%) were metastatic at diagnosis. One hundred thirty-five patients (93.1%) received IC, 104 of which (77%) were TPF-based (taxanes, cisplatin and 5-fluorouracil). One hundred thirty-eight patients (95.2%) received CC, mostly cisplatin (134 cases, 97%). After a median follow-up of 50 months [22-82], 46 patients (31.7%) had a relapse: 12 (8.2%) experienced local and/or regional relapse after a median of 18 months [6-43], 29 (20%) experienced distant relapse after a median of 9 months [2-24] and 5 patients (3.4%) had both. Thirty-five patients (24.1%) died, including 5 (3.4%) from a cause other than their cancer. Three-year overall survival (OS), cancer-specific survival, disease-free survival, metastasis-free survival and loco-regional relapse-free survival were respectively 78.1%, 81.3%, 67.8%, 74.5% and 88.1%. Anatomo-clinical factors predicting OS were age > 50 years (88.7 vs. 70.5%; p=0.004), diabetes history (81.2 vs. 66.7%; p=0.027), UICC N classification (100 vs. 95 vs. 77.5 vs. 68.8% respectively for N0, N1, N2 and N3; p=0.008), the practice of a lymph node biopsy (84.2 vs. 57%; p=0.05), and UICC TNM stage III-IV (93.8 vs. 73.6% respectively for stages I-II vs. III-IV; p=0.044). Therapeutic factors predicting OS were the number of CC courses (fewer than 4 courses: 65.8 vs. 86%; p=0.03; fewer than 5 courses: 71.5 vs. 89%; p=0.041), a weight loss > 10% during treatment (84.1 vs. 60.9%; p=0.021) and a total cumulative cisplatin dose, including IC and CC, < 380 mg/m² (64.4 vs. 87.6%; p=0.003). Radiotherapy delay and total duration did not significantly affect OS. No grade 3-4 late side effects were noted in the 127 evaluable patients (87.6%). The most common toxicity was dry mouth, which was grade 2 in 47 cases (37%) and grade 1 in 55 cases (43.3%). Conclusion: IMRT for nasopharyngeal carcinoma achieved a high loco-regional control rate over the last five years. However, distant relapses remain frequent and determine the prognosis. We identified several anatomo-clinical and therapeutic prognostic factors. High-risk patients may therefore require a more aggressive therapeutic approach, such as radiotherapy dose escalation or the addition of adjuvant chemotherapy.
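The Kaplan-Meier survival analysis used above can be sketched with a hand-rolled product-limit estimator. The follow-up times and event flags below are hypothetical, not patient data from the study.

```python
# Hedged sketch of the Kaplan-Meier product-limit estimate on
# hypothetical (months, event) pairs; 1 = death, 0 = censored.
def kaplan_meier(times, events):
    """Return (time, survival) points where the KM curve steps down."""
    data = sorted(zip(times, events))
    at_risk, s, curve = len(data), 1.0, []
    i = 0
    while i < len(data):
        t = data[i][0]
        deaths = sum(1 for tt, e in data if tt == t and e)
        n_at_t = sum(1 for tt, _ in data if tt == t)
        if deaths:
            s *= 1 - deaths / at_risk   # product-limit update
            curve.append((t, s))
        at_risk -= n_at_t               # drop deaths and censored at t
        i += n_at_t
    return curve

# hypothetical months to event/censoring
times  = [6, 9, 12, 18, 24, 24, 36, 43, 50, 60]
events = [1, 1,  0,  1,  1,  0,  0,  1,  0,  0]
print(kaplan_meier(times, events))
```

In practice this would be computed per subgroup (e.g. by N classification), with the log-rank test comparing the resulting curves.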

Keywords: therapeutic results, prognostic factors, intensity-modulated radiotherapy, nasopharyngeal carcinoma

Procedia PDF Downloads 42
55 Clinical Application of Measurement of Eyeball Movement for Diagnose of Autism

Authors: Ippei Torii, Kaoruko Ohtani, Takahito Niwa, Naohiro Ishii

Abstract:

This paper describes the development of an objective index for diagnosing autism using measurements of subtle eyeball movement. Assessments of developmental disability vary, and the diagnosis depends on the subjective judgment of professionals. Therefore, a supplementary inspection method that enables anyone to obtain the same quantitative judgment is needed. In conventional studies of autism, the diagnosis is based on a comparison of the time spent gazing at an object, but the results do not match. First, we divided the pupil into four parts from the center using measurements of subtle eyeball movement and compared the number of pixels in the parts overlapping a reference afterimage. We then developed an objective evaluation indicator that distinguishes non-autistic and autistic people more clearly than conventional methods by analyzing the differences in subtle eyeball movements between the right and left eyes. Even when a person gazes at one point and his/her eyeballs stay fixed on that point, the eyes perform subtle fixational movements (i.e., tremors, drift, microsaccades) to keep the retinal image clear. In particular, microsaccades are linked to the nervous system and reflect the mechanisms that process sight in the brain. We converted the differences between these movements into numbers. The process of the conversion is as follows: 1) Select the pixels indicating the subject's pupil from images of captured frames. 2) Set up a reference image, known as an afterimage, from the pixels indicating the subject's pupil. 3) Divide the subject's pupil into four parts from the center in the acquired frame image. 4) Select the pixels in each divided part and count the number of pixels overlapping the present pixels, based on the afterimage. 5) Process the images with precision at 24-30 fps from a camera and convert the amount of change in the pixels of the subtle movements of the right and left eyeballs into numbers.
The difference in the area of the amount of change is obtained by measuring the difference between the afterimage in consecutive frames and the present frame. We set this amount of change as the quantity of subtle eyeball movement. This method made it possible to detect changes in eyeball vibration as numerical values. By comparing the numerical values between the right and left eyes, we found that there is a difference in how much they move. We compared this difference in movement between non-autistic and autistic people and analyzed the results. Our research subjects consist of 8 children and 10 adults with autism, and 6 children and 18 adults with no disability. We measured the values during pursuit movements and fixations. We converted the difference in subtle movements between the right and left eyes into a graph and defined it as a multidimensional measure. We then set the identification border using the density function of the distribution, the cumulative frequency function, and an ROC curve. With this, we established an objective index to classify autism, normal, false positive, and false negative cases.
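Steps 3 and 4 of the conversion process above (split the pupil into four quadrants and count pixels overlapping the afterimage) can be sketched as follows. The binary masks here are random stand-ins for real pupil segmentations.

```python
# Hedged sketch: split a binary pupil mask into four quadrants and count
# pixels that overlap with the afterimage mask in each quadrant.
import numpy as np

def quadrant_overlap(frame, afterimage):
    """Count overlapping 'pupil' pixels per quadrant (NW, NE, SW, SE)."""
    h, w = frame.shape
    counts = []
    for rows in (slice(0, h // 2), slice(h // 2, h)):
        for cols in (slice(0, w // 2), slice(w // 2, w)):
            counts.append(int(np.sum(frame[rows, cols] & afterimage[rows, cols])))
    return counts

rng = np.random.default_rng(2)
afterimage = rng.integers(0, 2, size=(8, 8))  # hypothetical reference mask
frame = rng.integers(0, 2, size=(8, 8))       # hypothetical current frame
print(quadrant_overlap(frame, afterimage))
```

Tracking how these per-quadrant counts change from frame to frame, separately for each eye, gives the numerical movement signal the study compares between the right and left eyes.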

Keywords: subtle eyeball movement, autism, microsaccade, pursuit eye movements, ROC curve

Procedia PDF Downloads 258
54 Electrical Decomposition of Time Series of Power Consumption

Authors: Noura Al Akkari, Aurélie Foucquier, Sylvain Lespinats

Abstract:

Load monitoring is a process for managing energy consumption with the goals of energy savings and energy efficiency. Non-Intrusive Load Monitoring (NILM) is one load monitoring method used for disaggregation purposes. NILM is a technique for identifying individual appliances based on the analysis of whole-residence data retrieved from the main power meter of the house. Our NILM framework starts with data acquisition, followed by data preprocessing, then event detection and feature extraction, and finally general appliance modeling and identification. The event detection stage is a core component of the NILM process, since event detection techniques lead to the extraction of appliance features, which are required for the accurate identification of household devices. In this research work, we aim to develop a new event detection methodology with accurate load disaggregation to extract appliance features. The extracted time-domain features are used for tuning general appliance models in the appliance identification and classification steps. We use unsupervised algorithms such as Dynamic Time Warping (DTW). The proposed method relies on detecting the areas of operation of each residential appliance based on power demand, and then detecting the times at which each selected appliance changes state. In order to fit the capabilities of existing smart meters, we work on low-sampling-rate data with a frequency of (1/60) Hz. The data is simulated with the Load Profile Generator (LPG) software, which had not previously been considered for NILM purposes in the literature. LPG is a numerical tool that simulates the behaviour of people inside the house to generate residential energy consumption data. The proposed event detection method targets low-consumption loads that are difficult to detect. It also facilitates the extraction of specific features used for general appliance modeling.
In addition, the identification process includes unsupervised techniques such as DTW. To the best of our knowledge, few unsupervised techniques have been employed with low-sampling-rate data, in contrast to the many supervised techniques used for such cases. We extract the power interval within which the selected appliance operates, along with a time vector of the values delimiting the appliance's state transitions. Appliance signatures are then formed from the extracted power, geometrical and statistical features, and these signatures are used to tune general model types for appliance identification using unsupervised algorithms. The method is evaluated using both simulated data from LPG and the real-world Reference Energy Disaggregation Dataset (REDD). For that, we compute confusion-matrix-based performance metrics, considering accuracy, precision, recall and error rate. The performance of our methodology is then compared with other detection techniques previously used in the literature, such as techniques based on statistical variations and abrupt changes (variance sliding window and cumulative sum).
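The DTW distance used for matching appliance signatures can be sketched with the classic dynamic-programming recurrence. The two sequences below are hypothetical 1/60 Hz power readings in watts, not data from LPG or REDD.

```python
# Hedged sketch of the dynamic-time-warping distance between an appliance
# signature template and an observed power window (hypothetical values).
def dtw_distance(a, b):
    """Classic O(len(a) * len(b)) DTW distance with absolute-difference cost."""
    inf = float("inf")
    n, m = len(a), len(b)
    d = [[inf] * (m + 1) for _ in range(n + 1)]
    d[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            # extend the cheapest of the three admissible warping steps
            d[i][j] = cost + min(d[i - 1][j], d[i][j - 1], d[i - 1][j - 1])
    return d[n][m]

fridge_sig = [0, 120, 125, 122, 0]      # hypothetical signature template
observed   = [0, 0, 118, 124, 121, 0]   # hypothetical measured window
print(dtw_distance(fridge_sig, observed))
```

Because DTW aligns sequences of different lengths, it tolerates the timing jitter that fixed-lag comparisons would penalize, which is why it suits low-sampling-rate event matching.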

Keywords: electrical disaggregation, DTW, general appliance modeling, event detection

Procedia PDF Downloads 49
53 Construction Port Requirements for Floating Wind Turbines

Authors: Alan Crowle, Philpp Thies

Abstract:

As the floating offshore wind turbine industry continues to develop and grow, the capabilities of established port facilities need to be assessed for their ability to support the expanding construction and installation requirements. This paper assesses current infrastructure requirements and the projected changes to port facilities that may be required to support the floating offshore wind industry. Understanding the infrastructure needs of the floating offshore renewables industry will help to identify the port-related requirements. Floating offshore wind turbines can be installed further out to sea and in deeper waters than traditional fixed offshore wind arrays, meaning they can take advantage of stronger winds. Separate ports are required for substructure construction, fit-out of the turbines, moorings, subsea cables and maintenance. Large areas are required for the laydown of mooring equipment, inter-array cables, turbine blades and nacelles. The capability of established port facilities to support floating wind farms is assessed by evaluating the size of the substructures, the height of the wind turbine relative to the cranes for fitting the blades, the distance to the offshore site and the characteristics of offshore installation vessels. The paper discusses the advantages and disadvantages of using large land-based cranes, inshore floating crane vessels or offshore crane vessels at the fit-out port for the installation of the turbine. Water depth requirements for the import of materials and the export of the completed structures are considered. There are additional costs associated with any emerging technology; however, part of the popularity of floating offshore wind turbines stems from the cost savings compared with permanent structures such as fixed wind turbines.
Floating offshore wind turbine developers can benefit from lighter, more cost-effective equipment which can be assembled in port and towed to the site, rather than relying on large, expensive installation vessels to transport and erect fixed-bottom turbines. The ability to assemble floating offshore wind turbine equipment onshore minimizes highly weather-dependent operations such as offshore heavy lifts and assembly, saving time and costs and reducing safety risks for offshore workers. Maintenance of barges and semi-submersibles can take place in safer onshore conditions. Offshore renewables, such as floating wind, can take advantage of the offshore oil and gas industry's wealth of experience, while oil and gas operators can deploy this experience as they enter the renewables space. The floating offshore wind industry is in the early stages of development, and port facilities are required for substructure fabrication, turbine manufacture, turbine construction and maintenance support. The paper discusses the potential floating wind substructures, as this provides a snapshot of the requirements at the present time, along with the technological developments required for commercial deployment. Scaling effects of demonstration-scale projects will be addressed; however, the primary focus will be on commercial-scale (30+ unit) floating wind energy farms.

Keywords: floating wind, port, marine construction, offshore renewables

Procedia PDF Downloads 256