Search results for: thermal stress response
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 11793

1353 Synthesis of LiMₓMn₂₋ₓO₄ Doped Co, Ni, Cr and Its Characterization as Lithium Battery Cathode

Authors: Dyah Purwaningsih, Roto Roto, Hari Sutrisno

Abstract:

Manganese dioxide (MnO₂) and its derivatives are among the most widely used materials for the positive electrode in both primary and rechargeable lithium batteries. The MnO₂ derivative compound LiMₓMn₂₋ₓO₄ (M: Co, Ni, Cr) is one of the leading candidates for positive electrode materials in lithium batteries as it is abundant, low cost and environmentally friendly. Over the years, synthesis of LiMₓMn₂₋ₓO₄ (M: Co, Ni, Cr) has been carried out using various methods including sol-gel, gas condensation, spray pyrolysis, and ceramics. Problems persist with these methods, including high cost (making them commercially inapplicable) and the need for high processing temperatures (environmentally unfriendly). This research aims to: (1) synthesize LiMₓMn₂₋ₓO₄ (M: Co, Ni, Cr) by the reflux technique; (2) develop a microstructure analysis method for LiMₓMn₂₋ₓO₄ XRD powder data using the two-stage method; (3) study the electrical conductivity of LiMₓMn₂₋ₓO₄. The starting materials, Mn(CH₃COO)₂·4H₂O and Na₂S₂O₈, were refluxed for 10 hours at 120°C to form β-MnO₂. The doping of Co, Ni and Cr was carried out by the solid-state method with LiOH to form LiMₓMn₂₋ₓO₄. The instruments used included XRD, SEM-EDX, XPS, TEM, SAA, TG/DTA, FTIR, an LCR meter and an eight-channel battery analyzer. Microstructure analysis of LiMₓMn₂₋ₓO₄ was carried out on XRD powder data by the two-stage method, using the FullProf program integrated into WinPlotR and the Oscail program, as well as on binding energy data from XPS. The morphology of LiMₓMn₂₋ₓO₄ was studied with SEM-EDX, TEM, and SAA. The thermal stability was tested with TG/DTA, and the electrical conductivity was studied from the LCR meter data. The specific capacity of LiMₓMn₂₋ₓO₄ as a lithium battery cathode was tested using the eight-channel battery analyzer. The results showed that the synthesis of LiMₓMn₂₋ₓO₄ (M: Co, Ni, Cr) by reflux was successful. 
The optimal calcination temperature is 750°C. XRD characterization shows that LiMn₂O₄ has a cubic crystal structure with the Fd3m space group. CheckCell analysis in WinPlotR shows that increasing the Li/Mn mole ratio does not change the LiMn₂O₄ crystal structure. Doping with Co, Ni and Cr in LiMₓMn₂₋ₓO₄ (x = 0.02; 0.04; 0.06; 0.08; 0.10) likewise does not change the cubic Fd3m crystal structure. All the formed crystals are polycrystals with sizes of 100-450 nm. Characterization of the LiMₓMn₂₋ₓO₄ (M: Co, Ni, Cr) microstructure by the two-stage method shows shrinkage of the lattice parameter and cell volume. Based on its capacitance range, the conductivity of LiMₓMn₂₋ₓO₄ (M: Co, Ni, Cr) is ionic, with varying capacitance. The specific battery capacities at a voltage of 4799.7 mV for LiMn₂O₄, Li₁.₀₈Mn₁.₉₂O₄, LiCo₀.₁Mn₁.₉O₄, LiNi₀.₁Mn₁.₉O₄ and LiCr₀.₁Mn₁.₉O₄ are 88.62 mAh/g, 2.73 mAh/g, 89.39 mAh/g, 85.15 mAh/g, and 1.48 mAh/g respectively.

Keywords: LiMₓMn₂₋ₓO₄, solid-state, reflux, two-stage method, ionic conductivity, specific capacity

Procedia PDF Downloads 193
1352 Digital Transformation: Actionable Insights to Optimize the Building Performance

Authors: Jovian Cheung, Thomas Kwok, Victor Wong

Abstract:

Buildings are entwined with smart city developments. Building performance relies heavily on electrical and mechanical (E&M) systems and services, which account for about 40 percent of global energy use. By bringing technological advances together with energy- and operation-efficiency initiatives in buildings, people can raise building performance and enhance the sustainability of the built environment in their daily lives. Digital transformation in buildings is the profound development of the city to leverage the changes and opportunities of digital technologies. To optimize building performance, an intelligent power quality and energy management system is developed to transform data into actions. The system is formed by interfacing and integrating legacy metering and internet of things technologies in the building and applying big data techniques. It provides the operation and energy profile of a building and actionable insights, which enable building performance to be optimized by raising people's awareness of E&M services and energy consumption, predicting the operation of E&M systems, benchmarking building performance, and prioritizing assets and energy management opportunities. The intelligent power quality and energy management system comprises four elements, namely the Integrated Building Performance Map, Building Performance Dashboard, Power Quality Analysis, and Energy Performance Analysis. It provides predictive operation sequences of E&M systems in response to the built environment and building activities. The system collects the live operating conditions of E&M systems over time to identify abnormal system performance, predict failure trends and alert users before system failure occurs. The actionable insights collected can also be used for future system design enhancement. 
This paper will illustrate how the intelligent power quality and energy management system provides an operation and energy profile to optimize building performance, and actionable insights to revitalize an existing building into a smart building. The system is driving building performance optimization and supporting the development of Hong Kong into a smart city.
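The core loop described above (collect live operating conditions, flag abnormal system performance, alert before failure) can be sketched with a simple rolling statistic. This is an illustrative stand-in, not the system described in the paper: the window length, threshold, and synthetic kW readings are all assumptions.

```python
import numpy as np

def rolling_anomalies(readings, window=48, threshold=4.0):
    """Flag indices whose reading deviates more than `threshold` standard
    deviations from the trailing `window` of history (hypothetical rule)."""
    flags = []
    for i in range(window, len(readings)):
        history = readings[i - window:i]
        mu, sigma = history.mean(), history.std()
        if sigma > 0 and abs(readings[i] - mu) > threshold * sigma:
            flags.append(i)
    return flags

# synthetic half-hourly kW readings with one injected fault
rng = np.random.default_rng(0)
load = rng.normal(100.0, 2.0, 200)
load[150] = 130.0                     # abnormal spike
print(rolling_anomalies(load))        # the injected fault at index 150 is flagged
```

A production system would replace the plain z-score with a model-based expected profile (weather- and occupancy-adjusted), but the alert logic is of this shape.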

Keywords: intelligent buildings, internet of things technologies, big data analytics, predictive operation and maintenance, building performance

Procedia PDF Downloads 157
1351 The Effects of Stokes' Drag, Electrostatic Force and Charge on Penetration of Nanoparticles through N95 Respirators

Authors: Jacob Schwartz, Maxim Durach, Aniruddha Mitra, Abbas Rashidi, Glen Sage, Atin Adhikari

Abstract:

NIOSH (National Institute for Occupational Safety and Health) approved N95 respirators are commonly used by workers in construction sites where a large amount of dust, both electrostatically charged and not, is produced by sawing, grinding, blasting, welding, etc. A significant portion of airborne particles in construction sites could be nanoparticles generated alongside coarse particles. Penetration of the particles through the masks may differ depending on the size and charge of the individual particle. In field experiments relevant to this study, we found that nanoparticles of medium size ranges penetrate more frequently than nanoparticles of smaller and larger sizes. For example, penetration percentages of 11.5 – 27.4 nm nanoparticles into a sealed N95 respirator on a manikin head ranged from 0.59 to 6.59%, whereas those of 36.5 – 86.6 nm nanoparticles ranged from 7.34 to 16.04%. The possible causes behind this increased penetration of mid-size nanoparticles through mask filters are not yet explored. The objective of this study is to identify the causes behind this unusual behavior of mid-size nanoparticles. We have considered such physical factors as the Boltzmann distribution of the particles in thermal equilibrium with the air, the kinetic energy of the particles at impact on the mask, Stokes' drag force, and the electrostatic forces in the mask stopping the particles. When the particles collide with the mask, only those that have enough kinetic energy to overcome the energy loss due to the electrostatic forces and Stokes' drag in the mask can pass through. 
To understand this process, the following assumptions were made: (1) the effect of Stokes' drag depends on the particles' velocity at entry into the mask; (2) the electrostatic force is proportional to the charge on the particles, which in turn is proportional to their surface area; (3) the general dependence on electrostatic charge and thickness means that the stronger the electrostatic resistance in the mask and the thicker the mask's fiber layers, the lower the particle penetration, which is a sensible conclusion. In sampling situations where one mask was soaked in alcohol, eliminating electrostatic interaction, penetration in the mid-range was much larger than for the same mask with electrostatic interaction. The smaller nanoparticles showed almost zero penetration, most likely because of their small kinetic energy, while the larger nanoparticles showed almost negligible penetration, most likely due to the interaction of each particle with its own drag force. If there is no electrostatic force, the penetrating fraction for larger particles grows; if the electrostatic force is added, that fraction goes down, so the diminished penetration for larger particles should be due to increased electrostatic repulsion, possibly because their larger surface area carries a larger charge on average. We have also explored the effect of ambient temperature on nanoparticle penetration and determined that the dependence of penetration on temperature is weak over the measured range of 37-42°C, since the relevant factor changes only from 3.17×10⁻³ K⁻¹ to 3.22×10⁻³ K⁻¹.
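The quoted factor values match the inverse absolute temperature 1/T over the stated range, so the claim of weak temperature dependence follows from a two-line computation (reading the factor as 1/T is our assumption):

```python
def inverse_temperature_factor(t_celsius):
    """1/T in K^-1, assuming the quoted factor is the inverse absolute temperature."""
    return 1.0 / (t_celsius + 273.15)

f_hot = inverse_temperature_factor(42.0)   # ~3.17e-3 K^-1
f_cold = inverse_temperature_factor(37.0)  # ~3.22e-3 K^-1
print(f"factor range: {f_hot:.3e} to {f_cold:.3e} K^-1, "
      f"relative change {(f_cold - f_hot) / f_cold:.1%}")
```

Over 37-42°C the factor varies by under 2%, consistent with the weak dependence reported.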

Keywords: respiratory protection, industrial hygiene, aerosol, electrostatic force

Procedia PDF Downloads 194
1350 Measurement of Magnetic Properties of Grain-Oriented Electrical Steels at Low and High Fields Using a Novel Single Sheet Tester

Authors: Nkwachukwu Chukwuchekwa, Joy Ulumma Chukwuchekwa

Abstract:

Magnetic characteristics of grain-oriented electrical steel (GOES) are usually measured at high flux densities suitable for its typical applications in power transformers. There are limited magnetic data at low flux densities, which are relevant for the characterization of GOES for applications in metering instrument transformers and low-frequency magnetic shielding in magnetic resonance imaging medical scanners. Magnetic properties such as coercivity, B-H loop, AC relative permeability and specific power loss of conventional grain-oriented (CGO) and high-permeability grain-oriented (HGO) electrical steels were measured and compared at high and low flux densities at power magnetising frequency. 40 strips, comprising 20 CGO and 20 HGO, each 305 mm x 30 mm x 0.27 mm, from a supplier were tested. The HGO and CGO strips had average grain sizes of 9 mm and 4 mm respectively. Each strip was singly magnetised under sinusoidal peak flux density from 8.0 mT to 1.5 T at a magnetising frequency of 50 Hz. The novel single sheet tester comprises a personal computer running LabVIEW version 8.5 from National Instruments (NI), an NI 4461 data acquisition (DAQ) card, an impedance matching transformer to match the 600 Ω minimum load impedance of the DAQ card with the 5 to 20 Ω low impedance of the magnetising circuit, and a 4.7 Ω shunt resistor. A double vertical yoke made of GOES, 290 mm long and 32 mm wide, is used. A 500-turn secondary winding, about 80 mm in length, was wound around a plastic former, 270 mm x 40 mm, housing the sample, while a 100-turn primary winding, covering the entire length of the plastic former, was wound over the secondary winding. A standard Epstein strip to be tested is placed between the yokes. The magnetising voltage was generated by the LabVIEW program through a voltage output from the DAQ card. 
The voltage drop across the shunt resistor and the secondary voltage were acquired by the card for calculation of magnetic field strength and flux density respectively. A feedback control system implemented in LabVIEW was used to control the flux density and to make the induced secondary voltage waveforms sinusoidal, so that measurements are repeatable and comparable. The low-noise NI 4461 card, with 24-bit resolution, a 204.8 kHz sampling rate and a 92 kHz bandwidth, was chosen to minimize the influence of thermal noise. In order to reduce environmental noise, the yokes, sample and search coil carrier were placed in a noise shielding chamber. HGO was found to have better magnetic properties in both the high and low magnetisation regimes. This is attributed to the larger grain size of HGO and the higher grain-to-grain misorientation of CGO. HGO is therefore better than CGO in both low and high magnetic field applications.
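The acquisition step above computes H from the shunt voltage drop and B from the integrated secondary voltage. A minimal numpy sketch of those standard single-sheet-tester relations follows; the turn counts and shunt value come from the text, while the magnetic path length is a hypothetical placeholder and the 50 Hz signal is synthetic.

```python
import numpy as np

N1, N2 = 100, 500        # primary / secondary turns (from the text)
R_SHUNT = 4.7            # shunt resistance, ohms (from the text)
PATH_LENGTH = 0.29       # magnetic path length, m (assumed from the yoke length)
AREA = 30e-3 * 0.27e-3   # strip cross-section, m^2 (30 mm x 0.27 mm)

def field_strength(v_shunt):
    """H(t) = N1 * i(t) / l, with the primary current i(t) from the shunt drop."""
    return N1 * (v_shunt / R_SHUNT) / PATH_LENGTH

def flux_density(v_secondary, fs):
    """B(t) = (1 / (N2 * A)) * integral of the induced secondary voltage."""
    b = np.cumsum(v_secondary) / fs / (N2 * AREA)
    return b - b.mean()  # remove the integration offset

# synthetic 50 Hz secondary voltage sampled at the card's 204.8 kHz rate
fs = 204_800
t = np.arange(0, 0.04, 1 / fs)  # two mains cycles
b = flux_density(np.cos(2 * np.pi * 50 * t), fs)
print(f"peak flux density: {b.max():.3f} T")
```

In the real rig the feedback loop adjusts the magnetising voltage until the B waveform computed this way is sinusoidal at the target peak value.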

Keywords: flux density, electrical steel, LabVIEW, magnetization

Procedia PDF Downloads 291
1349 The Selectivities of Pharmaceutical Spending Containment: Social Profit, Incentivization Games and State Power

Authors: Ben Main Piotr Ozieranski

Abstract:

State government spending on pharmaceuticals stands at 1 trillion USD globally, prompting criticism of the pharmaceutical industry's monetization of drug efficacy, product cost overvaluation, and health injustice. This paper elucidates the mechanisms behind a state-institutional response to this problem through the sociological lens of the strategic relational approach to state power. To do so, 30 expert interviews and legal and policy documents are drawn on to explain how state elites in New Zealand have successfully contested a 30-year "pharmaceutical spending containment policy". Proceeding from Jessop's notion of strategic "selectivity", encompassing analyses of the enabling features of state actors' ability to harness state structures, a theoretical explanation is advanced. First, a strategic context is described that consists of dynamics around pharmaceutical dealmaking between the state bureaucracy, pharmaceutical pricing strategies (and their effects), and the industry. Centrally, the pricing strategy of "bundling" (deals for packages of drugs that combine older and newer patented products) reflects how state managers have instigated an "incentivization game" that is played by state and industry actors, including HTA professionals, over pharmaceutical products both current and in development. Second, a protective context is described that comprises successive legislative-judicial responses to the strategic context and is characterized by the regulation and societalisation of commercial law. Third, within the policy, the achievement of increased pharmaceutical coverage (pharmaceutical "mix") alongside contained spending is conceptualized as a state defence of a "social profit". As such, in contrast to scholarly expectations that political and economic cultures of neo-liberalism drive pharmaceutical policy-making processes, New Zealand's state elites' approach is shown to be antipathetic to neo-liberalism within an overall capitalist economy. 
The paper contributes an analysis of state pricing strategies and how they are embedded in state regulatory structures. Additionally, through an analysis of the interconnections of state power and pharmaceutical value, Abraham's neo-liberal corporate bias model for pharmaceutical policy analysis is problematised.

Keywords: pharmaceutical governance, pharmaceutical bureaucracy, pricing strategies, state power, value theory

Procedia PDF Downloads 70
1348 Data-Driven Surrogate Models for Damage Prediction of Steel Liquid Storage Tanks under Seismic Hazard

Authors: Laura Micheli, Majd Hijazi, Mahmoud Faytarouni

Abstract:

The damage reported by oil and gas industrial facilities revealed the utmost vulnerability of steel liquid storage tanks to seismic events. The failure of steel storage tanks may yield devastating and long-lasting consequences for built and natural environments, including the release of hazardous substances, uncontrolled fires, and soil contamination with hazardous materials. It is, therefore, fundamental to reliably predict the damage that steel liquid storage tanks will likely experience under future seismic hazard events. The seismic performance of steel liquid storage tanks is usually assessed using vulnerability curves obtained from the numerical simulation of a tank under different hazard scenarios. However, the computational demand of high-fidelity numerical simulation models, such as finite element models, makes the vulnerability assessment of liquid storage tanks time-consuming and often impractical. As a solution, this paper presents a surrogate model-based strategy for predicting seismic-induced damage in steel liquid storage tanks. In the proposed strategy, the surrogate model is leveraged to reduce the computational demand of time-consuming numerical simulations. To create the data set for training the surrogate model, field damage data from past earthquake reconnaissance surveys and reports are collected. Features representative of steel liquid storage tank characteristics (e.g., diameter, height, liquid level, yielding stress) and seismic excitation parameters (e.g., peak ground acceleration, magnitude) are extracted from the field damage data. The collected data are then utilized to train a data-driven surrogate model that maps the relationship between tank characteristics, seismic hazard parameters, and seismic-induced damage. Different types of surrogate algorithms, including naïve Bayes, k-nearest neighbors, decision tree, and random forest, are investigated, and results in terms of accuracy are reported. 
The model that yields the most accurate predictions is employed to predict future damage as a function of tank characteristics and seismic hazard intensity level. Results show that the proposed approach can be used to estimate the extent of damage in steel liquid storage tanks, where the use of data-driven surrogates represents a viable alternative to computationally expensive numerical simulation models.
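As a sketch of the model-comparison step, the four algorithm families named above can be benchmarked with cross-validated accuracy. The synthetic data below merely stands in for the field-damage features (tank geometry, liquid level, PGA, magnitude) and damage-state labels; it is not the paper's data set.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier

# Stand-in for tank characteristics and hazard parameters mapped to one of
# three damage states (features and labels are synthetic).
X, y = make_classification(n_samples=400, n_features=6, n_informative=4,
                           n_classes=3, n_clusters_per_class=1, random_state=0)

models = {
    "naive Bayes": GaussianNB(),
    "k-NN": KNeighborsClassifier(n_neighbors=5),
    "decision tree": DecisionTreeClassifier(random_state=0),
    "random forest": RandomForestClassifier(n_estimators=100, random_state=0),
}
for name, model in models.items():
    acc = cross_val_score(model, X, y, cv=5).mean()
    print(f"{name:>13}: {acc:.3f} cross-validated accuracy")
```

The best-scoring model would then be refit on all available data and queried for new tank/hazard combinations.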

Keywords: damage prediction, data-driven model, seismic performance, steel liquid storage tanks, surrogate model

Procedia PDF Downloads 143
1347 A Single-Channel BSS-Based Method for Structural Health Monitoring of Civil Infrastructure under Environmental Variations

Authors: Yanjie Zhu, André Jesus, Irwanda Laory

Abstract:

Structural Health Monitoring (SHM), involving data acquisition, data interpretation and decision-making systems, aims to continuously monitor the structural performance of civil infrastructure under various in-service circumstances. The main value and purpose of SHM lie in identifying damage through the data interpretation system. Research on SHM has expanded in recent decades, and a large volume of data is recorded every day owing to the dramatic development of sensor techniques and progress in signal processing. However, efficient and reliable data interpretation for damage detection under environmental variations is still a big challenge: structural damage might be masked because variations in measured data can be the result of environmental variations. This research reports a novel method based on single-channel Blind Signal Separation (BSS), which extracts environmental effects from measured data directly, without any prior knowledge of the structural loading and environmental conditions. Despite successful applications in audio processing and biomedical research, BSS has never been used to detect damage under varying environmental conditions. The proposed method optimizes and combines Ensemble Empirical Mode Decomposition (EEMD), Principal Component Analysis (PCA) and Independent Component Analysis (ICA) to separate the structural responses due to different loading conditions from a single-channel input signal; ICA is applied to the dimension-reduced output of EEMD. Numerical simulation of a truss bridge, inspired by the New Joban Line Arakawa Railway Bridge, is used to validate the method. All results demonstrate that the single-channel BSS-based method can recover temperature effects from the mixed structural response recorded by a single sensor with convincing accuracy. This will be the foundation of further research on direct damage detection under varying environments.
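The PCA-to-ICA stage of such a pipeline can be sketched on synthetic data. To keep the example dependency-light, the EEMD step is replaced here by a crude moving-average multiresolution expansion of the single channel; a faithful implementation would use EEMD (e.g. via the PyEMD package), and the signals and window lengths below are illustrative.

```python
import numpy as np
from sklearn.decomposition import PCA, FastICA

rng = np.random.default_rng(0)
t = np.linspace(0, 10, 4000)
temperature_effect = np.sin(2 * np.pi * 0.1 * t)   # slow environmental drift
structural_response = np.sin(2 * np.pi * 5.0 * t)  # faster structural component
x = temperature_effect + structural_response + 0.05 * rng.standard_normal(t.size)

def multiresolution(signal, windows=(5, 25, 125, 625)):
    """Expand one channel into band-limited channels (crude EEMD stand-in)."""
    smoothed = [signal]
    for w in windows:
        smoothed.append(np.convolve(signal, np.ones(w) / w, mode="same"))
    # differences between successive smoothing levels mimic intrinsic mode functions
    return np.column_stack([a - b for a, b in zip(smoothed, smoothed[1:])]
                           + [smoothed[-1]])

channels = multiresolution(x)
reduced = PCA(n_components=2).fit_transform(channels)          # dimension reduction
sources = FastICA(n_components=2, random_state=0).fit_transform(reduced)
# one recovered source should track the slow temperature effect
corr = max(abs(np.corrcoef(s, temperature_effect)[0, 1]) for s in sources.T)
print(f"best correlation with temperature effect: {corr:.2f}")
```

Once the environmental component is isolated this way, it can be subtracted from the measured response before any damage-sensitive feature is computed.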

Keywords: damage detection, ensemble empirical mode decomposition (EEMD), environmental variations, independent component analysis (ICA), principal component analysis (PCA), structural health monitoring (SHM)

Procedia PDF Downloads 304
1346 User Expectations and Opinions Related to Campus Wayfinding and Signage Design: A Case Study of Kastamonu University

Authors: Güllü Yakar, Adnan Tepecik

Abstract:

A university campus resembles an independent city spread over a wide area. Campuses that take in thousands of new domestic and international users at the beginning of every academic period also host scientific, cultural and sportive events, in addition to serving regular users such as students and staff. Wayfinding and signage systems are necessary for the regulation of vehicular traffic, and they enable users to navigate without losing time or feeling anxiety. While designing such a system or testing its functionality, the opinions of existing users or the likely behaviors of typical user profiles (personas) provide designers with insight. The purpose of this study is to identify the wayfinding attitudes and expectations of Kastamonu University Kuzeykent Campus users. The study applies a mixed method in which a questionnaire developed by the researcher constitutes the quantitative phase. The survey was carried out with 850 participants who completed a questionnaire form whose construct validity was tested using Exploratory Factor Analysis. While interpreting the data obtained, chi-square, t-test and ANOVA analyses were applied, as well as descriptive analyses such as frequency (f) and percentage (%) values. The results of this survey, which was conducted before systematic wayfinding signs were installed on campus, reveal the participants' expectations for the addition of floor plans and wayfinding signs indoors, maps outdoors, and symbols and color codes on the existing signs, and for the adequate arrangement of these for use by visually impaired people. The directly proportional relation between length of institution membership and wayfinding competency within the campus leads to the conclusion that newcomers especially are in need of wayfinding signs. 
In order to determine the effectiveness of the campus-wide wayfinding system implemented after the survey, and to identify users' further expectations in this respect, a semi-structured interview form was developed by the researcher and the assessments of 20 participants were compiled. Subjected to content analysis, these data constitute the qualitative dimension of the study. Research results indicate that despite the presence of the signs, participants either failed to find their way or felt stress while doing so, tended to ask others for help, and needed outdoor maps and signs as well as larger text.
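The battery of tests named above (chi-square, t-test, one-way ANOVA, plus descriptive statistics) can be run with scipy on any coded questionnaire data. Everything in this sketch, including the group labels, scores, and contingency counts, is synthetic and purely illustrates the calls; it is not the study's data.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
# hypothetical wayfinding-competency scores by length of membership
first_year = rng.normal(3.0, 0.8, 120)
senior = rng.normal(3.6, 0.8, 130)

# t-test: do newcomers report lower wayfinding competency?
t_stat, p_t = stats.ttest_ind(first_year, senior)

# one-way ANOVA across three hypothetical faculties
g1, g2, g3 = (rng.normal(m, 0.8, 100) for m in (3.1, 3.3, 3.5))
f_stat, p_f = stats.f_oneway(g1, g2, g3)

# chi-square: membership length vs. needing signs (hypothetical counts)
observed = np.array([[80, 40],    # newcomers: need signs / do not
                     [50, 80]])   # seniors:   need signs / do not
chi2, p_chi, dof, expected = stats.chi2_contingency(observed)

print(f"t = {t_stat:.2f} (p = {p_t:.3g}), F = {f_stat:.2f}, "
      f"chi2 = {chi2:.2f} (dof = {dof}, p = {p_chi:.3g})")
```

Frequencies and percentages for the descriptive part are one-liners over the same arrays (e.g. `np.unique(..., return_counts=True)`).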

Keywords: environmental graphic design, environmental perception, wayfinding and signage design, wayfinding system

Procedia PDF Downloads 237
1345 The Instrumentalization of Digital Media in the Context of Sexualized Violence

Authors: Katharina Kargel, Frederic Vobbe

Abstract:

Sexual online grooming is generally defined as digital interaction for the purpose of sexual exploitation of children or minors, i.e. as a process of preparing and framing sexual child abuse. Due to its conceptual history, sexual online grooming is often associated with perpetrators who are previously unknown to those affected. While the strategies of perpetrators and the perception of those affected are increasingly being investigated, the instrumentalisation of digital media remains under-researched. The present paper therefore aims to contribute to this research gap by examining the ways in which perpetrators instrumentalise digital media. Our analyses draw on 46 case documentations and 18 interviews with those affected. The cases and the partly narrative interviews were collected by ten cooperating specialist centers working on sexualized violence in childhood and youth. For this purpose, we designed a documentation grid allowing for a detailed case reconstruction, i.e. including information on the violence, digital media use and those affected. Using Reflexive Grounded Theory, our analyses emphasize a) the subjective benchmark of professional practitioners as well as those affected and b) the interpretative implications resulting from our researchers' subjective and emotional interaction with the data material. It should first be noted that sexualized online grooming can result in both online and offline sexualized violence, as well as hybrid forms. Furthermore, the perpetrators either come from the immediate social environment of those affected or are unknown to them. With regard to the instrumentalisation of digital media, the perpetrator-victim relationship plays a more important role than the space (online vs. offline) in which the primary violence is committed. 
Perpetrators unknown to those affected instrumentalise digital media primarily to establish a sexualized system of norms, which is usually embedded in a supposed love relationship. In some cases, after an initial exchange of sexualized images or video recordings, a latent play on the position of power takes place. In the course of the grooming process, perpetrators from the immediate social environment increasingly instrumentalise digital media to establish an explicit relationship of power and dependence, which is directly determined by coercion, threats and blackmail. The knowledge of possible vulnerabilities is strategically used in the course of maintaining contact. The above explanations lead to the conclusion that the motive for the crime plays an essential role in the question of the instrumentalisation of digital media. It is therefore not surprising that it is mostly the near-field perpetrators without commercial motives who initiate a spiral of violence and stress by digitally distributing sexualized (violent) images and video recordings within the reference system of those affected.

Keywords: sexualized violence, children and youth, grooming, offender strategies, digital media

Procedia PDF Downloads 183
1344 Estimation of Dynamic Characteristics of a Middle-Rise Steel Reinforced Concrete Building Using Long-Term Earthquake Observation Records

Authors: Fumiya Sugino, Naohiro Nakamura, Yuji Miyazu

Abstract:

In the earthquake-resistant design of buildings, evaluation of vibration characteristics is important. In recent years, with the increase in super high-rise buildings, the evaluation of response has become important not only for the first mode but also for higher modes. Knowledge of vibration characteristics in buildings is mostly limited to the first mode, and knowledge of higher modes is still insufficient. In this paper, using earthquake observation records of an SRC building and applying a frequency filter to an ARX model, the characteristics of the first and second modes were studied. First, we studied the change of the eigen frequency and the damping ratio during the 3.11 earthquake. The eigen frequency gradually decreases from the time of earthquake occurrence and is almost stable after about 150 seconds have passed. At this time, the decreasing rates of the 1st and 2nd eigen frequencies are both about 0.7. Although the damping ratio has a larger error than the eigen frequency, both the 1st and 2nd damping ratios are 3 to 5%. Also, there is a strong correlation between the 1st and 2nd eigen frequencies, with the regression line y=3.17x; for the damping ratio the regression line is y=0.90x, so the 1st and 2nd damping ratios are approximately the same. Next, we studied the eigen frequency and damping ratio over the long term, from 1998 to 2014, with all the considered earthquakes connected in order of occurrence. The eigen frequency slowly declined from immediately after completion and tended to stabilize after several years, although it declined greatly after the 3.11 earthquake. The decreasing rates of the 1st and 2nd eigen frequencies until about 7 years later are both about 0.8. For the damping ratio, both the 1st and 2nd are about 1 to 6%. After the 3.11 earthquake, the 1st increases by about 1% and the 2nd by less than 1%. 
For the eigen frequency, there is a strong correlation between the 1st and 2nd, and the regression line is y=3.17x. For the damping ratio, the regression line is y=1.01x, so the 1st and 2nd damping ratios are approximately the same. Based on the above results, changes in eigen frequency and damping ratio are summarized as follows. In the long term, both the 1st and 2nd eigen frequencies gradually declined from immediately after completion and tended to stabilize after a few years; they declined further after the 3.11 earthquake. In addition, there is a strong correlation between the 1st and 2nd, and the declining time and the decreasing rate are of the same degree. In the long term, both the 1st and 2nd damping ratios are about 1 to 6%. After the 3.11 earthquake, the 1st increases by about 1% and the 2nd by less than 1%, so the 1st and 2nd remain approximately the same.
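The mode tracking described above rests on fitting a time-series model to each record and converting its poles into an eigenfrequency and damping ratio. The numpy sketch below uses a plain least-squares AR fit as a simplified stand-in for the frequency-filtered ARX identification, and the 1 Hz / 3%-damped test signal is synthetic.

```python
import numpy as np

def ar_fit(y, order):
    """Least-squares AR(order) fit: y[n] = sum_k a_k * y[n-k]."""
    rows = [y[i:i + order][::-1] for i in range(len(y) - order)]
    a, *_ = np.linalg.lstsq(np.array(rows), y[order:], rcond=None)
    return a

def modal_parameters(a, dt):
    """Eigenfrequency (Hz) and damping ratio from the AR polynomial roots."""
    poles = np.roots(np.concatenate(([1.0], -a)))
    s = np.log(poles[np.imag(poles) > 0]) / dt  # map discrete poles to s-plane
    freq = np.abs(s) / (2 * np.pi)
    zeta = -np.real(s) / np.abs(s)
    return freq, zeta

# synthetic decaying free response: 1.0 Hz mode with 3% damping
dt, zeta_true, f_true = 0.01, 0.03, 1.0
t = np.arange(0, 30, dt)
wn = 2 * np.pi * f_true
y = np.exp(-zeta_true * wn * t) * np.sin(wn * np.sqrt(1 - zeta_true**2) * t)
freq, zeta = modal_parameters(ar_fit(y, 2), dt)
print(f"identified f = {freq[0]:.3f} Hz, damping = {zeta[0]:.3%}")
```

Tracking the identified values window by window over a record reproduces the kind of time histories of eigenfrequency and damping discussed in the paper.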

Keywords: eigenfrequency, damping ratio, ARX model, earthquake observation records

Procedia PDF Downloads 217
1343 Development of an Innovative Mobile Phone Application for Employment of Persons With Disabilities Toward the Inclusive Society

Authors: Marutani M, Kawajiri H, Usui C, Takai Y, Kawaguchi T

Abstract:

Background: To build an inclusive society, the Japanese government provides a "transition support for employment" system for Persons with Disabilities (PWDs). It is, however, difficult to provide appropriate accommodations due to their changeable health conditions. Mobile phone applications (apps) are useful for monitoring their health conditions and environments, and effective in improving reasonable accommodations for PWDs. Purpose: This study aimed to develop an app with which PWDs input their self-assessment and make their health conditions and environment conditions visible. To attain this goal, as a first step, we investigated the items of the app. Methods: A qualitative and descriptive design was used. Study participants were recruited by snowball sampling in July and August 2023. They had to have a minimum of five years of experience supporting PWDs' employment. Semi-structured interviews were conducted on their assessment of PWDs' daily activities, health conditions, and living and working environment. A verbatim transcript was created from each interview, and from each transcript we extracted items in three groups: daily activities, health conditions, and living and working environments. Results: Fourteen participants were involved (average years of experience: 10.6). Based on the interviews, the three item groups were enriched. The daily activities group comprised fifty-five items, for example: "have meals in one's own style", "feel like one slept well", "wake-up time, bedtime, and mealtime are usually fixed", "commute to the office and work without barriers". Thirteen items on health conditions were obtained, such as "feel no anxiety", "relieve stress", "focus on work and training", "have no pain", "have the physical strength to work for one day". The living and working environments group comprised fifty-two items. 
The example items were as follows: "have no barrier at home", "have supportive family members", "have time to take medication on time while at work", "commute time is just right", "people at work understand the symptoms", "room temperature and humidity are just right", "get along well with friends in my own way". The participants also mentioned preferred styles for inputting the self-assessment, for instance that a face scale would be preferred to a number scale. Conclusion: The items enrich existing paper-based assessment items in terms of the living and working environment because they were obtained from the perspective of PWDs. The next steps are to create the app and examine its usefulness with PWDs toward an inclusive society.

Keywords: occupational health, innovative tool, people with disabilities, employment

Procedia PDF Downloads 55
1342 Constructivism and Situational Analysis as Background for Researching Complex Phenomena: Example of Inclusion

Authors: Radim Sip, Denisa Denglerova

Abstract:

It is impossible to capture complex phenomena, such as inclusion, with reductionism. The most common form of reductionism is the objectivist approach, in which processes and relationships are reduced to entities and clearly outlined phases, with a consequent search for relationships between them. Constructivism as a paradigm and situational analysis as a methodological research portfolio represent a way to avoid the dominant objectivist approach. They work with a situation, i.e., with the essential blending of actors and their environment; primary transactions take place between actors and their surroundings. Researchers create constructs based on their need to solve a problem. Concepts therefore do not describe reality, but rather a complex of real needs in relation to the available options for how such needs can be met. Examining a complex problem requires corresponding methodological tools and an appropriate overall research design. Using original research on inclusion in the Czech Republic as an example, this contribution demonstrates that inclusion is not a substance easily described, but rather a relationship field that changes its forms in response to its actors’ behaviour and current circumstances. Inclusion consists of a dynamic relationship between an ideal, real circumstances, and ways to achieve that ideal under the given circumstances. Such achievement takes many shapes and thus cannot be captured by a description of objects; it can be expressed through relationships in a situation defined by time and space. Situational analysis offers tools to examine such phenomena. It understands a situation as a complex of dynamically changing aspects and prefers relationships and positions in the given situation over a clear and final definition of actors, entities, etc. Situational analysis assumes the creation of constructs as a tool for solving the problem at hand.
It emphasizes the meanings that arise in the process of coordinating human actions, and the discourses through which these meanings are negotiated. Finally, it offers “cartographic tools” (situational maps, social worlds/arenas maps, positional maps) that are able to capture the complexity in other than linear-analytical ways. This approach allows inclusion to be described as a complex of phenomena taking place with a certain historical preference, a complex that can be overlooked if analyzed with a more traditional approach.

Keywords: constructivism, situational analysis, objective realism, reductionism, inclusion

Procedia PDF Downloads 149
1341 Examining the Effects of Increasing Lexical Retrieval Attempts in Tablet-Based Naming Therapy for Aphasia

Authors: Jeanne Gallee, Sofia Vallila-Rohter

Abstract:

Technology-based applications are increasingly being utilized in aphasia rehabilitation as a means of increasing intensity of treatment and improving accessibility to treatment. These interactive therapies, often available on tablets, lead individuals to complete language and cognitive rehabilitation tasks that draw upon skills such as the ability to name items, recognize semantic features, count syllables, rhyme, and categorize objects. Tasks involve visual and auditory stimulus cues and provide feedback about the accuracy of a person’s response. Research has begun to examine the efficacy of tablet-based therapies for aphasia, yet much remains unknown about how individuals interact with these therapy applications. Thus, the current study aims to examine the efficacy of a tablet-based therapy program for anomia, further examining how strategy training might influence the way that individuals with aphasia engage with and benefit from therapy. Individuals with aphasia are enrolled in one of two treatment paradigms: traditional therapy or strategy therapy. For ten weeks, all participants receive 2 hours of weekly in-house therapy using Constant Therapy, a tablet-based therapy application. Participants are provided with iPads and are additionally encouraged to work on therapy tasks for one hour a day at home (home logins). For those enrolled in traditional therapy, in-house sessions involve completing therapy tasks while a clinician researcher is present. For those enrolled in the strategy training group, in-house sessions focus on limiting cue use in order to maximize lexical retrieval attempts and naming opportunities. The strategy paradigm is based on the principle that retrieval attempts may foster long-term naming gains. Data have been collected from 7 participants with aphasia (3 in the traditional therapy group, 4 in the strategy training group). 
We examine cue use, latency of responses and accuracy through the course of therapy, comparing results across group and setting (in-house sessions vs. home logins).

Keywords: aphasia, speech-language pathology, traumatic brain injury, language

Procedia PDF Downloads 203
1340 Balancing Electricity Demand and Supply to Protect a Company from Load Shedding: A Review

Authors: G. W. Greubel, A. Kalam

Abstract:

This paper provides a review of the technical problems facing the South African electricity system and discusses a hypothetical ‘virtual grid’ concept that may assist in solving them. The proposed solution has potential application across emerging markets with constrained power infrastructure, or for companies who wish to be entirely powered by renewable energy. South Africa finds itself at a confluence of forces where the national electricity supply system is constrained, with under-supply primarily from old and failing coal-fired power stations and congested, inadequate transmission and distribution systems. Simultaneously, the country attempts to meet carbon reduction targets, driven both by alignment with international goals and by consumer demand. The constrained electricity system is one aspect of an economy characterized by very low economic growth, high unemployment, and frequent and significant load shedding. The fiscus does not have the funding to build new generation capacity or strengthen the grid. The under-supply is increasingly alleviated by the penetration of wind and solar generation capacity and embedded roof-top solar. However, this increased penetration results in less inertia, less synchronous generation, and less capability for fast frequency response, with resulting instability. The renewable energy facilities help alleviate the under-supply issues but merely ‘kick the can down the road’ by neither contributing to grid stability nor substituting for the lost inertia, thus creating a growing issue for the grid to manage. By technically balancing its electricity demand and supply, a company with facilities located across the country can be protected from the effects of load shedding, and can thus ensure financial and production performance, protect jobs, and contribute meaningfully to the economy.
By treating the company’s load across the country and its various distributed generation facilities as a ‘virtual grid’ that by design provides ancillary services to the grid, one is able to create a win-win situation for both the company and the grid.

Keywords: load shedding, renewable energy integration, smart grid, virtual grid, virtual power plant

Procedia PDF Downloads 59
1339 Calibration of Residential Building Energy Simulations Using Real Data from an Extensive In Situ Sensor Network: A Study of the Energy Performance Gap

Authors: Mathieu Bourdeau, Philippe Basset, Julien Waeytens, Elyes Nefzaoui

Abstract:

As residential buildings account for a third of the overall energy consumption and greenhouse gas emissions in Europe, building energy modeling is an essential tool for reaching energy efficiency goals. In the energy modeling process, calibration is a mandatory step to obtain accurate and reliable energy simulations. Nevertheless, the comparison between simulation results and actual building energy behavior often highlights a significant performance gap. The literature discusses different origins of energy performance gaps, from building design to building operation. The description of building operation in energy models, especially energy usage and users’ behavior, plays an important role in the reliability of simulations but is also the most accessible target for post-occupancy energy management and optimization. Therefore, the present study discusses results on the calibration of residential building energy models using real operation data. Data are collected through a network of more than 180 sensors and advanced energy meters deployed in three collective residential buildings undergoing major retrofit actions. The sensor network is implemented at the building scale and in an eight-apartment sample. Data are collected for over a year and a half and cover building energy behavior (thermal and electrical), indoor environment, inhabitants’ comfort, occupancy, occupant behavior and energy uses, and local weather. Building energy simulations are performed using physics-based building energy modeling software (Pleiades), where the buildings’ features are implemented according to the buildings’ thermal regulation code compliance study and the retrofit project technical files. Sensitivity analyses are performed to highlight the most energy-driving building features for each end use. These features are then compared with the collected post-occupancy data.
Energy-driving features are progressively replaced with field data for a step-by-step calibration of the energy model. The results of this study provide an analysis of the energy performance gap for an existing residential case study under deep retrofit actions. They highlight the impact of the different building features on the energy behavior and the performance gap in this context, such as temperature setpoints, indoor occupancy, and building envelope properties, but also domestic hot water usage and heat gains from electric appliances. The benefits of inputting field data from an extensive instrumentation campaign instead of standardized scenarios are also described. Finally, the exhaustive instrumentation solution provides useful insights on the needs, advantages, and shortcomings of the implemented sensor network for its replicability on a larger scale and for different use cases.
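The abstract does not state which calibration acceptance criteria are used; a common convention for judging a calibrated building energy model against measured data is the CV(RMSE) and NMBE pair from ASHRAE Guideline 14. The Python sketch below shows how each step of such a step-by-step calibration could be scored (the function name and data are illustrative assumptions, not part of the study):

```python
import numpy as np

def calibration_metrics(measured, simulated):
    """CV(RMSE) and NMBE in percent, the ASHRAE Guideline 14 style
    criteria for comparing simulated to measured energy use."""
    m = np.asarray(measured, dtype=float)
    s = np.asarray(simulated, dtype=float)
    n = len(m)
    mean_m = m.mean()
    # coefficient of variation of the root-mean-square error
    cv_rmse = np.sqrt(((s - m) ** 2).sum() / (n - 1)) / mean_m * 100.0
    # normalized mean bias error (signed: + means over-prediction)
    nmbe = (s - m).sum() / ((n - 1) * mean_m) * 100.0
    return cv_rmse, nmbe
```

Re-running this pair after each field-data substitution would show whether the performance gap narrows as standardized scenarios are replaced by measured operation data.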

Keywords: calibration, building energy modeling, performance gap, sensor network

Procedia PDF Downloads 159
1338 Genetic Data of Deceased People: Solving the Gordian Knot

Authors: Inigo de Miguel Beriain

Abstract:

Genetic data of deceased persons are of great interest for both biomedical research and clinical use. This is due to several reasons. On the one hand, many of our diseases have a genetic component; on the other hand, we share genes with a good part of our biological family. Therefore, it would be possible to improve our response considerably to these pathologies if we could use these data. Unfortunately, at the present moment, the status of data on the deceased is far from being satisfactorily resolved by the EU data protection regulation. Indeed, the General Data Protection Regulation has explicitly excluded these data from the category of personal data. This decision has given rise to a fragmented legal framework on this issue. Consequently, each EU member state offers very different solutions. For instance, Denmark considers the data as personal data of the deceased person for a set period of time while some others, such as Spain, do not consider this data as such, but have introduced some specifically focused regulations on this type of data and their access by relatives. This is an extremely dysfunctional scenario from multiple angles, not least of which is scientific cooperation at the EU level. This contribution attempts to outline a solution to this dilemma through an alternative proposal. Its main hypothesis is that, in reality, health data are, in a sense, a rara avis within data in general because they do not refer to one person but to several. Hence, it is possible to think that all of them can be considered data subjects (although not all of them can exercise the corresponding rights in the same way). When the person from whom the data were obtained dies, the data remain as personal data of his or her biological relatives. Hence, the general regime provided for in the GDPR may apply to them. 
As these are personal data, we could go back to thinking in terms of a general prohibition of data processing, with the exceptions provided for in Article 9.2 and on the legal bases included in Article 6. This may be complicated in practice, given that, since we are dealing with data that refer to several data subjects, it may be complex to refer to some of these bases, such as consent. Furthermore, there are theoretical arguments that may oppose this hypothesis. In this contribution, it is shown, however, that none of these objections is of sufficient substance to delegitimize the argument exposed. Therefore, the conclusion of this contribution is that we can indeed build a general framework on the processing of personal data of deceased persons in the context of the GDPR. This would constitute a considerable improvement over the current regulatory framework, although it is true that some clarifications will be necessary for its practical application.

Keywords: collective data conceptual issues, data from deceased people, genetic data protection issues, GDPR and deceased people

Procedia PDF Downloads 154
1337 Empirical Analysis of the Effect of Cloud Movement in a Basic Off-Grid Photovoltaic System: Case Study Using Transient Response of DC-DC Converters

Authors: Asowata Osamede, Christo Pienaar, Johan Bekker

Abstract:

Mismatches in electrical energy supply and outages from commercial providers generally do not promote development in the public and private sectors, and ultimately limit the growth of industries. A well-structured photovoltaic (PV) system is therefore important for an efficient and cost-effective monitoring system. The major renewable energy potential on Earth is provided by solar radiation, and solar photovoltaics are considered a promising technological solution to support the global transformation to a low-carbon economy and to reduce dependence on fossil fuels. Solar arrays, which consist of multiple PV modules, should be operated at the maximum power point in order to reduce the overall cost of the system, so power regulation and conditioning circuits should be incorporated into the set-up of a PV system. Power regulation circuits used in PV systems include maximum power point trackers (MPPTs), DC-DC converters, and solar chargers. An inappropriate choice of power conditioning device in a basic off-grid PV system can contribute to power loss; hence, choosing the right power conditioning device for the system is essential. This paper presents the design and implementation of power conditioning devices in order to improve the overall yield from the available solar energy and the system’s total efficiency. The power conditioning devices considered in the project include buck and boost DC-DC converters as well as solar chargers with MPPT. A logging interface circuit (LIC) is designed and incorporated into the system. The LIC is designed on a printed circuit board and is based on DC current sensors, specifically the LTS 6-NP. The LIC is required to record the voltages in the system (these include the PV voltage and the power conditioning device voltage). The voltage signals are conditioned so that they can be accommodated by the data logger.
Preliminary results, including power availability, system power losses, and efficiency, will be presented and used to draw the final conclusions.
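The abstract does not specify which MPPT algorithm the solar chargers implement; a common choice is perturb-and-observe, sketched below in Python against a toy PV power curve. The curve shape and module parameters (`v_oc`, `i_sc`) are illustrative assumptions, not measurements from this system:

```python
import math

def pv_power(v, v_oc=20.0, i_sc=5.0):
    """Toy PV curve: current collapses exponentially near the
    open-circuit voltage. Illustrative only, not a module model."""
    if v >= v_oc:
        return 0.0
    i = i_sc * (1.0 - math.exp((v - v_oc) / 2.0))
    return v * max(i, 0.0)

def perturb_and_observe(v0=5.0, dv=0.1, iters=500):
    """Classic P&O: perturb the operating voltage; if the measured
    power dropped, reverse direction. Oscillates around the MPP."""
    v, p = v0, pv_power(v0)
    step = dv
    for _ in range(iters):
        v_next = v + step
        p_next = pv_power(v_next)
        if p_next < p:
            step = -step  # wrong direction: reverse the perturbation
        v, p = v_next, p_next
    return v, p
```

In a converter-based charger, the tracked voltage would be realized by adjusting the DC-DC converter duty cycle rather than set directly.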

Keywords: tilt and orientation angles, solar chargers, PV panels, storage devices, direct solar radiation

Procedia PDF Downloads 135
1336 Numerical Investigation on Transient Heat Conduction through Brine-Spongy Ice

Authors: S. R. Dehghani, Y. S. Muzychka, G. F. Naterer

Abstract:

The ice accretion of salt water on cold substrates creates brine-spongy ice, a mixture of pure ice and liquid brine. A real-world instance of this type of ice is superstructure icing, which occurs on marine vessels and offshore structures in cold and harsh conditions. Transient heat transfer through this medium causes phase changes between brine pockets and pure ice. Salt rejection during transient heat conduction increases the salinity of brine pockets until they reach a local equilibrium state. Changing the sensible heat of the ice and brine pockets is not the only effect of passing heat through the medium; latent heat also plays an important role and affects the mechanism of heat transfer. In this study, a new analytical model for evaluating heat transfer through brine-spongy ice is suggested. The model considers heat transfer together with partial solidification and melting. Properties of brine-spongy ice are obtained from the properties of liquid brine and pure ice. A numerical solution using the method of lines discretizes the medium into a set of ordinary differential equations. Boundary conditions are chosen from one of the applicable cases for this type of ice: one side is considered a thermally insulated surface, and the other side is assumed to be suddenly exposed to a constant-temperature boundary. All cases are evaluated at temperatures between -20 °C and the freezing point of brine-spongy ice. Solutions are conducted for salinities from 5 to 60 ppt. Time steps and space intervals are chosen to maintain a stable and fast solution. The variations of temperature, brine volume fraction, and brine salinity versus time are the most important outputs of this study. Results show that transient heat conduction through brine-spongy ice can produce a wide range of brine pocket salinities, from the initial salinity up to 180 ppt.
The rate of variation of temperature is found to be slower for high-salinity cases. The maximum rate of heat transfer occurs at the start of the simulation and decreases as time passes. Brine pockets are smaller in portions closer to the colder side than in those near the warmer side. At the start of the solution, the numerical scheme tends toward instability because of the sharp variation of temperature at the start of the process; refining the intervals improves this unstable behavior. The analytical model, combined with the numerical scheme, is capable of predicting the thermal behavior of brine-spongy ice. This model and its numerical solutions are important for modeling the freezing of salt water and ice accretion on cold structures.
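As a minimal illustration of the discretization described above, the following Python sketch applies the method of lines to 1D transient conduction with one insulated side and one side suddenly held at a cold constant temperature. The constant material properties and grid values are placeholder assumptions; the authors' model additionally couples latent heat and brine salinity to the conduction:

```python
import numpy as np

def solve_heat_mol(n=50, length=0.1, alpha=1e-6, t_init=-5.0,
                   t_cold=-20.0, dt=0.05, steps=2000):
    """Explicit time stepping of the semi-discretized heat equation.
    Node 0 is insulated (zero flux); node n-1 is held at t_cold."""
    dx = length / (n - 1)
    r = alpha * dt / dx ** 2
    assert r < 0.5, "explicit scheme stability limit violated"
    T = np.full(n, t_init)
    for _ in range(steps):
        Tn = T.copy()
        # interior nodes: dT/dt = alpha * d2T/dx2 (central differences)
        Tn[1:-1] = T[1:-1] + r * (T[2:] - 2.0 * T[1:-1] + T[:-2])
        # insulated boundary via ghost-node mirror
        Tn[0] = T[0] + 2.0 * r * (T[1] - T[0])
        # suddenly applied constant-temperature boundary
        Tn[-1] = t_cold
        T = Tn
    return T
```

The stability assertion reflects the abstract's remark that time steps and space intervals must be chosen carefully: the sharp initial temperature variation at the cold boundary is exactly where an overly coarse grid destabilizes the solution.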

Keywords: method of lines, brine-spongy ice, heat conduction, salt water

Procedia PDF Downloads 217
1335 Comparison of Serum Protein Fractions between Healthy and Diarrheic Calves by Electrophoretogram

Authors: Jinhee Kang, Kwangman Park, Ruhee Song, Suhee Kim, Do-Hyeon Yu, Kyoungseong Choi, Jinho Park

Abstract:

Statement of the Problem: Animal blood components maintain homeostasis when animals are healthy, and changes in the chemical composition of the blood and body fluids can be observed when animals have a disease. In particular, newborn calves are susceptible to disease, and therefore hematologic and serum chemistry tests can become important guidelines for the diagnosis and treatment of diseases. Diarrhea in newborn calves causes the greatest damage to cattle operations, whether dairy or fattening, and accounts for a large share of calf atrophy and death. However, since electrophoresis studies on calves had not been carried out, a survey analysis was conducted. Methodology and Theoretical Orientation: The calves were divided into healthy and diseased (diarrheic) groups and classified by age: 1-14 d, 15-28 d, and over 28 d. Fecal condition was scored as solid (0), semi-solid (1), loose (2), or watery (3). No pathogens were detected in solid (0) and semi-solid (1) feces, whereas pathogens were detected in loose (2) and watery (3) feces. Findings: The ALB, α-1, α-2, α-SUM, β, and γ (gamma) fractions of healthy and diarrheic calves were examined by electrophoresis. The results showed age-related differences between healthy and diarrheic calves. For γ-globulin at 1-14 days of age, healthy calves averaged 16.8% while diarrheic calves averaged 7.7%; for the α-2 fraction at 1-14 days, healthy calves averaged 5.2% while diarrheic calves averaged 8.7%, higher than the healthy group. For α-1 at 15-28 days and after 28 days, healthy calves averaged 10.4% and 7.5%, while diarrheic calves were higher at 12.6% and 12.4%, respectively. For α-SUM at 1-14 days, 15-28 days, and after 28 days, healthy calves were 21.6%, 16.8%, and 14.5%, respectively, while diarrheic calves were 23.1%, 19.5%, and 19.8%.
Conclusion and Significance: In this study, we examined the electrophoresis results of healthy and diseased (diarrheic) calves. The γ-globulin levels of diarrheic calves at 1-14 days of age were lower than those of healthy calves, indicating that these calves were unable to consume sufficient colostrum from the dam as newborns. The increased α-1, α-2, and α-SUM levels in calves with diarrhea may be associated with an acute inflammatory response. Further research is needed to investigate the effects of acute inflammatory responses on additional calf serum proteins. Information on the electrophoresis test results will be provided where necessary, item by item.

Keywords: alpha, electrophoretogram, serum protein, γ, gamma

Procedia PDF Downloads 140
1334 Elasto-Plastic Analysis of Structures Using Adaptive Gaussian Springs Based Applied Element Method

Authors: Mai Abdul Latif, Yuntian Feng

Abstract:

The Applied Element Method (AEM) is a method developed to aid the analysis of the collapse of structures. Currently available methods cannot deal with structural collapse accurately; AEM, however, can simulate the behavior of a structure from an initial unloaded state until collapse. The elements in AEM are connected by sets of normal and shear springs along their edges, which represent the stresses and strains of the element in that region. The elements are rigid, and the material properties are introduced through the spring stiffnesses. Nonlinear dynamic analysis of progressive structural collapse has been widely modelled using the finite element method; however, difficulties arise in the presence of excessively deformed elements with cracking or crushing, along with a high computational cost and difficulty in choosing appropriate material models. Here, the Applied Element Method is developed and coded to significantly improve accuracy while also reducing computational cost. The scheme works for both linear elastic and nonlinear cases, including elasto-plastic materials. This paper focuses on elastic and elasto-plastic material behaviour, where the number of springs required for an accurate analysis is tested. A steel cantilever beam is used as the structural element for the analysis. The first modification of the method distributes the springs according to Gaussian quadrature. Usually, the springs are equally distributed along the face of the element, but it was found that with Gaussian springs only 2 springs were required for perfectly elastic cases, whereas with equally spaced springs at least 5 were required. The method runs on a Newton-Raphson iteration scheme, and quadratic convergence was obtained. The second modification adapts the number of springs depending on the elasticity of the material.
After the first Newton-Raphson iteration, the von Mises stress condition is used to calculate the stresses in the springs, and the springs are classified as elastic or plastic. Transition springs, located exactly between the elastic and plastic regions, are then interpolated between regions to strictly identify the elastic and plastic regions in the cross-section. Since a rectangular cross-section was analyzed, there were two plastic regions (top and bottom) and one elastic region (middle). The results of the present study show that elasto-plastic cases require only 2 springs for the elastic region and 2 springs for each plastic region. This improves the computational cost, reducing the minimum number of springs in elasto-plastic cases to only 6. All the work is done in MATLAB, and the results will be compared to finite element models of the structural elements in ANSYS.
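The Gaussian spring placement can be sketched briefly: the springs sit at Gauss-Legendre abscissae mapped onto the element face, with the quadrature weights acting as tributary lengths that scale each spring's stiffness. The authors' code is in MATLAB; this Python sketch with assumed dimensions is an illustration of the layout, not their implementation:

```python
import numpy as np

def gaussian_spring_layout(face_length, n_springs):
    """Place springs at Gauss-Legendre points along an element face.
    Returns positions on [0, face_length] and tributary lengths
    (scaled quadrature weights), which sum to the face length."""
    xi, w = np.polynomial.legendre.leggauss(n_springs)
    pos = 0.5 * face_length * (xi + 1.0)   # map [-1, 1] -> [0, L]
    tributary = 0.5 * face_length * w
    return pos, tributary

def spring_stiffness(E, thickness, elem_distance, tributary):
    """Normal spring stiffness k = E * (tributary area) / d, with d
    the distance between the connected elements (usual AEM form)."""
    return E * tributary * thickness / elem_distance
```

With two springs, the Gauss points fall at the ±1/√3 positions of the face, and a 2-point rule integrates a linear elastic stress distribution over the face exactly, which is consistent with the reported result that two Gaussian springs suffice for perfectly elastic cases.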

Keywords: applied element method, elasto-plastic, Gaussian springs, nonlinear

Procedia PDF Downloads 225
1333 Combined Effect of Gender Differences and Fatiguing Task on Unipedal Postural Balance and Functional Mobility in Adults with Multiple Sclerosis

Authors: Sonda Jallouli, Omar Hammouda, Imen Ben Dhia, Salma Sakka, Chokri Mhiri, Mohamed Habib Elleuch, Abedlmoneem Yahia, Sameh Ghroubi

Abstract:

Multiple sclerosis (MS) is characterized by gender differences, affecting women two to four times more often than men, although disease progression is faster and more severe in men. Fatigue represents one of the most frequent and disabling symptoms of MS. The results of previous studies regarding gender differences in fatigue perception in persons with MS are contradictory. Moreover, fatigue has been shown to negatively affect postural balance and functional mobility in persons with MS. However, no study has taken gender differences into account in the response of these physical parameters to a fatiguing protocol. Given the reduction of autonomy caused by the fatigue-induced alteration of these parameters, and the importance of gender differences for postural balance training programs in fatigued men and women with MS, the aim of this study was to investigate the effect of gender on unipedal postural balance and functional mobility after a fatiguing task in adults with MS. Methods: Eleven women (30.29 ± 7.99 years) and seven men (30.91 ± 8.19 years) with relapsing-remitting MS performed a fatiguing protocol: three sets of the 5× sit-to-stand test (5-STST), the six-minute walk test (6MWT), and then three more sets of the 5-STST. Unipedal balance, functional mobility, and fatigue perception were measured pre-fatigue (T0) and post-fatigue (T3) using a clinical unipedal balance test, the timed up and go test (TUGT), and a visual analog scale of fatigue (VASF), respectively. Heart rate (HR) and rating of perceived exertion (RPE) were recorded before, during, and after the fatiguing task. Results: Compared to women, men showed impaired unipedal balance on the dominant leg (p<0.001, d=0.52) and impaired mobility (p<0.001, d=3), reflected in a reduced unipedal stance time and an increased TUGT duration, respectively. No gender differences were observed in 6MWT, 5-STST, HR, RPE, or VASF scores.
Conclusion: The fatiguing protocol negatively affected unipedal postural balance and mobility only in men. The origins of these gender differences remain inconclusive, but the differences can be taken into account in postural balance rehabilitation programs for persons with MS.

Keywords: functional mobility, fatiguing exercises, multiple sclerosis, sex differences, unipedal balance

Procedia PDF Downloads 138
1332 Role of Education on Shaping the Personality of the Students in Rural Areas: A Case Study of Daund Taluka in Pune District of Maharashtra, India

Authors: L. K. Shitole

Abstract:

On the face of it, personality is usually regarded as the external appearance of an individual. In psychology, however, personality is not viewed merely as the self or the external appearance; it encompasses much more. Human resources development encompasses the personality development of students. A student’s development starts in childhood and continues gradually up to the completion of education in professional courses. This paper attempts to determine the role of educational institutions in shaping the personality of students from rural areas. Schools and colleges have infrastructural limitations; obtaining good-quality, devoted teaching staff poses problems; and even outside the school environment there are no private classes that might make up for this deficiency. The researcher used the standardized test “Vyaktitva Shodhika”, developed by Gyan Prabodhini, Pune, with students in Daund Taluka. The questionnaire contains 68 objective-type questions. A total sample of 4,191 students was selected, and the sample was quite representative. By and large, the responses indicate that the educational institutions are taking sincere efforts to shape the personality of their students. In the semi-urban area, i.e., at educational institutions of all levels, performance on this front is excellent, while in the rest of Daund Taluka there is scope for improvement. Educational institutions of all levels show excellent performance in ensuring the availability of the requisite infrastructure conducive to the development of students’ personalities; in the rest of Daund Taluka there is ample scope for improving the situation.
As far as data on the role of co-curricular activities and sports programs in mental and physical development are concerned, Daund educational institutions repeated their performance in securing the “A” category, while in the rural area of Daund Taluka there is a need to step up efforts in this regard. In today’s world of the knowledge industry, one cannot ignore the importance of education and thereby the personality growth of students. Accordingly, educational institutions should undertake consistent research and extension activities in the area of personality development.

Keywords: personality, attitude, infrastructure, quality of education, learning environment, teacher’s contribution, family and society’s role

Procedia PDF Downloads 466
1331 Lead Chalcogenide Quantum Dots for Use in Radiation Detectors

Authors: Tom Nakotte, Hongmei Luo

Abstract:

Lead chalcogenide-based (PbS, PbSe, and PbTe) quantum dots (QDs) were synthesized for the purpose of implementing them in radiation detectors. Pb based materials have long been of interest for gamma and x-ray detection due to its high absorption cross section and Z number. The emphasis of the studies was on exploring how to control charge carrier transport within thin films containing the QDs. The properties of QDs itself can be altered by changing the size, shape, composition, and surface chemistry of the dots, while the properties of carrier transport within QD films are affected by post-deposition treatment of the films. The QDs were synthesized using colloidal synthesis methods and films were grown using multiple film coating techniques, such as spin coating and doctor blading. Current QD radiation detectors are based on the QD acting as fluorophores in a scintillation detector. Here the viability of using QDs in solid-state radiation detectors, for which the incident detectable radiation causes a direct electronic response within the QD film is explored. Achieving high sensitivity and accurate energy quantification in QD radiation detectors requires a large carrier mobility and diffusion lengths in the QD films. Pb chalcogenides-based QDs were synthesized with both traditional oleic acid ligands as well as more weakly binding oleylamine ligands, allowing for in-solution ligand exchange making the deposition of thick films in a single step possible. The PbS and PbSe QDs showed better air stability than PbTe. After precipitation the QDs passivated with the shorter ligand are dispersed in 2,6-difloupyridine resulting in colloidal solutions with concentrations anywhere from 10-100 mg/mL for film processing applications, More concentrated colloidal solutions produce thicker films during spin-coating, while an extremely concentrated solution (100 mg/mL) can be used to produce several micrometer thick films using doctor blading. 
Film thicknesses of micrometers or even millimeters are needed in radiation detectors for high-energy gamma rays, which are of interest for astrophysics and nuclear security, in order to provide sufficient stopping power.

Keywords: colloidal synthesis, lead chalcogenide, radiation detectors, quantum dots

Procedia PDF Downloads 127
1330 Role of the Midwifery Trained Registered Nurse in Postnatal Units at Tertiary Care Hospitals in the Western Province of Sri Lanka: A Postal Survey

Authors: Sunethra Jayathilake, Vathsala Jayasuriya-Illesinghe, Kerstin Samarasinghe, Himani Molligoda, Rasika Perera

Abstract:

In Sri Lanka, postnatal care in the state hospitals is provided by different professional categories: midwifery-trained registered nurses (MTRNs), registered nurses (RNs) without midwifery training, doctors, and midwives. Even though four professional categories provide postnatal care to mothers and newborn babies, they are not always aware of their own tasks and responsibilities in postnatal care; the MTRN's role in the postnatal unit is particularly unclear. The current study aimed to identify nurses' (both MTRNs' and RNs') perceptions of MTRNs' tasks and responsibilities in postnatal care. This is a descriptive cross-sectional study using a postal survey. All nurses working in postnatal units at five selected tertiary care hospitals in the Western Province at the time were invited to participate in the study. Accordingly, a pre-evaluated self-administered questionnaire was sent to 201 nurses (53 MTRNs and 148 RNs) in the study setting. The number of valid returned questionnaires was 166, a response rate of 83%. Respondents rated the responsibility of four professional categories (MTRN, RN, doctor, and midwife) as 'primarily responsible', 'responsible in absence', or 'not responsible' for each of 15 postnatal (PN) tasks previously identified from focus group discussions with care providers during the first phase of the study. Data were analyzed using SPSS version 20, and descriptive statistics were calculated. Out of the 15 PN tasks, 13 were identified as MTRNs' primary responsibilities by 71%-93% of respondents. Respondents also considered six of the 15 tasks the primary responsibility of both MTRNs and RNs, seven tasks the primary responsibility of MTRNs, RNs, and doctors, and the remaining two tasks the primary responsibility of MTRNs, RNs, and midwives. All 15 PN tasks overlapped with other professional categories. 
Overlapping tasks may create role confusion, leading to conflicts among professional categories that affect the quality of care provided and, eventually, threaten the safety of the client. An official job description for each care provider is recommended so that each can recognize their own professional boundaries, ensuring safe, quality care delivery in Sri Lanka.

Keywords: overlapping, postnatal, responsibilities, tasks

Procedia PDF Downloads 150
1329 Prevalence and Correlates of Complementary and Alternative Medicine Use among Diabetic Patients in Lebanon: A Cross-Sectional Study

Authors: Farah Naja, Mohamad Alameddine

Abstract:

Background: The difficulty of complying with therapeutic and lifestyle management of type 2 diabetes mellitus (T2DM) encourages patients to use complementary and alternative medicine (CAM) therapies. Little is known about the prevalence and mode of CAM use among diabetics in the Eastern Mediterranean Region in general and Lebanon in particular. Objective: To assess the prevalence and modes of CAM use among patients with T2DM residing in Beirut, Lebanon. Methods: A cross-sectional survey of T2DM patients was conducted on patients recruited from two major referral centers: a public hospital and a private academic medical center in Beirut. In a face-to-face interview, participants completed a survey questionnaire comprising three sections: socio-demographic characteristics, diabetes characteristics, and types and modes of CAM use. Descriptive statistics and univariate and multivariate logistic regression analyses were used to assess the prevalence, mode, and correlates of CAM use in the study population. The main outcome in this study (CAM use) was defined as using CAM at least once since diagnosis with T2DM. Results: A total of 333 T2DM patients completed the survey (response rate: 94.6%). The prevalence of CAM use in the study population was 38%, 95% CI (33.1-43.5). After adjustment, CAM use was significantly associated with 'married' status, a longer duration of T2DM, the presence of disease complications, and a positive family history of the disease. Folk foods and herbs were the most commonly used CAM, followed by natural health products. One in five patients used CAM as an alternative to conventional treatment. Only 7% of CAM users disclosed their CAM use to their treating physician. Health care practitioners were the least cited (7%) as influencing the choice of CAM among users. Conclusion: The use of CAM therapies among T2DM patients in Lebanon is prevalent. 
Decision makers and care providers must fully understand the potential risks and benefits of CAM therapies to appropriately advise their patients. Attention must be dedicated to educating T2DM patients on the importance of disclosing CAM use to their physicians, especially patients with a family history of diabetes and those using conventional therapy for a long time.
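An asymmetric interval like the reported (33.1, 43.5) around 38% is characteristic of a score-based method rather than the simple Wald formula. As a minimal sketch, the Wilson score interval reproduces an interval of this shape, assuming roughly 127 of the 333 respondents reported CAM use (the exact count is not given in the abstract):

```python
from math import sqrt

def wilson_ci(successes, n, z=1.96):
    """95% Wilson score confidence interval for a binomial proportion."""
    p = successes / n
    denom = 1 + z**2 / n
    center = (p + z**2 / (2 * n)) / denom
    half = z * sqrt(p * (1 - p) / n + z**2 / (4 * n**2)) / denom
    return center - half, center + half

# Assumed count: ~127 CAM users out of 333 respondents (~38%)
lo, hi = wilson_ci(127, 333)
print(f"prevalence 95% CI: ({lo*100:.1f}%, {hi*100:.1f}%)")
```

This is an illustration of one common choice of interval, not a claim about the exact method the authors used.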

Keywords: nutritional supplements, type 2 diabetes mellitus, complementary and alternative medicine (CAM), conventional therapy

Procedia PDF Downloads 349
1328 A Methodology Based on Image Processing and Deep Learning for Automatic Characterization of Graphene Oxide

Authors: Rafael do Amaral Teodoro, Leandro Augusto da Silva

Abstract:

Originated from graphite, graphene is a two-dimensional (2D) material that promises to revolutionize technology in many different areas, such as energy, telecommunications, civil construction, aviation, textiles, and medicine. This is possible because its structure, formed by carbon bonds, provides desirable optical, thermal, and mechanical characteristics that are of interest to multiple areas of the market. Thus, several research and development centers are studying different manufacturing methods and material applications of graphene, efforts that are often compromised by the scarcity of agile and accurate methodologies to characterize the material, that is, to determine its composition, shape, size, and the number of layers and crystals. To address this need, this study proposes a computational methodology that applies deep learning to identify graphene oxide crystals in order to characterize samples by crystal size. To achieve this, a fully convolutional neural network called U-Net was trained to segment SEM images of graphene oxide. The segmentation generated by the U-Net is refined with a per-class standard deviation technique, which allows crystals to be distinguished with different labels through an object delimitation algorithm. Next, the position, area, perimeter, and lateral measures of each detected crystal are extracted from the images. This information generates a database with the dimensions of the crystals that compose the samples. Finally, graphs are automatically created showing the frequency distributions of crystal area and perimeter. This methodological process resulted in a high capacity for segmentation of graphene oxide crystals, with accuracy and F-score of 95% and 94%, respectively, on the test set. 
Such performance demonstrates a high generalization capacity of the method in crystal segmentation, since it holds up under significant changes in image acquisition quality. The measurement of non-overlapping crystals presented an average error of 6% across the different measurement metrics, suggesting that the model measures non-overlapping segmentations accurately. For overlapping crystals, however, a limitation of the model was identified. To overcome this limitation, it is important to ensure that the samples to be analyzed are properly prepared. This will minimize crystal overlap during SEM image acquisition and guarantee lower measurement error without greater effort in data handling. All in all, the method developed saves substantial time: it is capable of measuring hundreds of graphene oxide crystals in seconds, replacing weeks of manual work.
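The post-segmentation measurement step described above (labeling each crystal in the binary mask, then extracting per-crystal area and perimeter) can be sketched in a few lines. This is a minimal illustration under the assumption that segmentation yields a binary mask; the function name and the boundary-pixel perimeter estimate are our own, not the authors':

```python
import numpy as np
from scipy import ndimage

def measure_crystals(mask):
    """Label connected crystal regions in a binary segmentation mask and
    return per-crystal area and approximate perimeter (in pixels)."""
    labels, n = ndimage.label(mask)
    stats = []
    for i in range(1, n + 1):
        region = labels == i
        area = int(region.sum())
        # Boundary pixels (region minus its erosion) approximate the perimeter.
        boundary = region & ~ndimage.binary_erosion(region)
        stats.append({"label": i, "area": area, "perimeter": int(boundary.sum())})
    return stats

# Tiny synthetic mask with two square "crystals"
mask = np.zeros((12, 12), dtype=bool)
mask[1:4, 1:4] = True    # 3x3 crystal, area 9
mask[6:11, 6:11] = True  # 5x5 crystal, area 25
for s in measure_crystals(mask):
    print(s)
```

From such per-crystal records, area and perimeter histograms like those in the paper follow directly.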

Keywords: characterization, graphene oxide, nanomaterials, U-net, deep learning

Procedia PDF Downloads 160
1327 The Efficacy of Thymbra spicata Ethanolic Extract and its Main Component Carvacrol on In vitro Model of Metabolically-Associated Dysfunctions

Authors: Farah Diab, Mohamad Khalil, Francesca Storace, Francesca Baldini, Piero Portincasa, Giulio Lupidi, Laura Vergani

Abstract:

Thymbra spicata is a thyme-like plant belonging to the Lamiaceae family that is widely distributed, especially in the eastern Mediterranean region. Leaves of T. spicata contain large amounts of phenols, such as phenolic acids (rosmarinic acid), phenolic monoterpenes (carvacrol), and flavonoids. In Lebanon, T. spicata is currently used as a culinary herb in salads and infusions, as well as for traditional medicinal purposes. Carvacrol (5-isopropyl-2-methylphenol), the most abundant polyphenol in the organic extract and essential oils, has a broad array of pharmacological properties; in fact, carvacrol is widely employed as a food additive and nutraceutical agent. Our aim is to investigate the beneficial effects of T. spicata ethanolic extract (TE) and its main component, carvacrol, using in vitro models of hepatic steatosis and endothelial dysfunction. As a further point, we investigated if and how the binding of carvacrol to albumin, the physiological transporter for drugs in the blood, might be altered by the presence of high levels of fatty acids (FAs), thus impairing carvacrol bio-distribution in vivo. To this end, hepatic FaO cells treated with exogenous FAs such as oleate and palmitate were used to mimic hepatosteatosis, and endothelial HECV cells exposed to hydrogen peroxide served as a model of endothelial dysfunction. In these models, we measured lipid accumulation, free radical production, lipoperoxidation, and nitric oxide release before and after treatment with carvacrol. The binding of carvacrol to albumin with and without high levels of long-chain FAs was assessed by absorption and emission spectroscopies. 
Our findings show that both TE and carvacrol (i) counteracted lipid accumulation in hepatocytes by decreasing the intracellular and extracellular lipid contents in steatotic FaO cells and (ii) decreased oxidative stress in endothelial cells by significantly reducing lipoperoxidation and free radical production, as well as attenuating nitric oxide release; moreover, (iii) high levels of circulating FAs reduced the binding of carvacrol to albumin. The beneficial effects of TE and carvacrol on both hepatic and endothelial cells point to a nutraceutical potential. However, high levels of circulating FAs, such as those occurring in metabolic disorders, might hinder carvacrol transport, bio-distribution, and pharmacodynamics.

Keywords: carvacrol, endothelial dysfunction, fatty acids, non-alcoholic fatty liver diseases, serum albumin

Procedia PDF Downloads 192
1326 Nanofluidic Cell for Resolution Improvement of Liquid Transmission Electron Microscopy

Authors: Deybith Venegas-Rojas, Sercan Keskin, Svenja Riekeberg, Sana Azim, Stephanie Manz, R. J. Dwayne Miller, Hoc Khiem Trieu

Abstract:

Liquid Transmission Electron Microscopy (TEM) is a growing area with a broad range of applications from physics and chemistry to materials engineering and biology, in which it is possible to image otherwise unseen phenomena in situ. For this, a nanofluidic device is used to bring the nanoflow with the sample into the microscope while keeping the liquid encapsulated against the high vacuum. In recent years, Si3N4 windows have been widely used because of their mechanical stability and low imaging contrast. Nevertheless, the pressure difference between the fluid inside and the vacuum of the TEM causes the windows to bulge. This increases the imaged fluid volume, which decreases the signal-to-noise ratio (SNR) and limits the achievable spatial resolution. In the proposed device, the membrane is reinforced with a microstructure capable of withstanding higher pressure differences and almost completely eliminating the bulging. A theoretical study is presented with Finite Element Method (FEM) simulations, which provide a deep understanding of the mechanical conditions of the membrane and prove the effectiveness of this novel concept. Bulging and von Mises stress were studied for different membrane dimensions, geometries, materials, and thicknesses. The device was microfabricated from a thin wafer coated with thin layers of SiO2 and Si3N4. After the lithography process, these layers were etched by reactive ion etching and buffered oxide etch (BOE), respectively. The microstructure was then etched by deep reactive ion etching, the backside SiO2 was removed by BOE, and an array of free-standing micro-windows was obtained. Additionally, a Pyrex wafer was patterned with windows and inlets/outlets and bonded (anodic bonding) to the Si side to facilitate handling of the thin wafer. Finally, a thin spacer was sputtered and patterned with microchannels and trenches to guide the nanoflow with the samples. 
This approach considerably reduces the common problem of window bulging, improving the SNR, contrast, and spatial resolution, while substantially increasing the mechanical stability of the windows and allowing a larger viewing area. These developments lead to a wider range of applications of liquid TEM, expanding the spectrum of possible experiments in the field.
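A back-of-envelope estimate shows why full FEM simulation of the bulging is needed. The sketch below applies small-deflection (Kirchhoff) plate theory to a clamped square window under uniform pressure; all geometric and material values are illustrative assumptions, not the authors' design parameters:

```python
# Illustrative values (assumed, not from the paper):
E = 250e9      # Young's modulus of Si3N4 [Pa], typical literature value
nu = 0.23      # Poisson's ratio of Si3N4, typical literature value
t = 50e-9      # membrane thickness [m]
a = 50e-6      # window side length [m]
p = 101325.0   # pressure difference: ~1 atm liquid vs. TEM vacuum [Pa]

# Flexural rigidity and clamped-square-plate center deflection,
# w_max = alpha * p * a^4 / D with alpha ~ 0.00126 (Timoshenko).
D = E * t**3 / (12 * (1 - nu**2))
w_max = 0.00126 * p * a**4 / D

print(f"flexural rigidity D = {D:.3e} N*m")
print(f"plate-theory deflection = {w_max * 1e6:.1f} um")
# The predicted deflection vastly exceeds the 50 nm thickness, i.e. the
# window is deep in the large-deflection (membrane) regime where linear
# plate theory is invalid -- hence the FEM study of bulging and von Mises
# stress, and the stiffening microstructure proposed here.
```

The point of the estimate is qualitative: deflections far larger than the membrane thickness invalidate the linear theory and motivate both the FEM analysis and the reinforcing microstructure.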

Keywords: liquid cell, liquid transmission electron microscopy, nanofluidics, nanofluidic cell, thin films

Procedia PDF Downloads 255
1325 Risk Factors Affecting Construction Project Cost in Oman

Authors: Omar Amoudi, Latifa Al Brashdi

Abstract:

Construction projects are always subject to risks and uncertainties due to their unique and dynamic nature, outdoor work environment, the wide range of skills employed, and the various parties involved, in addition to the state of the construction business environment at large. Altogether, these risks and uncertainties affect project objectives and lead to cost overruns, delays, and poor quality. Construction projects in Oman often experience cost overruns and delays. Managing these risks and reducing their impact on construction cost requires first identifying them and then analyzing their severity with respect to project cost; this in turn assists construction managers in tackling them. This paper aims to investigate the main risk factors that affect construction project cost in the Sultanate of Oman. To achieve this aim, a literature review was carried out, from which thirty-three risk factors affecting construction cost were identified. A questionnaire survey was then designed and distributed among construction professionals (clients, contractors, and consultants) to obtain their opinions on the probability of occurrence of each risk factor and its possible impact on construction project cost. The collected data were analyzed in several ways, including qualitative assessment. The severity of each risk factor was obtained by multiplying its probability of occurrence by its impact. The findings of this study reveal that the most significant risk factors with a high severity impact on construction project cost are: Change of Oil Price; Delay of Materials and Equipment Delivery; Changes in Laws and Regulations; Improper Budgeting and Contingencies; Lack of Skilled Workforce and Personnel; Delays Caused by Contractor; Delays of Owner Payments; Delays Caused by Client; and Funding Risk. 
The results can be used as a basis for construction managers to make informed decisions and produce risk response procedures and strategies to tackle these risks and reduce their negative impacts on construction project cost.
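The severity computation described above (severity = probability of occurrence × impact) can be sketched as follows; the factor names follow the abstract, but the 1-5 scores are hypothetical placeholders for survey-derived values:

```python
# Hypothetical (probability, impact) scores on 1-5 scales for a few of
# the factors named in the abstract; real values come from the survey.
risk_factors = {
    "Change of Oil Price":                 (4, 5),
    "Delay of Materials and Equipment":    (4, 4),
    "Changes in Laws and Regulations":     (3, 5),
    "Lack of Skilled Workforce":           (4, 3),
    "Delays of Owner Payments":            (3, 4),
}

# Severity of each factor = probability of occurrence x impact.
severity = {name: p * i for name, (p, i) in risk_factors.items()}

# Rank factors from most to least severe.
for name, s in sorted(severity.items(), key=lambda kv: kv[1], reverse=True):
    print(f"{s:>2}  {name}")
```

With the actual survey scores substituted in, this ranking is what identifies the high-severity factors listed in the findings.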

Keywords: construction cost, construction projects, Oman, risk factors, risk management

Procedia PDF Downloads 345
1324 Arginase Enzyme Activity in Human Serum as a Marker of Cognitive Function: The Role of Inositol in Combination with Arginine Silicate

Authors: Katie Emerson, Sara Perez-Ojalvo, Jim Komorowski, Danielle Greenberg

Abstract:

The purpose of this study was to evaluate arginase activity levels in response to combinations of an inositol-stabilized arginine silicate (ASI; Nitrosigine®), L-arginine, and inositol. Arginine acts as a vasodilator that promotes increased blood flow, resulting in enhanced delivery of oxygen and nutrients to the brain and other tissues. ASI alone has been shown to improve performance on cognitive tasks. Arginase, found in human serum, catalyzes the conversion of arginine to ornithine and urea, completing the last step in the urea cycle. Decreasing arginase levels preserves arginine and results in increased nitric oxide production. This study aimed to determine the most effective combination of ASI, L-arginine, and inositol for minimizing arginase levels and therefore maximizing ASI's effect on cognition. Serum was taken from untreated healthy donors by separation from clotted factors. Arginase activity of serum in the presence or absence of test products was determined (QuantiChrom™, DARG-100, Bioassay Systems, Hayward, CA). The remaining ultrafiltered serum units were harvested and used as the source of the arginase enzyme. ASI alone or combined with varied levels of inositol was tested as follows: ASI + inositol at 0.25 g, 0.5 g, 0.75 g, or 1.00 g. L-arginine was also tested as a positive control. All tests elicited changes in arginase activity, demonstrating the efficacy of the method used. Adding L-arginine to serum from untreated subjects, with or without inositol, had only a mild effect. Adding inositol at all levels reduced arginase activity. Adding 0.5 g of inositol to the standardized amount of ASI led to the lowest arginase activity compared with the 0.25 g, 0.75 g, or 1.00 g doses of inositol or with L-arginine alone. The outcome of this study demonstrates an interaction between inositol and ASI in modulating the activity of the enzyme arginase. 
We found that neither the maximum nor the minimum amount of inositol tested in this study led to maximal arginase inhibition. Since inhibition of arginase activity is desirable for product formulations seeking to maintain arginine levels, the intermediate 0.5 g dose proved the most effective amount of inositol. Subsequent studies suggest this moderate level of inositol in combination with ASI leads to cognitive improvements, including in reaction time, executive function, and concentration.

Keywords: arginine, inositol, arginase, cognitive benefits

Procedia PDF Downloads 112