Search results for: phasor measurement unit
165 Superparamagnetic Sensor with Lateral Flow Immunoassays as Platforms for Biomarker Quantification
Authors: M. Salvador, J. C. Martinez-Garcia, A. Moyano, M. C. Blanco-Lopez, M. Rivas
Abstract:
Biosensors play a crucial role in the detection of molecules nowadays owing to their user-friendliness, high selectivity, real-time analysis, and suitability for in-situ applications. Among them, Lateral Flow Immunoassays (LFIAs) stand out as a point-of-care bioassay technology with outstanding characteristics such as affordability, portability, and simplicity. They have been widely used for the detection of a vast range of biomarkers, which include not only proteins but also nucleic acids and even whole cells. Although the LFIA has traditionally been a positive/negative test, tremendous efforts are being made to add quantification capability to the method through the combination of suitable labels and a proper sensor. One of the most successful approaches involves the use of magnetic sensors for the detection of magnetic labels. Bringing together the required characteristics mentioned before, our research group has developed a biosensor to detect biomolecules in which superparamagnetic nanoparticles (SPMNPs) together with LFIAs play the fundamental roles. SPMNPs are detected by their interaction with a high-frequency current flowing on a printed micro track: the presence of the SPMNPs provokes an instant and proportional variation of the impedance of this track, from which a quantitative and rapid measurement of the number of particles can be obtained. This detection scheme requires no external magnetic field, which reduces the device complexity. On the other hand, the major limitation of LFIAs is that they are only qualitative or semiquantitative when traditional gold or latex nanoparticles are used as color labels. Moreover, the need for constant ambient conditions to obtain reproducible results, the detection of nanoparticles exclusively on the surface of the membrane, and the short durability of the signal are drawbacks that can be advantageously overcome with the design of magnetically labeled LFIAs.
The approach followed was to coat the superparamagnetic iron oxide nanoparticles (SPIONs) with a specific monoclonal antibody that targets the protein under consideration, attached by chemical bonds. Then, a sandwich-type immunoassay was prepared by printing onto the nitrocellulose membrane strip a second antibody against a different epitope of the protein (test line) and an IgG antibody (control line). When the sample flows along the strip, the SPION-labeled proteins are immobilized at the test line, which provides the magnetic signal as described before. Preliminary results using this practical combination for the detection and quantification of the Prostate-Specific Antigen (PSA) show the validity and consistency of the technique in the clinical range, where a PSA level of 4.0 ng/mL is the established upper normal limit. Moreover, a limit of detection (LOD) of 0.25 ng/mL was calculated with a confidence factor of 3 according to the IUPAC Gold Book definition. The versatility of the method has also been proven with the detection of other biomolecules such as troponin I (a cardiac injury biomarker) and histamine.
Keywords: biosensor, lateral flow immunoassays, point-of-care devices, superparamagnetic nanoparticles
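The LOD criterion cited here (a factor of 3 times the standard deviation of the blank, divided by the calibration slope, per the IUPAC definition) can be sketched in a few lines. The blank readings and the slope below are hypothetical placeholders, not data from the study; they are merely chosen so the result lands near the reported 0.25 ng/mL.

```python
# Illustrative IUPAC-style limit-of-detection estimate: LOD = k * sigma_blank / slope,
# with k = 3 as in the abstract. All numerical values here are hypothetical.
import statistics

def lod(blank_signals, slope, k=3.0):
    """Return the limit of detection from replicate blank readings."""
    sigma = statistics.stdev(blank_signals)  # standard deviation of the blank
    return k * sigma / slope

# Hypothetical impedance-change readings (arbitrary units) for blank strips
blanks = [0.52, 0.48, 0.50, 0.47, 0.53]
# Hypothetical calibration slope: signal units per (ng/mL) of PSA
slope = 0.31
print(round(lod(blanks, slope), 3))  # ~0.25 ng/mL with these made-up numbers
```

With real data, the slope would come from a linear fit of signal versus PSA concentration over the clinical range.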
Procedia PDF Downloads 232
164 Design and Implementation of a Hardened Cryptographic Coprocessor with 128-bit RISC-V Core
Authors: Yashas Bedre Raghavendra, Pim Vullers
Abstract:
This study presents the design and implementation of an abstract cryptographic coprocessor, leveraging AMBA (Advanced Microcontroller Bus Architecture) protocols - APB (Advanced Peripheral Bus) and AHB (Advanced High-performance Bus) - to enable seamless integration with the main CPU (central processing unit) and enhance the coprocessor's algorithm flexibility. The primary objective is to create a versatile coprocessor that can execute various cryptographic algorithms, including ECC (elliptic-curve cryptography), RSA (Rivest-Shamir-Adleman), and AES (Advanced Encryption Standard), while providing a robust and secure solution for modern secure embedded systems. To achieve this goal, the coprocessor is equipped with a tightly coupled memory (TCM) for rapid data access during cryptographic operations. The TCM is placed within the coprocessor, ensuring quick retrieval of critical data and optimizing overall performance. Additionally, the program memory is positioned outside the coprocessor, allowing for easy updates and reconfiguration, which enhances adaptability to future algorithm implementations. Direct links are employed instead of DMA (direct memory access) for data transfer, ensuring faster communication and reducing complexity. The AMBA-based communication architecture facilitates seamless interaction between the coprocessor and the main CPU, streamlining data flow and ensuring efficient utilization of system resources. The abstract nature of the coprocessor allows for easy integration of new cryptographic algorithms in the future. As the security landscape continues to evolve, the coprocessor can adapt and incorporate emerging algorithms, making it a future-proof solution for cryptographic processing. Furthermore, this study explores the addition of custom instructions to the RISC-V ISE (Instruction Set Extension) to enhance cryptographic operations.
By incorporating custom instructions specifically tailored for cryptographic algorithms, the coprocessor achieves higher efficiency and reduced cycles per instruction (CPI) compared to traditional instruction sets. The adoption of the RISC-V 128-bit architecture significantly reduces the total number of instructions required for complex cryptographic tasks, leading to faster execution times and improved overall performance. Comparisons are made with 32-bit and 64-bit architectures, highlighting the advantages of the 128-bit architecture in terms of reduced instruction count and CPI. In conclusion, the abstract cryptographic coprocessor presented in this study offers significant advantages in terms of algorithm flexibility, security, and integration with the main CPU. By leveraging AMBA protocols and employing direct links for data transfer, the coprocessor achieves high-performance cryptographic operations without compromising system efficiency. With its TCM and external program memory, the coprocessor is capable of securely executing a wide range of cryptographic algorithms. This versatility and adaptability, coupled with the benefits of custom instructions and the 128-bit architecture, make it an invaluable asset for secure embedded systems, meeting the demands of modern cryptographic applications.
Keywords: abstract cryptographic coprocessor, AMBA protocols, ECC, RSA, AES, tightly coupled memory, secure embedded systems, RISC-V ISE, custom instructions, instruction count, cycles per instruction
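The instruction-count/CPI argument can be made concrete with the textbook execution-time model T = N_instr x CPI / f_clock. All instruction counts, CPI values, and the clock frequency below are hypothetical, chosen only to illustrate why a wider ISA with custom instructions finishes a fixed cryptographic workload sooner; they are not measurements from the study.

```python
# Toy execution-time comparison for the three RISC-V widths discussed above.
# T = N_instr * CPI / f (f in MHz gives T in microseconds). Values are hypothetical.
def exec_time_us(n_instr, cpi, f_mhz):
    return n_instr * cpi / f_mhz

profiles = {
    "rv32": (40_000, 1.6),   # wide operands split into many 32-bit instructions
    "rv64": (22_000, 1.4),
    "rv128": (12_000, 1.2),  # fewer instructions and lower CPI with custom ISE
}
for isa, (n, cpi) in profiles.items():
    print(isa, round(exec_time_us(n, cpi, 100.0), 1))
```

Under these made-up numbers the 128-bit profile finishes in roughly a quarter of the 32-bit time, mirroring the qualitative claim in the abstract.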
Procedia PDF Downloads 70
163 Electrochemical Activity of NiCo-GDC Cermet Anode for Solid Oxide Fuel Cells Operated in Methane
Authors: Kamolvara Sirisuksakulchai, Soamwadee Chaianansutcharit, Kazunori Sato
Abstract:
Solid Oxide Fuel Cells (SOFCs) have been considered one of the most efficient large-unit power generators for household and industrial applications. The efficiency of the cell depends mainly on the electrochemical reactions at the anode. The development of anode materials has been intensely studied to achieve higher kinetic rates of redox reactions and lower internal resistance. Recent studies have introduced an efficient cermet (ceramic-metallic) material for its ability in fuel oxidation and oxide conduction. This could expand the reactive site, also known as the triple-phase boundary (TPB), thus increasing the overall performance. In this study, a bimetallic catalyst Ni₀.₇₅Co₀.₂₅Oₓ was combined with Gd₀.₁Ce₀.₉O₁.₉₅ (GDC) to be used as a cermet anode (NiCo-GDC) for an anode-supported type SOFC. The synthesis of Ni₀.₇₅Co₀.₂₅Oₓ was carried out by ball milling NiO and Co₃O₄ powders in ethanol, followed by calcination at 1000 °C. The Gd₀.₁Ce₀.₉O₁.₉₅ was prepared by a urea co-precipitation method: precursors of Gd(NO₃)₃·6H₂O and Ce(NO₃)₃·6H₂O were dissolved in distilled water with the addition of urea and subsequently heated. The heated mixture product was filtered and rinsed thoroughly, then dried and calcined at 800 °C and 1500 °C, respectively. The two powders were combined, followed by pelletization and sintering at 1100 °C, to form an anode support layer. An electrolyte layer and a cathode layer were then fabricated. The electrochemical performance in H₂ was measured from 800 °C to 600 °C, while that in CH₄ was measured from 750 °C to 600 °C. The maximum power density at 750 °C in H₂ was 13% higher than in CH₄. The difference in performance was due to higher polarization resistances, as confirmed by the impedance spectra. According to the standard enthalpies, the dissociation energy of the C-H bonds in CH₄ is slightly higher than that of the H-H bond in H₂, and the dissociation of CH₄ could be the cause of resistance within the anode material.
The results at lower temperatures showed a descending trend of power density, consistent with the increased polarization resistance; this was due to decreasing conductivity as the temperature decreased. The long-term stability was measured at 750 °C in CH₄, monitoring at 12-hour intervals. The maximum power density tended to increase gradually with time while the resistances were maintained. This suggests enhanced stability from charge transfer activities in doped ceria due to the Ce⁴⁺ ↔ Ce³⁺ transition at low oxygen partial pressure and high temperature. However, the power density started to drop after 60 h, and the cell potential also dropped from 0.3249 V to 0.2850 V. These phenomena were confirmed by shifted impedance spectra indicating a higher ohmic resistance. Observation by FESEM and EDX mapping suggests degradation due to mass transport of ions in the electrolyte, while the anode microstructure was still maintained. In summary, the electrochemical tests and the 60 h stability test were achieved by the NiCo-GDC cermet anode. Coke deposition was not detected after operation in CH₄, which confirms the superior properties of the bimetallic cermet anode over typical Ni-GDC.
Keywords: bimetallic catalyst, ceria-based SOFCs, methane oxidation, solid oxide fuel cell
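The temperature trend described above (lower ionic conductivity, and hence lower power density, at lower temperature) is commonly captured by an Arrhenius law, sigma(T) = sigma0 * exp(-Ea/RT). The sketch below extracts an activation energy from two conductivity points; the values are illustrative placeholders in a GDC-like range, not the authors' data.

```python
# Arrhenius sketch: activation energy Ea from two (T, sigma) points,
# Ea = -R * d(ln sigma)/d(1/T). Conductivity values are hypothetical.
import math

R = 8.314  # gas constant, J/(mol*K)

def activation_energy(T1, s1, T2, s2):
    """Ea in kJ/mol from two (temperature in K, conductivity in S/cm) points."""
    slope = (math.log(s1) - math.log(s2)) / (1 / T1 - 1 / T2)
    return -slope * R / 1000.0

# Hypothetical conductivities at 1023 K (750 C) and 873 K (600 C)
print(round(activation_energy(1023.0, 5.0e-2, 873.0, 1.0e-2), 1))  # kJ/mol
```

In practice the activation energy would be fitted over the full ln(sigma) versus 1/T series rather than from two points.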
Procedia PDF Downloads 154
162 Electrochemical Properties of Li-Ion Batteries Anode Material: Li₃.₈Cu₀.₁Ni₀.₁Ti₅O₁₂
Authors: D. Olszewska, J. Niewiedzial
Abstract:
In some types of Li-ion batteries, carbon in the form of graphite is used. Carbon materials, in particular graphite, have very good electrochemical properties; unfortunately, they increase in volume during charge/discharge cycles, which may even lead to an explosion of the cell. This cell component may be replaced by a composite material consisting of lithium-titanium oxide Li₄Ti₅O₁₂ (LTO) modified with copper and nickel ions and carbon derived from sucrose. In this way, the conductivity of the material can be improved. LTO is appropriate only for applications that do not require high energy density because of its high operating voltage (ca. 1.5 V vs. Li/Li⁺). The specific capacity of Li₄Ti₅O₁₂ is high enough for use in Li-ion batteries (theoretical capacity 175 mAh·g⁻¹), although it is lower than that of graphite anodes. Materials based on Li₄Ti₅O₁₂ do not change their volume during charging/discharging cycles; however, LTO has low conductivity. Another positive aspect of using sucrose in the carbon composite material is the elimination of the carbon black additive from the battery anode. Therefore, the proposed materials contribute significantly to environmental protection and the safety of selected lithium cells. New anode materials with the target composition Li₃.₈Cu₀.₁Ni₀.₁Ti₅O₁₂ have been prepared by solid-state synthesis using three routes: i) a stoichiometric composition of Li₂CO₃, TiO₂, CuO, NiO (A-Li₃.₈Cu₀.₁Ni₀.₁Ti₅O₁₂); ii) a stoichiometric composition of Li₂CO₃, TiO₂, Cu(NO₃)₂, Ni(NO₃)₂ (B-Li₃.₈Cu₀.₁Ni₀.₁Ti₅O₁₂); and iii) a stoichiometric composition of Li₂CO₃, TiO₂, CuO, NiO calcined with 10% sucrose (Li₃.₈Cu₀.₁Ni₀.₁Ti₅O₁₂-C). The structure of the materials was studied by X-ray diffraction (XRD). The electrochemical properties were measured using an appropriately prepared Li|Li⁺|Li₃.₈Cu₀.₁Ni₀.₁Ti₅O₁₂ cell for cyclic voltammetry and discharge/charge measurements.
The cells were periodically charged and discharged in the voltage range from 1.3 to 2.0 V, applying a constant charge/discharge current in order to determine the specific capacity of each electrode. Measurements at various values of the charge/discharge current (from C/10 to 5C) were carried out. The cyclic voltammetry investigation was carried out by applying to the cells a voltage changing linearly over time at a rate of 0.1 mV·s⁻¹ (in the range from 2.0 to 1.3 V and from 1.3 to 2.0 V). XRD analysis shows that composite powders were obtained containing, in addition to the main phase, 4.78% and 4% TiO₂ in A-Li₃.₈Cu₀.₁Ni₀.₁Ti₅O₁₂ and B-Li₃.₈Cu₀.₁Ni₀.₁Ti₅O₁₂, respectively. However, the Li₃.₈Cu₀.₁Ni₀.₁Ti₅O₁₂-C material is three-phase: 63.84% of the main phase, 17.49% TiO₂, and 18.67% Li₂TiO₃. Voltammograms of electrodes containing materials A-Li₃.₈Cu₀.₁Ni₀.₁Ti₅O₁₂ and B-Li₃.₈Cu₀.₁Ni₀.₁Ti₅O₁₂ are correct and repeatable. The cathodic peak occurs for both samples at a potential of approx. 1.52±0.01 V relative to a lithium electrode, while the anodic peak occurs at approx. 1.65±0.05 V relative to a lithium electrode. The voltammogram of Li₃.₈Cu₀.₁Ni₀.₁Ti₅O₁₂-C (especially for the first measurement cycle) is not correct: there are large variations in the specific current values, which are not characteristic of LTO materials. From the point of view of safety and environmentally friendly production of Li-ion cells, eliminating soot and applying Li₃.₈Cu₀.₁Ni₀.₁Ti₅O₁₂-C as the active material of an anode in lithium-ion batteries seems to be a good alternative to currently used materials.
Keywords: anode, Li-ion batteries, Li₄Ti₅O₁₂, spinel
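For context on the constant-current charge/discharge measurements, specific capacity is simply Q = I * t / m. The current, discharge time, and active mass below are hypothetical values chosen to land near LTO's theoretical 175 mAh·g⁻¹, not results from the study.

```python
# Specific capacity from a galvanostatic (constant-current) discharge:
# Q [mAh/g] = I [mA] * t [h] / m [g]. All numbers are hypothetical.
def specific_capacity_mAh_g(current_mA, time_h, active_mass_g):
    return current_mA * time_h / active_mass_g

# e.g. a ~C/10 discharge: 0.16 mA for 10 h on 10 mg of active material
print(round(specific_capacity_mAh_g(0.16, 10.0, 0.010), 1))  # 160.0 mAh/g
```

At higher rates (up to 5C, as in the abstract) the measured capacity would drop as polarization grows.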
Procedia PDF Downloads 150
161 Approaches to Inducing Obsessional Stress in Obsessive-Compulsive Disorder (OCD): An Empirical Study with Patients Undergoing Transcranial Magnetic Stimulation (TMS) Therapy
Authors: Lucia Liu, Matthew Koziol
Abstract:
Obsessive-compulsive disorder (OCD), a long-lasting anxiety disorder involving recurrent, intrusive thoughts, affects over 2 million adults in the United States. Transcranial magnetic stimulation (TMS) stands out as a noninvasive, cutting-edge therapy that has been shown to reduce symptoms in patients with treatment-resistant OCD. The Food and Drug Administration (FDA)-approved protocol pairs TMS sessions with individualized symptom provocation, aiming to improve the susceptibility of brain circuits to stimulation. However, little standardization or guidance exists on how to conduct symptom provocation and which methods are most effective. This study aims to compare the effect of internal versus external techniques for inducing obsessional stress in a clinical setting during TMS therapy. Two symptom provocation methods, (i) asking patients thought-provoking questions about their obsessions (internal) and (ii) requesting patients to perform obsession-related tasks (external), were employed in a crossover design with repeated measurements. Thirty-six treatments of NeuroStar TMS were administered to each of two patients over 8 weeks in an outpatient clinic. Patient One received 18 sessions of internal provocation followed by 18 sessions of external provocation, while Patient Two received 18 sessions of external provocation followed by 18 sessions of internal provocation. The primary outcome was the level of self-reported obsessional stress on a visual analog scale from 1 to 10. The secondary outcome was self-reported OCD severity, collected biweekly on a four-level Likert scale (1 to 4) of bad, fair, good, and excellent. Outcomes were compared and tested between provocation arms through repeated measures ANOVA, accounting for intra-patient correlations. Ages were 42 for Patient One (male, White) and 57 for Patient Two (male, White). Both patients had similar moderate symptoms at baseline, as determined through the Yale-Brown Obsessive Compulsive Scale (YBOCS).
When comparing obsessional stress induced across the two arms, the mean (SD) was 6.03 (1.18) for internal and 4.01 (1.28) for external strategies (P=0.0019); ranges were 3 to 8 for internal and 2 to 8 for external strategies. Internal provocation yielded 5 (31.25%) bad, 6 (33.33%) fair, 3 (18.75%) good, and 2 (12.5%) excellent responses for OCD status, while external provocation yielded 5 (31.25%) bad, 9 (56.25%) fair, 1 (6.25%) good, and 1 (6.25%) excellent responses (P=0.58). Internal symptom provocation tactics had a significantly stronger impact on inducing obsessional stress and led to numerically better OCD status, although the latter difference was not statistically significant. This could be attributed to the fact that answering questions may prompt patients to reflect more on their lived experiences and struggles with OCD. In the future, clinical trials with larger sample sizes are warranted to validate this finding. The results support increased integration of internal methods into structured provocation protocols, potentially reducing the time required for provocation and achieving greater treatment response to TMS.
Keywords: obsessive-compulsive disorder, transcranial magnetic stimulation, mental health, symptom provocation
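The arm comparison above can be sketched numerically. The study used repeated-measures ANOVA; as a simplified stand-in, the snippet below computes a Welch two-sample t statistic over session-level stress scores. The per-session scores are hypothetical values fabricated to sit near the reported means of roughly 6.0 (internal) and 4.0 (external); they are not the study's data.

```python
# Simplified stand-in for the reported arm comparison: Welch t statistic
# over hypothetical session-level obsessional-stress scores (VAS 1-10).
import statistics

def t_statistic(a, b):
    ma, mb = statistics.mean(a), statistics.mean(b)
    va, vb = statistics.variance(a), statistics.variance(b)
    se = (va / len(a) + vb / len(b)) ** 0.5  # Welch standard error
    return (ma - mb) / se

internal = [6, 5, 7, 6, 8, 5, 6, 7, 4, 6, 7, 5, 6, 8, 6, 5, 7, 6]
external = [4, 3, 5, 4, 6, 3, 4, 5, 2, 4, 5, 3, 4, 6, 4, 3, 5, 4]
print(round(t_statistic(internal, external), 2))
```

A real reanalysis would model the crossover design and intra-patient correlation, as the repeated-measures ANOVA in the study does.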
Procedia PDF Downloads 57
160 Propagation of Ultra-High Energy Cosmic Rays through Extragalactic Magnetic Fields: An Exploratory Study of the Distance Amplification from Rectilinear Propagation
Authors: Rubens P. Costa, Marcelo A. Leigui de Oliveira
Abstract:
The comprehension of the features in the energy spectra, the chemical compositions, and the origins of Ultra-High Energy Cosmic Rays (UHECRs) - mainly atomic nuclei with energies above ~1.0 EeV (exa-electron volts) - is intrinsically linked to the problem of determining the magnitude of their deflections in cosmic magnetic fields on cosmological scales. In addition, as they propagate from the source to the observer, modifications are expected in their original energy spectra, anisotropy, and chemical compositions due to interactions with low-energy photons and matter. This means that any consistent interpretation of the nature and origin of UHECRs has to include detailed knowledge of their propagation in a three-dimensional environment, taking into account the magnetic deflections and energy losses. The parameter space for magnetic fields in the universe is very large because the field strengths, and especially their orientations, carry large uncertainties. In particular, the strength and morphology of the Extragalactic Magnetic Fields (EGMFs) remain largely unknown because of the intrinsic difficulty of observing them. Monte Carlo simulation of charged particles traveling through a simulated magnetized universe is the straightforward way to study the influence of extragalactic magnetic fields on UHECR propagation. However, this brings two major difficulties: accurate numerical modeling of charged-particle diffusion in magnetic fields, and accurate numerical modeling of the magnetized universe. Since magnetic fields do not cause energy losses, it is important to ensure that the particle tracking method conserves the particle's total energy and that energy changes result only from interactions with background photons. Hence, special attention should be paid to computational effects.
Additionally, because of the number of particles necessary to obtain a relevant statistical sample, the particle tracking method must be computationally efficient. In this work, we present an analysis of the propagation of ultra-high energy charged particles in the intergalactic medium. The EGMFs are considered to be coherent within cells of 1 Mpc (megaparsec) diameter, wherein they have uniform intensities of 1 nG (nanogauss). Moreover, each cell has its field orientation randomly chosen, and a border region is defined such that, at distances beyond 95% of the cell radius from the cell center, smooth transitions are applied in order to avoid discontinuities. The smooth transitions are simulated by weighting the magnetic field orientation by the particle's distance to the two nearby cells. The energy losses have been treated in the continuous approximation, parameterizing the mean energy loss per unit path length by the energy loss length. We show, for a particle with the typical energy of interest, the performance of the integration method in terms of the relative error of the Larmor radius (without energy losses) and the relative error of the energy. Additionally, we plot the distance amplification from rectilinear propagation as a function of the traveled distance and of the particle's magnetic rigidity (without energy losses) or energy (with energy losses), in order to study the influence of the particle species on these calculations. The results clearly show when it is necessary to use a full three-dimensional simulation.
Keywords: cosmic rays propagation, extragalactic magnetic fields, magnetic deflections, ultra-high energy
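A back-of-the-envelope version of this setup can illustrate the "distance amplification" idea: a charged particle crossing 1 Mpc cells of 1 nG field with random orientations accumulates a random-walk deflection, stretching its path beyond the rectilinear distance. The sketch below uses the ultrarelativistic Larmor radius, r_L[Mpc] ≈ 1.08 E[EeV] / (Z B[nG]), and a two-dimensional small-angle random walk; it is an illustrative toy, not the full three-dimensional Monte Carlo with smooth cell borders and energy losses described in the abstract.

```python
# Toy 2-D random-walk model of UHECR deflection in cell-like random fields.
# Assumptions: 1 Mpc cells, 1 nG uniform field per cell, small per-cell bend
# l_cell / r_L with a random sign. Purely illustrative.
import math
import random

def larmor_radius_mpc(energy_eev, charge_z, b_ng=1.0):
    # r_L ~ 1.08 Mpc per EeV per nG for a fully relativistic particle
    return 1.08 * energy_eev / (charge_z * b_ng)

def path_amplification(energy_eev, charge_z, n_cells=100, l_cell=1.0, seed=1):
    rng = random.Random(seed)
    r_l = larmor_radius_mpc(energy_eev, charge_z)
    theta = 0.0      # accumulated deflection from the initial direction
    straight = 0.0   # progress projected on the initial direction
    for _ in range(n_cells):
        straight += l_cell * math.cos(theta)
        theta += rng.choice((-1.0, 1.0)) * l_cell / r_l  # random per-cell bend
    travelled = n_cells * l_cell
    return travelled / straight  # > 1 when the trajectory is bent

print(larmor_radius_mpc(10.0, 1))                 # ~10.8 Mpc: 10 EeV proton, 1 nG
print(round(path_amplification(10.0, 1), 3))      # amplification after 100 Mpc
```

Lower rigidity (lower energy or higher charge Z) shrinks r_L and inflates the amplification, which is the species dependence the study maps out.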
Procedia PDF Downloads 127
159 Solid Polymer Electrolyte Membranes Based on Siloxane Matrix
Authors: Natia Jalagonia, Tinatin Kuchukhidze
Abstract:
Polymer electrolytes (PE) play an important part in electrochemical devices such as batteries and fuel cells. To achieve optimal performance, the PE must maintain high ionic conductivity and mechanical stability at both high and low relative humidity. The polymer electrolyte also needs excellent chemical stability for longevity and robustness. According to the prevailing theory, ionic conduction in polymer electrolytes is facilitated by the large-scale segmental motion of the polymer backbone and primarily occurs in the amorphous regions of the polymer electrolyte. Crystallinity restricts polymer backbone segmental motion and significantly reduces conductivity. Consequently, polymer electrolytes with high conductivity at room temperature have been sought through polymers that have highly flexible backbones and a largely amorphous morphology. Interest in polymer electrolytes has also been increased by potential applications of solid polymer electrolytes in high energy density solid-state batteries, gas sensors, and electrochromic windows. A conductivity of 10⁻³ S/cm is commonly regarded as the necessary minimum value for practical applications in batteries. At present, polyethylene oxide (PEO)-based systems are the most thoroughly investigated, reaching room-temperature conductivities of 10⁻⁷ S/cm in some cross-linked salt-in-polymer systems based on amorphous PEO-polypropylene oxide copolymers. It is widely accepted that amorphous polymers with low glass transition temperatures Tg and high segmental mobility are important prerequisites for high ionic conductivities. Another necessary condition for high ionic conductivity is high salt solubility in the polymer, which is most often achieved by donors such as ether oxygen or imide groups on the main chain or on the side groups of the PE.
It is also well established that lithium ion coordination takes place predominantly in the amorphous domain, and that the segmental mobility of the polymer is an important factor in determining the ionic mobility. Great attention has been paid to PEO-based amorphous electrolytes obtained by the synthesis of comb-like polymers, attaching short ethylene oxide unit sequences to an existing amorphous polymer backbone. The aim of the present work is to obtain solid polymer electrolyte membranes using polymethylhydrosiloxane (PMHS) as a matrix. For this purpose, the hydrosilylation reactions of α,ω-bis(trimethylsiloxy)methylhydrosiloxane with allyl triethylene-glycol monomethyl ether and vinyltriethoxysilane, at a 1:28:7 ratio of the initial compounds, have been studied in the presence of Karstedt's catalyst, hexachloroplatinic acid (0.1 M solution in THF), or a platinum-on-carbon catalyst, in 50% solution in anhydrous toluene. The synthesized oligomers are vitreous liquid products, well soluble in organic solvents, with specific viscosity ηsp ≈ 0.05 - 0.06. The synthesized oligomers were analysed by FTIR and ¹H, ¹³C, ²⁹Si NMR spectroscopy. The synthesized polysiloxanes were investigated by wide-angle X-ray, gel-permeation chromatography, and DSC analyses. Solid polymer electrolyte membranes have been obtained via sol-gel processing of the polymer systems doped with lithium trifluoromethanesulfonate (triflate) or lithium bis(trifluoromethylsulfonyl)imide. The dependence of the ionic conductivity on temperature and salt concentration was investigated, and the activation energies of conductivity for all obtained compounds were calculated.
Keywords: synthesis, PMHS, membrane, electrolyte
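The conductivity figures discussed in this abstract come from impedance measurements converted through the membrane geometry, sigma = d / (R * A). The thickness, area, and bulk resistance below are hypothetical values, included only to show the conversion.

```python
# Ionic conductivity of a membrane from its bulk resistance:
# sigma [S/cm] = thickness [cm] / (R [ohm] * area [cm^2]). Values are hypothetical.
def conductivity_s_cm(thickness_cm, area_cm2, resistance_ohm):
    return thickness_cm / (resistance_ohm * area_cm2)

# e.g. a 100 um thick, 1 cm^2 membrane with a 10 kOhm bulk resistance
print("{:.1e}".format(conductivity_s_cm(0.01, 1.0, 1.0e4)))  # 1.0e-06 S/cm
```

Repeating the measurement across temperatures and salt concentrations yields the dependence (and the activation energies) reported in the study.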
Procedia PDF Downloads 257
158 Application of Laser-Induced Breakdown Spectroscopy for the Evaluation of Concrete on the Construction Site and in the Laboratory
Authors: Gerd Wilsch, Tobias Guenther, Tobias Voelker
Abstract:
In view of the ageing of vital infrastructure facilities, a reliable condition assessment of concrete structures is of increasing interest to asset owners for planning timely and appropriate maintenance and repair interventions. For concrete structures, reinforcement corrosion induced by penetrating chlorides is the dominant deterioration mechanism affecting the serviceability and, eventually, the structural performance. The determination of the quantitative chloride ingress is required not only to provide valuable information on the present condition of a structure; the data obtained can also be used for the prediction of its future development and the associated risks. At present, wet chemical analysis of ground concrete samples in a laboratory is the most common test procedure for the determination of the chloride content. As the chloride content is expressed relative to the mass of the binder, the analysis should involve determination of both the amount of binder and the amount of chloride contained in a concrete sample. This procedure is laborious, time-consuming, and costly, and the chloride profile obtained is based on depth intervals of 10 mm. Laser-induced breakdown spectroscopy (LIBS) is an economically viable alternative providing chloride contents at depth intervals of 1 mm or less. It provides two-dimensional maps of quantitative element distributions and can locate spots of higher concentration, such as in a crack. The results are correlated directly to the mass of the binder, and the method can be applied on-site to deliver instantaneous results for the evaluation of the structure. Examples of the application of the method in the laboratory for the investigation of diffusion and migration of chlorides, sulfates, and alkalis are presented, along with an example of the visualization of Li transport in concrete. These examples show the potential of the method for fast, reliable, and automated two-dimensional investigation of transport processes.
Due to the better spatial resolution, more accurate input parameters for model calculations are determined. By the simultaneous detection of elements such as carbon, chlorine, sodium, and potassium, the mutual influence of the different processes can be determined in only one measurement. Furthermore, the application of a mobile LIBS system in a parking garage is demonstrated. It uses a diode-pumped low-energy laser (3 mJ, 1.5 ns, 100 Hz) and a compact NIR spectrometer. A portable scanner allows two-dimensional quantitative element mapping. Results show the quantitative chloride analysis on wall and floor surfaces. To determine the 2-D distribution of harmful elements (Cl, C), concrete cores were drilled, split, and analyzed directly on-site. The results obtained were compared and verified with laboratory measurements. The results presented show that the LIBS method is a valuable addition to the standard procedure, the wet chemical analysis of ground concrete samples. Currently, work is underway to develop a technical code of practice for the application of the method for the determination of chloride concentration in concrete.
Keywords: chemical analysis, concrete, LIBS, spectroscopy
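Turning a LIBS line scan into the millimetre-resolution chloride depth profile described above amounts to converting calibrated line intensities to concentration and averaging per depth bin. The intensities, calibration slope, and binning below are hypothetical placeholders for illustration, not the instrument's actual data pipeline.

```python
# Sketch of a LIBS chloride depth profile: intensities (assumed already
# normalized to the binder signal) are mapped to wt.% Cl via a hypothetical
# linear calibration and averaged per 1 mm depth bin.
def chloride_profile(intensities_by_depth, slope, intercept=0.0):
    """Map {depth_mm: [intensities]} to {depth_mm: mean wt.% Cl}."""
    return {depth: slope * sum(vals) / len(vals) + intercept
            for depth, vals in sorted(intensities_by_depth.items())}

scan = {0: [2.0, 2.0], 1: [1.0, 1.0], 2: [0.5, 0.5]}  # hypothetical readings
print(chloride_profile(scan, slope=0.25))  # chloride decreasing with depth
```

The same per-pixel conversion, kept unbinned, gives the two-dimensional element maps used to locate cracks and hot spots.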
Procedia PDF Downloads 105
157 The Prevalence of Soil Transmitted Helminths among Newly Arrived Expatriate Labors in Jeddah, Saudi Arabia
Authors: Mohammad Al-Refai, Majed Wakid
Abstract:
Introduction: Soil-transmitted diseases are caused by intestinal worms that are transmitted via various routes into the human body, resulting in various clinical manifestations. The intestinal worms causing these infections are known as soil-transmitted helminths (STH), including hookworms, Ascaris lumbricoides (A. lumbricoides), Trichuris trichiura (T. trichiura), and Strongyloides stercoralis (S. stercoralis). Objectives: The aim of this study was to investigate the prevalence of STH among newly arrived expatriate labors in Jeddah city, Saudi Arabia, using three different techniques (direct smears, sedimentation concentration, and real-time PCR). Methods: A total of 188 stool specimens were collected and investigated at the parasitology laboratory in the Special Infectious Agents Unit at King Fahd Medical Research Center, King Abdulaziz University in Jeddah, Saudi Arabia. Microscopic examination of wet mount preparations using normal saline and Lugol's iodine was carried out, followed by the formalin-ether sedimentation method. In addition, real-time PCR was used as a molecular tool to detect several STH and for hookworm speciation. Results: Out of the 188 stool specimens analyzed, several parasite species were detected in addition to STH. 9 samples (4.79%) were positive for Entamoeba coli, 7 samples (3.72%) for T. trichiura, 6 samples (3.19%) for Necator americanus, 4 samples (2.13%) for S. stercoralis, 4 samples (2.13%) for A. lumbricoides, 4 samples (2.13%) for E. histolytica, 3 samples (1.60%) for Blastocystis hominis, 2 samples (1.06%) for Ancylostoma duodenale, 2 samples (1.06%) for Giardia lamblia, 1 sample (0.53%) for Iodamoeba buetschlii, 1 sample (0.53%) for Hymenolepis nana, 1 sample (0.53%) for Endolimax nana, and 1 sample (0.53%) for Heterophyes heterophyes. Out of the 35 infected cases, 26 had a single infection, 8 had double infections, and only one had a triple infection of different STH species and other intestinal parasites.
Higher rates of STH infection were detected among housemaids (11 cases), followed by drivers (7 cases), compared with other occupations. According to educational level, illiterate participants represented the majority of infected workers (12 cases). The majority of positive cases were workers from the Philippines. In the comparison between laboratory techniques, out of the 188 samples screened for STH, real-time PCR was able to detect parasite DNA in 19/188 samples, followed by the Ritchie sedimentation technique (18/188) and the direct wet smear (7/188). Conclusion: STH infections are a major public health issue for healthcare systems around the world. Communities must be educated on hygiene practices and the severity of such parasites to human health. Drivers and housemaids come into close contact with families, including children and the elderly; this may put family members at risk of developing serious complications related to STH, especially as the majority of workers were illiterate and lacked basic hygiene knowledge and practices. We recommend that the official authorities in Jeddah and across the Kingdom of Saudi Arabia revise the standard screening tests for newly arrived workers and enforce regular follow-up inspections to minimize the chance of STH spreading from expatriate workers to the public.
Keywords: expatriate labors, Jeddah, prevalence, soil transmitted helminths
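The headline figures above (35 infected of 188 screened) translate into a prevalence estimate, and a confidence interval conveys its precision. The snippet below applies a 95% Wilson score interval to those counts; the interval itself is a standard illustration, not a statistic reported by the study.

```python
# Prevalence with a 95% Wilson score confidence interval,
# applied to the abstract's counts: 35 positive of 188 screened.
import math

def wilson_ci(k, n, z=1.96):
    p = k / n
    denom = 1 + z * z / n
    centre = (p + z * z / (2 * n)) / denom
    half = z * math.sqrt(p * (1 - p) / n + z * z / (4 * n * n)) / denom
    return centre - half, centre + half

lo, hi = wilson_ci(35, 188)
print(round(35 / 188 * 100, 1), round(lo * 100, 1), round(hi * 100, 1))
```

The Wilson interval behaves better than the naive normal approximation when proportions are far from 0.5 or counts are small.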
Procedia PDF Downloads 149
156 Estimating Multidimensional Water Poverty Index in India: The Alkire Foster Approach
Authors: Rida Wanbha Nongbri, Sabuj Kumar Mandal
Abstract:
The Sustainable Development Goals (SDGs) for 2016-2030 were adopted as a successor to the Millennium Development Goals (MDGs), which focused on access to sustainable water and sanitation. For over a decade, water has been a significant subject explored in various facets of life. Our day-to-day life is significantly affected by water poverty at the socio-economic level. Reducing water poverty is an important policy challenge, particularly in emerging economies like India, owing to its population growth and huge variation in topography and climatic factors. To design appropriate water policies and assess their effectiveness, a proper measurement of water poverty is essential. Against this backdrop, this study uses the Alkire-Foster (AF) methodology to estimate a multidimensional water poverty index for India at the household level. The methodology captures several attributes to understand the complex issues related to households' water deprivation. The study employs two rounds of Indian Human Development Survey data (IHDS 2005 and 2012) and focuses on four dimensions of water poverty, namely water access, water quantity, water quality, and water capacity, with seven indicators capturing these four dimensions. To quantify water deprivation at the household level, the AF dual cut-off counting method is applied, and the Multidimensional Water Poverty Index (MWPI) is calculated as the product of the Headcount Ratio (incidence) and the average share of weighted deprivations (intensity). The results identify deprivation across all dimensions at the country level and show that a large proportion of households in India are deprived of quality water and suffer from poor water access in both the 2005 and 2012 survey rounds. A comparison between rural and urban households shows that a higher proportion of rural households are multidimensionally water poor compared to their urban counterparts.
Among the four dimensions of water poverty, water quality is found to be the most significant for both rural and urban households. In the 2005 round, almost 99.3% of households are water poor in at least one of the four dimensions, and among the water-poor households, the intensity of water poverty is 54.7%. These values do not change significantly in the 2012 round, but significant differences can be observed across the dimensions. States like Bihar, Tamil Nadu, and Andhra Pradesh rank highest in terms of MWPI, whereas Sikkim, Arunachal Pradesh, and Chandigarh rank lowest in the 2005 round. Similarly, in the 2012 round, Bihar, Uttar Pradesh, and Orissa rank highest in terms of MWPI, whereas Goa, Nagaland, and Arunachal Pradesh rank lowest. The policy implications of this study are multifaceted: policymakers can focus either on impoverished households with lower intensities of water poverty, to minimize the total number of water-poor households, or on households with high intensities of water poverty, to achieve an overall reduction in MWPI.
Keywords: Alkire-Foster (AF) methodology, deprivation, dual cut-off, multidimensional water poverty index (MWPI)
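The dual cut-off computation described above can be sketched in a few lines. This is a minimal illustration of the Alkire-Foster mechanics only: the indicator names, the four equal weights, and the poverty cut-off k below are assumptions for the example, not the study's actual indicator set.

```python
# Minimal sketch of the Alkire-Foster dual cut-off method:
# MWPI = H (headcount ratio) * A (average deprivation intensity among the poor).
# Indicator names, weights, and k are illustrative assumptions.

def af_mwpi(households, weights, k):
    """households: list of dicts mapping indicator -> 1 (deprived) or 0.
    weights: dict mapping indicator -> weight (weights sum to 1).
    k: second (poverty) cut-off on the weighted deprivation score."""
    scores = [sum(weights[i] * h[i] for i in weights) for h in households]
    poor = [s for s in scores if s >= k]       # dual cut-off: score >= k => poor
    if not poor:
        return 0.0, 0.0, 0.0
    H = len(poor) / len(households)            # incidence
    A = sum(poor) / len(poor)                  # intensity
    return H, A, H * A                         # MWPI = H * A

weights = {"access": 0.25, "quantity": 0.25, "quality": 0.25, "capacity": 0.25}
households = [
    {"access": 1, "quantity": 1, "quality": 1, "capacity": 0},
    {"access": 0, "quantity": 0, "quality": 1, "capacity": 0},
    {"access": 1, "quantity": 0, "quality": 1, "capacity": 1},
    {"access": 0, "quantity": 0, "quality": 0, "capacity": 0},
]
H, A, mwpi = af_mwpi(households, weights, k=0.5)
# Here two of four households have scores of 0.75 >= 0.5, so H = 0.5,
# A = 0.75, and MWPI = 0.375.
```

In the study itself, each household's deprivation vector comes from the seven IHDS indicators with their assigned weights; the sketch only shows how incidence and intensity combine.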
155 Closing the Gap: Efficient Voxelization with Equidistant Scanlines and Gap Detection
Authors: S. Delgado, C. Cerrada, R. S. Gómez
Abstract:
This research introduces an approach to voxelizing the surfaces of triangular meshes with efficiency and accuracy. Our method leverages parallel equidistant scan-lines and introduces a Gap Detection technique to address the limitations of existing approaches. We present a comprehensive study showcasing the method's effectiveness, scalability, and versatility in different scenarios. Voxelization is a fundamental process in computer graphics and simulations, playing a pivotal role in applications ranging from scientific visualization to virtual reality. Our algorithm focuses on enhancing the voxelization process, especially for complex models and high resolutions. One of the major challenges of voxelization on the Graphics Processing Unit (GPU) is the high cost of discovering the same voxels multiple times: these repeated voxels incur costly memory operations while contributing no useful information. Our scan-line-based method ensures that each voxel is detected exactly once when processing a triangle, enhancing performance without compromising the quality of the voxelization. The heart of our approach lies in the use of parallel, equidistant scan-lines to traverse the interiors of triangles. This minimizes redundant memory operations and avoids revisiting the same voxels, resulting in a significant performance boost. Moreover, our method's computational efficiency is complemented by its simplicity and portability: written as a single compute shader in the OpenGL Shading Language (GLSL), it is highly adaptable to various rendering pipelines and hardware configurations. To validate our method, we conducted extensive experiments on a diverse set of models from the Stanford repository. Our results demonstrate not only the algorithm's efficiency but also its ability to produce accurate, 26-tunnel-free voxelizations. The Gap Detection technique successfully identifies and addresses gaps, ensuring consistent and visually pleasing voxelized surfaces.
Furthermore, we introduce the Slope Consistency Value metric, quantifying the alignment of each triangle with its primary axis. This metric provides insights into the impact of triangle orientation on scan-line based voxelization methods. It also aids in understanding how the Gap Detection technique effectively improves results by targeting specific areas where simple scan-line-based methods might fail. Our research contributes to the field of voxelization by offering a robust and efficient approach that overcomes the limitations of existing methods. The Gap Detection technique fills a critical gap in the voxelization process. By addressing these gaps, our algorithm enhances the visual quality and accuracy of voxelized models, making it valuable for a wide range of applications. In conclusion, "Closing the Gap: Efficient Voxelization with Equidistant Scan-lines and Gap Detection" presents an effective solution to the challenges of voxelization. Our research combines computational efficiency, accuracy, and innovative techniques to elevate the quality of voxelized surfaces. With its adaptable nature and valuable innovations, this technique could have a positive influence on computer graphics and visualization.
Keywords: voxelization, GPU acceleration, computer graphics, compute shaders
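The scan-line traversal described above can be illustrated with a much-simplified, single-threaded sketch; the actual method is a GLSL compute shader, and all names and the sampling scheme here are illustrative assumptions. Notably, with coarse scan-line spacing a naive sampler can miss voxels between adjacent lines, which is precisely the failure mode the Gap Detection technique targets.

```python
import numpy as np

def voxelize_triangle(v0, v1, v2, voxel=1.0, step=0.5):
    """Illustrative CPU sketch of scan-line triangle voxelization.
    Sweeps equidistant lines from edge v0->v1 toward vertex v2 and samples
    points along each line, mapping every sample to a voxel index.
    `step` controls both scan-line spacing and sample spacing; if it is
    larger than the voxel size, gaps between scan-lines can appear."""
    v0, v1, v2 = (np.asarray(v, float) for v in (v0, v1, v2))
    voxels = set()
    # scan-line count chosen so spacing along the sweep is at most `step`
    n_lines = max(2, int(np.ceil(np.linalg.norm(v2 - v0) / step)) + 1)
    for i in range(n_lines):
        t = i / (n_lines - 1)
        a = v0 + t * (v2 - v0)          # scan-line start, on edge v0->v2
        b = v1 + t * (v2 - v1)          # scan-line end, on edge v1->v2
        n_samples = max(2, int(np.ceil(np.linalg.norm(b - a) / step)) + 1)
        for j in range(n_samples):
            p = a + (j / (n_samples - 1)) * (b - a)
            voxels.add(tuple(int(c) for c in np.floor(p / voxel)))
    return voxels

# right triangle in the z = 0 plane, unit voxels, quarter-voxel sampling
vox = voxelize_triangle((0, 0, 0), (4, 0, 0), (0, 4, 0), voxel=1.0, step=0.25)
```

The set here deduplicates repeated voxels on the CPU; the point of the paper's method is to avoid generating such duplicates in the first place on the GPU.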
154 Protected Cultivation of Horticultural Crops: Increases Productivity per Unit of Area and Time
Authors: Deepak Loura
Abstract:
The most contemporary method of producing horticultural crops both qualitatively and quantitatively is protected cultivation, or greenhouse cultivation, which has gained widespread acceptance in recent decades. Protected farming, commonly referred to as controlled environment agriculture (CEA), is highly productive, land- and water-efficient, and environmentally friendly. The technology entails growing horticultural crops in a controlled environment in which variables such as temperature, humidity, light, soil, water, and fertilizer are adjusted to achieve optimal output and enable a consistent supply even during the off-season. Over the past ten years, protected cultivation of high-value crops and cut flowers has demonstrated remarkable potential. More and more agricultural and horticultural crop production systems are moving to protected environments as a result of the growing demand for high-quality products in global markets. By covering the crop, it is possible to control the macro- and microenvironments, enhancing plant performance and allowing for longer production periods, earlier harvests, and higher yields of higher quality. These shielding structures alter the plant's environment while also offering protection from wind, rain, and insects. Protected farming opens up hitherto unexplored opportunities in agriculture as the liberalised economy and improved agricultural technologies advance. Typically, the revenues from fruit, vegetable, and flower crops are 4 to 8 times higher than those from other crops, and this profit can be multiplied if such high-value crops are cultivated in protected environments like greenhouses, net houses, and tunnels. Post-harvest losses of vegetables and cut flowers are extremely high (20% or more), but sheltered growing techniques and year-round cropping can greatly minimize post-harvest losses and enhance yield by 5 to 10 times. Seasonality and weather have a big impact on the production of vegetables and flowers.
The seasonality of production results in significant price and quality fluctuations for vegetables. A significant challenge for the application of current technology in crop production is achieving year-round availability of vegetables and flowers with minimal environmental impact while remaining competitive. As population growth reduces the size of landholdings, protected cultivation is likely to shape the future of agriculture; it is a particularly profitable endeavor for small landholdings. Farmers can build small greenhouses, net houses, nurseries, and low tunnel greenhouses to increase their income. The rise in biotic and abiotic stress factors further favors protected agriculture. As a result of the greater productivity levels, these technologies are opening up opportunities not only for producers with larger landholdings but also for those with smaller holdings. Protected cultivation can be thought of as a kind of precise, forward-looking, parallel agriculture that covers almost all aspects of farming, subject to further assessment of its technical applicability to local circumstances, farmer economics, and market economics.
Keywords: protected cultivation, horticulture, greenhouse, vegetable, controlled environment agriculture
153 Photonic Dual-Microcomb Ranging with Extreme Speed Resolution
Authors: R. R. Galiev, I. I. Lykov, A. E. Shitikov, I. A. Bilenko
Abstract:
Dual-comb interferometry is based on the mixing of two optical frequency combs with slightly different line spacings, which maps the optical spectrum into the radio-frequency domain for subsequent digitizing and numerical processing. The dual-comb approach enables diverse applications, including metrology, fast high-precision spectroscopy, and distance ranging. Ordinary frequency-modulated continuous-wave (FMCW) laser-based Light Detection and Ranging systems (LIDARs) suffer from two main disadvantages: slow, unreliable mechanical spatial scanning, and the rather wide linewidth of conventional lasers, which limits speed measurement resolution. Dual-comb distance measurements with Allan deviations down to 12 nanometers at averaging times of 13 microseconds, along with ultrafast ranging at acquisition rates of 100 megahertz allowing in-flight sampling of gun projectiles moving at 150 meters per second, were previously demonstrated. Nevertheless, the pump lasers with EDFA amplifiers made such devices bulky and expensive. An alternative approach is direct coupling of the laser to a reference microring cavity: backscattering can tune the laser to an eigenfrequency of the cavity via the so-called self-injection locking (SIL) effect. Moreover, the nonlinearity of the cavity allows solitonic frequency comb generation in the very same cavity. In this work, we developed a fully integrated, power-efficient, electrically driven dual-microcomb source based on the self-injection locking of semiconductor lasers to high-quality integrated Si3N4 microresonators. We obtained robust comb generation across 1400-1700 nm with 150 GHz or 1 THz line spacing, and measured Lorentzian linewidths of less than 1 kHz for stable, MHz-spaced beat notes in a GHz band using two separate chips, each pumped by its own self-injection-locked laser.
A deep investigation of the SIL dynamics allowed us to identify a turn-key operation regime even for affordable Fabry-Perot multifrequency lasers used as the pump. Importantly, such lasers are usually more powerful than the DFB lasers, which were also tested in our experiments. To test the advantages of the proposed technique, we experimentally measured the minimum detectable speed of a reflective object. It was shown that the narrow line of the laser locked to the microresonator provides markedly better velocity accuracy, with velocity resolution down to 16 nm/s, whereas the free-running (non-SIL) diode laser only allowed 160 nm/s with good accuracy. The results obtained agree with our estimations and open up ways to develop LIDARs based on compact and cheap lasers. Our implementation uses affordable components, including semiconductor laser diodes and commercially available silicon nitride photonic circuits with microresonators.
Keywords: dual-comb spectroscopy, LIDAR, optical microresonator, self-injection locking
152 Application of Alumina-Aerogel in Post-Combustion CO₂ Capture: Optimization by Response Surface Methodology
Authors: S. Toufigh Bararpour, Davood Karami, Nader Mahinpey
Abstract:
Dependence of the global economy on fossil fuels has led to large growth in the emission of greenhouse gases (GHGs). Among the various GHGs, carbon dioxide is the main contributor to the greenhouse effect because of the huge amounts emitted. To mitigate the threat of CO₂, carbon capture and sequestration (CCS) technologies have been studied widely in recent years. For combustion processes, three main CO₂ capture techniques have been proposed: post-combustion, pre-combustion, and oxyfuel combustion. Post-combustion is the most commonly used CO₂ capture process, as it can readily be retrofitted into existing power plants. Multiple advantages have been reported for post-combustion capture by solid sorbents, such as high CO₂ selectivity, high adsorption capacity, and low regeneration energy requirements. Chemical adsorption of CO₂ on alkali-metal-based solid sorbents such as K₂CO₃ is a promising method for the selective capture of dilute CO₂ from the huge amount of nitrogen in the flue gas. To improve CO₂ capture performance, K₂CO₃ is supported on a stable, porous material. Al₂O₃ is commonly employed as the support and enhances the cyclic CO₂ capture efficiency of K₂CO₃. Different phases of alumina can be obtained by setting the calcination temperature of boehmite to 300, 600 (γ-alumina), 950 (δ-alumina), or 1200 °C (α-alumina). As the calcination temperature increases, the regeneration capacity of the alumina increases while its surface area is reduced. However, sorbents with lower surface areas also have lower CO₂ capture capacities (except for sorbents prepared with hydrophilic support materials). To resolve this issue, a highly efficient alumina-aerogel support was synthesized with a BET surface area of over 2000 m²/g and then calcined at high temperature. The synthesized alumina-aerogel was combined with K₂CO₃ at 50 wt% support/K₂CO₃, which resulted in a sorbent with remarkable CO₂ capture performance.
The effect of synthesis conditions, such as the type of alcohol, the solvent-to-co-solvent ratio, and the aging time, on the performance of the support was investigated. The best support was synthesized using methanol as the solvent, after five days of aging, at a solvent-to-co-solvent (methanol-to-toluene) ratio (v/v) of 1/5. Response surface methodology was used to investigate the effect of operating parameters, such as carbonation temperature and H₂O-to-CO₂ flow-rate ratio, on the CO₂ capture capacity. The maximum CO₂ capture capacity, at the optimum values of the operating parameters, was 7.2 mmol CO₂ per gram of K₂CO₃. The cyclic behavior of the sorbent was examined over 20 carbonation and regeneration cycles. The alumina-aerogel-supported K₂CO₃ showed great performance compared to unsupported K₂CO₃ and γ-alumina-supported K₂CO₃. Fundamental performance analyses and long-term thermal and chemical stability tests will be performed on the sorbent in the future. The applicability of the sorbent in a bench-scale process will be evaluated, and a corresponding process model will be established. The fundamental material knowledge and the respective process development will be delivered to industrial partners for the design of a pilot-scale testing unit, thereby facilitating the industrial application of alumina-aerogels.
Keywords: alumina-aerogel, CO₂ capture, K₂CO₃, optimization
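The response-surface step can be sketched as fitting a second-order model of capture capacity versus the two operating parameters and solving for the stationary point. The data below are synthetic stand-ins, not the study's measurements, and the parameter ranges and the location of the optimum are assumed purely for illustration.

```python
import numpy as np

# Hedged sketch of response surface methodology (RSM): fit a quadratic
# model of CO2 capture capacity vs. carbonation temperature and
# H2O-to-CO2 ratio, then locate the stationary (optimum) point.
# All numbers below are synthetic, illustrative assumptions.
rng = np.random.default_rng(0)
T = rng.uniform(50, 90, 30)            # carbonation temperature (degC), assumed range
R = rng.uniform(0.5, 2.5, 30)          # H2O-to-CO2 flow-rate ratio, assumed range
# synthetic response with an assumed maximum near T = 70, R = 1.5
y = 7.2 - 0.004 * (T - 70) ** 2 - 0.8 * (R - 1.5) ** 2 + rng.normal(0, 0.02, 30)

# design matrix for y = b0 + b1*T + b2*R + b3*T^2 + b4*R^2 + b5*T*R
X = np.column_stack([np.ones_like(T), T, R, T**2, R**2, T * R])
b = np.linalg.lstsq(X, y, rcond=None)[0]

# stationary point: solve grad = 0, i.e. [[2*b3, b5], [b5, 2*b4]] @ [T, R] = -[b1, b2]
M = np.array([[2 * b[3], b[5]], [b[5], 2 * b[4]]])
T_opt, R_opt = np.linalg.solve(M, -b[1:3])
```

With the synthetic data above, the recovered stationary point lands close to the planted optimum, which is the essential mechanics of the RSM optimization reported in the abstract.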
151 Physical Activity Based on Daily Step-Count in Inpatient Setting in Stroke and Traumatic Brain Injury Patients in Subacute Stage Follow Up: A Cross-Sectional Observational Study
Authors: Brigitte Mischler, Marget Hund, Hilfiker Roger, Clare Maguire
Abstract:
Background: Brain injury is one of the main causes of permanent physical disability, and improving walking ability is one of the most important goals for patients. After inpatient rehabilitation, most patients do not receive long-term rehabilitation services. Physical activity is important for the health of the musculoskeletal system, the circulatory system, and the psyche. Objective: This follow-up study measured physical activity in subacute patients after traumatic brain injury and stroke, comparing the number of daily steps taken in the inpatient setting with the number taken one year after the event in the outpatient setting. Methods: This follow-up study is a cross-sectional observational study with 29 participants. Daily step count over a seven-day period one year after the event was measured with the StepWatch™ ankle sensor. The number of steps taken one year after the event in the outpatient setting was compared with the number taken during the inpatient stay and assessed against the recommended target value. Correlations between step count and exit domain, FAC level, walking speed, light touch, joint position sense, cognition, and fear of falling were calculated. Results: The median (IQR) daily step count of all patients during the inpatient stay was 2512 (568.5, 4070.5). At follow-up, the number of steps improved to 3656 (1710, 5900); the average difference was 1159 (-2825, 6840) steps per day. Participants who were unable to walk independently (FAC 1) improved from 336 (5, 705) to 1808 (92, 5354) steps per day. Participants able to walk with assistance (FAC 2-3) walked 700 (31, 3080) steps and, at follow-up, 3528 (243, 6871). Independent walkers (FAC 4-5) walked 4093 (2327, 5868) steps and achieved 3878 (777, 7418) daily steps at follow-up. This value is significantly below the recommended guideline.
Step count at follow-up showed moderate to high, statistically significant correlations: positive with FAC score, FIM total score, and walking speed, and negative with fear of falling. Conclusions: Only 17% of all participants achieved the recommended daily step count one year after the event. We need better inpatient and outpatient strategies to improve physical activity. In everyday clinical practice, pedometers and diaries with objectives should be used, and a concrete weekly schedule for after discharge should be drawn up together with the patient, relatives, or nursing staff. This should include daily self-training as instructed during the inpatient stay. A good connection to social life (a professional connection or a daily task/activity) can be an important part of improving daily activity. Further research should evaluate strategies to increase daily step counts in inpatient as well as outpatient settings.
Keywords: neurorehabilitation, stroke, traumatic brain injury, steps, step count
150 Comparison of Artificial Neural Networks and Statistical Classifiers in Olive Sorting Using Near-Infrared Spectroscopy
Authors: İsmail Kavdır, M. Burak Büyükcan, Ferhat Kurtulmuş
Abstract:
Table olives are a valuable product, especially in Mediterranean countries, and are usually consumed after some fermentation process. Defects that occur naturally or as a result of an impact while the olives are still fresh may become more distinct after the processing period. Defective olives are not desired in either the table olive or the olive oil industry, as they affect final product quality and reduce market prices considerably. It is therefore critical to sort table olives, before or even after processing, according to their quality and surface defects. However, manual sorting has many drawbacks, such as high expense, subjectivity, tediousness, and inconsistency. The quality criteria for green olives were accepted as color and freedom from mechanical defects, wrinkling, surface blemishes, and rotting. In this study, the aim was to classify fresh table olives using different classifiers on NIR spectroscopy readings and to compare the classifiers. For this purpose, green olives (Ayvalik variety) were classified by surface condition as defect-free, bruised, or fly-damaged using FT-NIR spectroscopy and classification algorithms, namely artificial neural networks, ident, and cluster. A Bruker multi-purpose analyzer (MPA) FT-NIR spectrometer (Bruker Optik GmbH, Ettlingen, Germany) was used for spectral measurements. The spectrometer was equipped with InGaAs detectors (internal TE-InGaAs for reflectance and external RT-InGaAs for transmittance) and a 20-watt high-intensity tungsten-halogen NIR light source. Reflectance measurements were performed with a fiber optic probe (type IN 261) covering the wavelengths between 780 and 2500 nm, while transmittance measurements were performed between 800 and 1725 nm. Thirty-two scans were acquired for each reflectance spectrum in about 15.32 s, while 128 scans were obtained for transmittance in about 62 s. The resolution was 8 cm⁻¹ for both spectral measurement modes.
Instrument control was carried out using the OPUS software (Bruker Optik GmbH, Ettlingen, Germany). Classification was performed using three classifiers: backpropagation neural networks and the ident and cluster classification algorithms, implemented with the Neural Network toolbox in Matlab and the ident and cluster modules in the OPUS software, respectively. Classifications were performed for different scenarios: two quality conditions at a time (good vs. bruised, good vs. fly-damaged) and three quality conditions at once (good, bruised, and fly-damaged). Two spectrometer reading modes were used in the classification applications: reflectance and transmittance. Classification results obtained using the artificial neural network algorithm in discriminating good olives from bruised olives, from fly-damaged olives, and from the group including both bruised and fly-damaged olives achieved success rates of 97-99%, 61-94%, and 58.67-92%, respectively. On the other hand, classification results obtained for discriminating good olives from bruised ones and good olives from fly-damaged olives using the ident method ranged between 75-97.5% and 32.5-57.5%, respectively; results obtained for the same classification applications using the cluster method ranged between 52.5-97.5% and 22.5-57.5%.
Keywords: artificial neural networks, statistical classifiers, NIR spectroscopy, reflectance, transmittance
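The ident-style comparison can be illustrated with a nearest-mean-spectrum sketch: a measured spectrum is assigned to the class whose average reference spectrum it is closest to. The spectra below are synthetic stand-ins, not real olive NIR data, and this sketch is not the OPUS implementation; band positions, noise levels, and the Euclidean distance are all assumptions for the example.

```python
import numpy as np

# Nearest-mean sketch of an ident-style spectral classification.
# Synthetic spectra: one broad absorption band per class (assumed, illustrative).
rng = np.random.default_rng(1)
wavelengths = np.linspace(780, 2500, 200)   # nm, matching the reflectance range

def make_class(center, n):
    base = np.exp(-((wavelengths - center) / 300.0) ** 2)   # broad band at `center`
    return base + rng.normal(0, 0.05, (n, wavelengths.size))

classes = {"good": 1200.0, "bruised": 1450.0, "fly": 1700.0}  # assumed band centers
train = {c: make_class(w, 20) for c, w in classes.items()}
means = {c: s.mean(axis=0) for c, s in train.items()}         # reference spectra

def ident(spectrum):
    # assign to the class whose mean reference spectrum is closest (Euclidean)
    return min(means, key=lambda c: np.linalg.norm(spectrum - means[c]))

test = {c: make_class(w, 10) for c, w in classes.items()}
hits = sum(ident(s) == c for c, spectra in test.items() for s in spectra)
accuracy = hits / 30
```

On real spectra, overlapping defect signatures and measurement noise push such distance-based success rates well below 100%, consistent with the ident ranges reported above.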
149 Alternative Energy and Carbon Source for Biosurfactant Production
Authors: Akram Abi, Mohammad Hossein Sarrafzadeh
Abstract:
Because of their several advantages over chemical surfactants, biosurfactants have attracted growing interest over the past decades. Advantages such as lower toxicity, higher biodegradability, higher selectivity, and applicability at extreme temperatures and pH enable them to be used in a variety of applications, such as enhanced oil recovery and environmental and pharmaceutical applications. Bacillus subtilis produces a cyclic lipopeptide called surfactin, one of the most powerful biosurfactants, with the ability to decrease the surface tension of water from 72 mN/m to 27 mN/m. In addition to its biosurfactant character, surfactin exhibits interesting biological activities, such as inhibition of fibrin clot formation, lysis of erythrocytes and several bacterial spheroplasts, and antiviral, antitumor, and antibacterial properties. Surfactin is an antibiotic substance and has recently been shown to possess anti-HIV activity. However, the application of biosurfactants is limited by their high production cost. This cost can be reduced by optimizing biosurfactant production using cheap feedstocks. Utilization of inexpensive substrates and unconventional carbon sources, such as urban or agro-industrial wastes, is a promising strategy to decrease the production cost of biosurfactants. With suitable engineering optimization and microbiological modifications, these wastes can be used as substrates for large-scale production of biosurfactants. As an effort toward this goal, in this work we utilized olive oil as a second carbon source, and yeast extract as a second nitrogen source, to investigate their effect on both biomass and biosurfactant production in Bacillus subtilis cultures. Since the turbidity of the culture was affected by the presence of oil, optical density was compromised and could no longer be used as an index of growth and biomass concentration.
Therefore, cell dry weight measurements, with the necessary steps taken to remove oil drops and prevent interference with the biomass weight, were carried out to monitor biomass concentration during the growth of the bacterium. Surface tension and critical micelle dilutions (CMD-1, CMD-2) were used as an indirect measure of biosurfactant production. Distinctive and promising results were obtained in the cultures containing olive oil compared to cultures without it: a more than two-fold increase in biomass production (from 2 g/l to 5 g/l) and a considerable reduction in surface tension, down to 40 mN/m, at surprisingly early hours of culture time (only 5 h after inoculation). This early onset of biosurfactant production is especially interesting when compared with conventional cultures, in which this reduction in surface tension is not obtained until 30 hours of culture time. Reducing the production time is a very prominent result for large-scale process development. Furthermore, these results can be used to develop strategies for utilizing agro-industrial wastes (such as olive oil mill residue, molasses, etc.) as cheap and easily accessible feedstocks to decrease the high cost of biosurfactant production.
Keywords: agro-industrial waste, Bacillus subtilis, biosurfactant, fermentation, second carbon and nitrogen source, surfactin
148 Modern Architecture and the Scientific World Conception
Authors: Sean Griffiths
Abstract:
Introduction: This paper examines the expression of 'objectivity' in architecture in the context of the post-war rejection of this concept. It aims to re-examine the question in light of the assault on truth that characterizes contemporary culture and of the unassailable truth of the climate emergency. The paper analyses the search for objective truth as it was pursued by the Modern Movement in the early 20th century, looking at the extent to which this quest succeeded in contributing to the development of a radically new, politically informed architecture and the extent to which its particular interpretation of objectivity limited that development. The paper studies the influence of the Vienna Circle philosophers Rudolf Carnap and Otto Neurath on the pedagogy of the Bauhaus and the architecture of the Neue Sachlichkeit in Germany. Their logical positivism sought to determine objective truths through empirical analysis, expressed in an austere formal language, as part of a 'scientific world conception' which would overcome metaphysics and unverifiable mystification. These ideas, and the concurrent prioritizing of measurement as the determinant of environmental quality, became key influences on the socially driven architecture constructed in the 1920s and 30s by Bauhaus architects in numerous German cities. Methodology: The paper reviews the history of the early Modern Movement and summarizes accounts of the relationship between the Vienna Circle and the Bauhaus. It looks at key differences in the approaches Neurath and Carnap took to achieving their shared philosophical and political aims. It analyses how the adoption of Carnap's foundationalism influenced the architectural language of modern architecture and compares, through a close reading of the structure of Neurath's 'protocol sentences,' the latter's alternative approach, speculating on the possibility that its adoption might have offered a different direction of travel for Modern Architecture.
Findings: The paper finds that the adoption of Carnap's foundationalism, while helping Modern Architecture forge a new visual language, ultimately limited its development and is implicated in its failure to escape the very metaphysics against which it had set itself. It speculates that Neurath's relational, language-based approach to establishing objectivity has its architectural corollary in processes of revision and renovation, which offer new ways in which an 'objective' language of architecture might be developed that are more responsive to our present-day crisis. Conclusion: The philosophical principles of the Vienna Circle and the architects of the Modern Movement had much in common. Both contributed to radical historical departures which sought to instantiate a scientific world conception in their respective fields, one that would attempt to banish mystification and metaphysics and would align itself with socialism. However, in adopting Carnap's foundationalism as the theoretical basis for the new architecture, Modern Architecture not only failed to escape metaphysics but arguably closed off new avenues of development to itself. The adoption of Neurath's more open-ended and interactive approach to objectivity offers possibilities for new conceptions of the expression of objectivity in architecture that might be better tailored to the multiple crises we face today.
Keywords: Bauhaus, logical positivism, Neue Sachlichkeit, rationalism, Vienna Circle
147 Hybrid Data-Driven Drilling Rate of Penetration Optimization Scheme Guided by Geological Formation and Historical Data
Authors: Ammar Alali, Mahmoud Abughaban, William Contreras Otalvora
Abstract:
Optimizing the drilling process for cost and efficiency requires optimization of the rate of penetration (ROP). ROP measures the speed at which the wellbore is created, in units of feet per hour, and is the primary indicator of drilling efficiency. Maximizing ROP can yield fast, cost-efficient drilling operations; however, excessively high ROPs may induce unintended events that lead to nonproductive time (NPT) and higher net costs. The proposed ROP optimization solution is a hybrid, data-driven system that aims to improve the drilling process, maximize ROP, and minimize NPT. The system consists of two phases: (1) utilizing existing geological and drilling data to train a prior model, and (2) real-time adjustment of the controllable dynamic drilling parameters [weight on bit (WOB), rotary speed (RPM), and pump flow rate (GPM)] that directly influence ROP. During the first phase, geological and historical drilling data are aggregated. Then the top-rated wells, ranked by highest achieved ROP, are identified. Those wells are filtered for NPT incidents, and a cross-plot of the controllable dynamic drilling parameters per ROP value is generated. Subsequently, the parameter values (WOB, GPM, RPM) are calculated as a conditioned mean based on physical distance, following the Inverse Distance Weighting (IDW) interpolation methodology. The first phase concludes by producing a model of drilling best practices from the offset wells, prioritizing the optimum ROP value; this phase is performed before drilling commences. Starting with the model produced in phase one, the second phase runs an automated drill-off test, delivering adjustments in real time. Those adjustments are made by directing the driller to deviate two of the controllable parameters (WOB and RPM) by a small percentage (0-5%), following the Constrained Random Search (CRS) methodology.
These minor incremental variations reveal new drilling conditions not explored before through offset wells. The data is then consolidated into a heat map as a function of ROP. A more optimal ROP performance is identified through the heat map and amended in the model. The validation process involved the selection of a planned well in an onshore oil field with hundreds of offset wells. The first-phase model was built by utilizing the data points from the top-performing historical wells (20 wells). The model allows drillers to enhance decision-making by leveraging existing data and blending it with live data in real time. An empirical relationship between the controllable dynamic parameters and ROP was derived using Artificial Neural Networks (ANN). The adjustments resulted in improved ROP efficiency by over 20%, translating to at least 10% savings in drilling costs. The novelty of the proposed system lies in its ability to integrate historical data, calibrate based on geological formations, and run real-time global optimization through CRS. Those factors position the system to work for any newly drilled well in a developing field.
Keywords: drilling optimization, geological formations, machine learning, rate of penetration
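The IDW step described in the first phase, where a parameter value at a planned-well location is computed as a distance-weighted mean of offset-well values, can be sketched as follows. This is an illustrative sketch, not the authors' implementation; the well coordinates and weight-on-bit values are hypothetical:

```python
import numpy as np

def idw_interpolate(known_xy, known_vals, query_xy, power=2.0):
    """Inverse Distance Weighting: estimate a drilling parameter at a
    query location as a distance-weighted mean of offset-well values."""
    known_xy = np.asarray(known_xy, dtype=float)
    known_vals = np.asarray(known_vals, dtype=float)
    dists = np.linalg.norm(known_xy - np.asarray(query_xy, dtype=float), axis=1)
    if np.any(dists == 0):                 # query coincides with a known well
        return float(known_vals[np.argmin(dists)])
    weights = 1.0 / dists**power
    return float(np.sum(weights * known_vals) / np.sum(weights))

# Hypothetical offset-well coordinates (km) and WOB values (klbf)
wells = [(0.0, 0.0), (10.0, 0.0), (0.0, 10.0)]
wob = [20.0, 30.0, 25.0]
estimate = idw_interpolate(wells, wob, (2.0, 2.0))
print(estimate)  # pulled towards the nearest well's value of 20
```

The same function would apply to RPM and GPM; in the described system the mean is additionally conditioned on the NPT filtering and the target ROP value.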
Procedia PDF Downloads 131
146 Climate Change Adaptation Success in a Low Income Country Setting, Bangladesh
Authors: Tanveer Ahmed Choudhury
Abstract:
Background: Bangladesh is one of the largest deltas in the world, with high population density and high rates of poverty and illiteracy. 80% of the country lies on low-lying floodplains, leaving it one of the most vulnerable to the adverse effects of climate change: sea level rise, cyclones and storms, salinity intrusion, rising temperatures and heavy monsoon downpours. Such climatic events already limit economic development in the country. Although Bangladesh has had little responsibility in contributing to global climatic change, it is vulnerable to both its direct and indirect impacts. Real threats include reduced agricultural production, worsening food security, increased incidence of flooding and drought, spreading disease and an increased risk of conflict over scarce land and water resources. Currently, 8.3 million Bangladeshis live in cyclone high-risk areas. However, by 2050 this is expected to grow to 20.3 million people if proper adaptive actions are not taken. Under a high emissions scenario, an additional 7.6 million people will be exposed to very high salinity by 2050 compared to current levels. It is also projected that an average of 7.2 million people will be affected by flooding due to sea level rise every year between 2070 and 2100, and if global emissions decrease rapidly and adaptation interventions are taken, the population affected by flooding could be limited to only about 14,000 people. To combat the adverse effects of climate change, the Bangladesh government has initiated many adaptive measures, especially in the infrastructure and renewable energy sectors. The government is investing heavily and has initiated many projects that have proved very successful. Objectives: The objective of this paper is to describe some successful measures initiated by the Bangladesh government in its effort to make the country climate resilient. Methodology: Review of operational plans and activities of different relevant ministries of the Bangladesh government. 
Result: The following projects, programs and activities are considered best practices for climate change adaptation success in Bangladesh: 1. The Infrastructure Development Company Limited (IDCOL); 2. Climate Change and Health Promotion Unit (CCHPU); 3. The Climate Change Trust Fund (CCTF); 4. Community Climate Change Project (CCCP); 5. Health, Population, Nutrition Sector Development Program (HPNSDP, 2011-2016), "Climate Change and Environmental Issues"; 6. Ministry of Health and Family Welfare, Bangladesh and WHO collaboration: the National Adaptation Plan and "Building adaptation to climate change in health in least developed countries through resilient WASH"; 7. COP-21 "Climate and health country profile 2015: Bangladesh". Conclusion: Due to a vast coastline, low-lying land and an abundance of rivers, Bangladesh is highly vulnerable to climate change. Having extensive experience with facing natural disasters, Bangladesh has developed a successful adaptation program, which has led to a significant reduction in casualties from extreme weather events. In a low income country setting, Bangladesh has successfully implemented various projects and initiatives to combat future climate change challenges.
Keywords: climate, change, success, Bangladesh
Procedia PDF Downloads 249
145 Measurement and Modelling of HIV Epidemic among High Risk Groups and Migrants in Two Districts of Maharashtra, India: An Application of Forecasting Software-Spectrum
Authors: Sukhvinder Kaur, Ashok Agarwal
Abstract:
Background: For the first time in 2009, India was able to generate estimates of HIV incidence (the number of new HIV infections per year). Analysis of epidemic projections revealed that the number of new annual HIV infections in India had declined by more than 50% during the last decade (GOI Ministry of Health and Family Welfare, 2010). The National AIDS Control Organisation (NACO) then planned to scale up its efforts in generating projections through epidemiological analysis and modelling by taking recent available sources of evidence such as HIV Sentinel Surveillance (HSS), India Census data and other critical data sets. Recently, NACO generated the current round of HIV estimates (2012) through the globally recommended tool Spectrum and came out with estimates for adult HIV prevalence, annual new infections, number of people living with HIV, AIDS-related deaths and treatment needs. State-level prevalence and incidence projections produced were used to project consequences of the epidemic in Spectrum. Building on the state-level HIV estimates generated by NACO, the USAID-funded PIPPSE project, under the leadership of NACO, undertook estimations and projections at the district level using the same Spectrum software. In 2011, adult HIV prevalence in one of the high-prevalence states, Maharashtra, was 0.42%, ahead of the national average of 0.27%. Considering the heterogeneity of the HIV epidemic between districts, two districts of Maharashtra, Thane and Mumbai, were selected to estimate and project the number of People-Living-with-HIV/AIDS (PLHIV), HIV prevalence among adults and annual new HIV infections till 2017. 
Methodology: Inputs in Spectrum included demographic data from the Census of India since 1980 and the sample registration system; programmatic data on ‘Alive and on ART (adult and children)’, ‘Mother-Baby pairs under PPTCT’ and ‘High Risk Group (HRG)-size mapping estimates’; and surveillance data from various rounds of HSS, the National Family Health Survey-III, the Integrated Biological and Behavioural Assessment and the Behavioural Sentinel Surveillance. Major Findings: Assuming current programmatic interventions in these districts, an estimated decrease of 12 percentage points in Thane and 31 percentage points in Mumbai in new infections among HRGs and migrants is observed from 2011 to 2017. Conclusions: The project also validated the decrease in new HIV infections among one of the high risk groups, FSWs, using program cohort data from 2012 to 2016. Though there is a decrease in HIV prevalence and new infections in Thane and Mumbai, further decrease is possible if appropriate programme responses, strategies and interventions are envisaged for specific target groups based on this evidence. Moreover, the evidence needs to be validated by other estimation/modelling techniques, and evidence can be generated for other districts of the state, where HIV prevalence is high and reliable data sources are available, to understand the epidemic within the local context.
Keywords: HIV sentinel surveillance, high risk groups, projections, new infections
Procedia PDF Downloads 211
144 Review of Health Disparities in Migrants Attending the Emergency Department with Acute Mental Health Presentations
Authors: Jacqueline Eleonora Ek, Michael Spiteri, Chris Giordimaina, Pierre Agius
Abstract:
Background: Malta is known for being a key frontline country with regard to irregular immigration from Africa to Europe. Every year the island experiences an influx of migrants as boat movement across the Mediterranean continues to be a humanitarian challenge. Irregular immigration and applying for asylum is both a lengthy and mentally demanding process. Those doing so are often faced with multiple challenges, which can adversely affect their mental health. Between January and August 2020, Malta disembarked 2,162 people rescued at sea, 463 of them between July and August. Given the small size of the Maltese islands, this influx places a disproportionately large burden on the country, creating a backlog in the processing of asylum applications and resulting in increased periods of detention. These delays reverberate throughout multiple management pathways, resulting in prolonged periods of detention and challenging access to health services. Objectives: To better understand the dimensions of this humanitarian crisis, this study aims to assess disparities in the acute medical management of migrants presenting to the emergency department (ED) with acute mental health presentations as compared to that of local and non-local residents. Method: In this retrospective study, 17,795 consecutive ED attendances were reviewed to identify acute mental health presentations. These were further evaluated to assess discrepancies in transportation routes to hospital, nature of presenting complaint, effects of language barriers, use of CT brain, treatment given at the ED, availability of psychiatric reviews, and final admission/discharge plans. Results: Of the ED attendances, 92.3% were local residents and 7.7% were non-locals. Of the non-locals, 13.8% were migrants and 86.2% were other non-locals. Acute mental health presentations were seen in 1% of local residents; this increased to 20.6% in migrants. 
56.4% of migrants attended with deliberate self-harm; this was lower in local residents (28.9%). Contrastingly, in local residents the most common presenting complaint was suicidal thoughts/low mood (37.3%); the incidence was similar in migrants at 33.3%. The main differences included 12.8% of migrants presenting with refusal of oral intake, while only 0.6% of local residents presented with the same complaint, and 7.7% of migrants presenting with a reduced level of consciousness, with which no local residents presented. Physicians documented a language barrier in 74.4% of migrants; 25.6% were noted to be completely uncommunicative. Further investigations included the use of a CT scan in 12% of local residents and in 35.9% of migrants. The most common treatment administered to migrants was supportive fluids (15.4%); the most common in local residents was benzodiazepines (15.1%). Voluntary psychiatric admissions were seen in 33.3% of migrants and 24.7% of locals. Involuntary admissions were seen in 23% of migrants and 13.3% of locals. Conclusion: Results showed multiple disparities in health management. A meeting was held between entities responsible for migrant health in Malta, including the emergency department, primary health care, migrant detention services, and the Malta Red Cross. Currently, national quality-improvement initiatives are underway to form new pathways to improve patient-centered care. These include an interpreter unit, centralized handover sheets, and a dedicated migrant health service.
Keywords: emergency department, communication, health, migration
Procedia PDF Downloads 114
143 Contribution to the Understanding of the Hydrodynamic Behaviour of Aquifers of the Taoudéni Sedimentary Basin (South-eastern Part, Burkina Faso)
Authors: Kutangila Malundama Succes, Koita Mahamadou
Abstract:
In the context of climate change and demographic pressure, groundwater has emerged as an essential and strategic resource whose sustainability relies on good management. The accuracy and relevance of decisions made in managing these resources depend on the availability and quality of the scientific information they must rely on. It is, therefore, urgent to improve the state of knowledge on groundwater to ensure sustainable management. This study is conducted for the particular case of the aquifers of the transboundary sedimentary basin of Taoudéni in its Burkinabe part. Indeed, Burkina Faso (and the Sahel region in general), marked by low rainfall, has experienced episodes of severe drought, which have justified the use of groundwater as the primary source of water supply. This study aims to improve knowledge of the hydrogeology of this area to achieve sustainable management of transboundary groundwater resources. The methodological approach first described the lithological units in terms of the extension and succession of the different layers. Secondly, the hydrodynamic behaviour of these units was studied through the analysis of spatio-temporal variations of piezometric levels. The data consist of 692 static level measurement points and 8 observation wells distributed across the area and capturing five of the identified geological formations. Monthly piezometric level records are available for each observation well and cover the period from 1989 to 2020. The temporal analysis of piezometric levels, carried out in comparison with rainfall records, revealed a general upward trend in piezometric levels throughout the basin. The reaction of the groundwater generally occurs with a delay of 1 to 2 months relative to the rains of the rainy season. Indeed, the peaks of the piezometric level generally occur between September and October in reaction to the rainfall peaks between July and August. Low groundwater levels are observed between May and July. 
This relatively slow reaction of the aquifer is observed in all wells. The influence of the geological nature, through the structure and hydrodynamic properties of the layers, was deduced. The spatial analysis reveals that piezometric levels vary between 166 and 633 m, with a trend indicating flow generally directed from southwest to northeast, with the recharge areas located towards the southwest and northwest. There is a quasi-concordance between the hydrogeological basins and the overlying hydrological basins, as well as a bimodal flow with a component following the topography and another significant, deeper component controlled by the regional SW-NE gradient. This latter component may represent flows directed from the high reliefs towards the springs of Nasso. In the spring area (Kou basin), the maximum average groundwater storage variation, calculated by the Water Table Fluctuation (WTF) method, varies between 35 and 48.70 mm per year for 2012-2014.
Keywords: hydrodynamic behaviour, taoudeni basin, piezometry, water table fluctuation
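The WTF estimate of storage change reduces to multiplying the specific yield of the aquifer by the water-table rise over the recharge period. A minimal sketch; the specific yield and seasonal rise below are assumed illustrative values, not figures from the study:

```python
def wtf_storage_change(specific_yield, head_rise_mm):
    """Water Table Fluctuation method: storage change (mm of water)
    = specific yield (dimensionless) x water-table rise (mm)."""
    return specific_yield * head_rise_mm

# Assumed values for illustration only
sy = 0.02          # specific yield of the aquifer
rise_mm = 2000.0   # seasonal rise between low and peak water-table levels
print(wtf_storage_change(sy, rise_mm))  # 40.0 mm, of the order of the 35-48.7 mm/year reported
```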
Procedia PDF Downloads 65
142 Symbiotic Functioning, Photosynthetic Induction and Characterisation of Rhizobia Associated with Groundnut, Jack Bean and Soybean from Eswatini
Authors: Zanele D. Ngwenya, Mustapha Mohammed, Felix D. Dakora
Abstract:
Legumes are a major source of biological nitrogen and therefore play a crucial role in maintaining soil productivity in smallholder agriculture in southern Africa. Through their ability to fix atmospheric nitrogen in root nodules, legumes are a better option for sustainable nitrogen supply in cropping systems than chemical fertilisers. For decades, farmers have been highly receptive to the use of rhizobial inoculants as a source of nitrogen, due mainly to the availability of elite rhizobial strains at a much lower cost compared to chemical fertilisers. Improving the efficiency of the legume-rhizobia symbiosis in African soils would require the use of highly effective rhizobia capable of nodulating a wide range of host plants. This study assessed the morphogenetic diversity, photosynthetic functioning and relative symbiotic effectiveness (RSE) of groundnut, jack bean and soybean microsymbionts in Eswatini soils as a first step to identifying superior isolates for inoculant production. Rhizobial isolates were cultured in yeast-mannitol (YM) broth until the late log phase, and the bacterial genomic DNA was extracted using the GenElute bacterial genomic DNA kit according to the manufacturer's instructions. The extracted DNA was subjected to enterobacterial repetitive intergenic consensus PCR (ERIC-PCR), and a dendrogram was constructed from the band patterns to assess rhizobial diversity. To assess the N₂-fixing efficiency of the authenticated rhizobia, photosynthetic rates (A), stomatal conductance (gs), and transpiration rates (E) were measured at flowering for plants inoculated with the test isolates. The plants were then harvested for nodulation assessment and measurement of plant growth as shoot biomass. The results of ERIC-PCR fingerprinting revealed high genetic diversity among the microsymbionts nodulating each of the three test legumes, with many of them showing less than 70% ERIC-PCR relatedness. 
The dendrogram generated from ERIC-PCR profiles grouped the groundnut isolates into 5 major clusters, while the jack bean and soybean isolates were grouped into 6 and 7 major clusters, respectively. Furthermore, the isolates elicited variable nodule number per plant, nodule dry matter, shoot biomass and photosynthetic rates in their respective host plants under glasshouse conditions. Of the groundnut isolates tested, 38% recorded high relative symbiotic effectiveness (RSE > 80), while 55% of the jack bean isolates and 93% of the soybean isolates recorded high RSE (> 80) compared to the commercial Bradyrhizobium strains. About 13%, 27% and 83% of the top N₂-fixing groundnut, jack bean and soybean isolates, respectively, elicited much higher RSE than the commercial strain, suggesting their potential for use in inoculant production after field testing. There was a tendency for both low and high N₂-fixing isolates to group together in the dendrogram from ERIC-PCR profiles, which suggests that RSE can differ significantly among closely related microsymbionts.
Keywords: genetic diversity, relative symbiotic effectiveness, inoculant, N₂-fixing
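Relative symbiotic effectiveness is conventionally computed as the shoot dry matter of plants inoculated with a test isolate expressed as a percentage of that of plants inoculated with the commercial reference strain. Whether the authors used exactly this formula is an assumption, and the biomass figures below are hypothetical:

```python
def relative_symbiotic_effectiveness(isolate_shoot_dm, reference_shoot_dm):
    """RSE (%) of a test isolate relative to a commercial reference strain,
    using shoot dry matter (g per plant) as the growth measure."""
    return 100.0 * isolate_shoot_dm / reference_shoot_dm

# Hypothetical shoot dry matter values (g per plant)
rse = relative_symbiotic_effectiveness(4.5, 5.0)
print(rse)   # 90.0, which would count as high RSE (> 80)
```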
Procedia PDF Downloads 221
141 Social Vulnerability Mapping in New York City to Discuss Current Adaptation Practice
Authors: Diana Reckien
Abstract:
Vulnerability assessments are increasingly used to support policy-making in complex environments, like urban areas. Usually, vulnerability studies include the construction of aggregate (sub-)indices and the subsequent mapping of the indices across an area of interest. Vulnerability studies have a couple of advantages: they are great communication tools, can inform a wider general debate about environmental issues, and can help allocate and efficiently target scarce resources for adaptation policy and planning. However, they also face a number of challenges: vulnerability assessments are constructed on the basis of a wide range of methodologies, and no single framework or methodology has proven to serve best in certain environments; indicators vary highly according to the spatial scale used; different variables and metrics produce different results; and aggregate or composite vulnerability indicators that are mapped easily distort or bias the picture of vulnerability, as they hide the underlying causes of vulnerability and level out conflicting reasons for vulnerability in space. So, there is an urgent need to further develop the methodology of vulnerability studies towards a common framework, which is one aim of this paper. We introduce a social vulnerability approach and compare it with bio-physical and sectoral vulnerability approaches in terms of a common methodology for index construction, guidelines for mapping, assessment of sensitivity, and verification of variables. Two approaches are commonly pursued in the literature. The first is an additive approach, in which all potentially influential variables are weighted according to their importance for the vulnerability aspect and then added to form a composite vulnerability index per unit area. 
The second approach includes variable reduction, mostly Principal Component Analysis (PCA), which reduces the set of interrelated variables to a smaller number of less correlated components, which are likewise added to form a composite index. We test these two approaches to constructing indices on the area of New York City, as well as two different metrics of input variables, and compare the outcomes for the 5 boroughs of NY. Our analysis yields particularly different results in the outer regions and parts of the boroughs, such as Outer Queens and Staten Island. However, some of these parts, particularly the coastal areas, receive the highest attention in current adaptation policy. We infer from this that current adaptation policy and practice in NY might need to be discussed, as these outer urban areas show relatively low social vulnerability compared with the more central parts, i.e. the high-density areas of Manhattan, Central Brooklyn, Central Queens and the Southern Bronx. The inner urban parts receive less adaptation attention but bear a higher risk of damage in case of hazards in those areas. This is conceivable, e.g., during large heatwaves, which would affect the inner and poorer parts of the city more than the outer urban areas. In light of the recent planning practice in NY, one needs to question and discuss who in NY makes adaptation policy for whom; the presented analyses point towards an underrepresentation of the needs of the socially vulnerable population, such as the poor, the elderly, and ethnic minorities, in the current adaptation practice in New York City.
Keywords: vulnerability mapping, social vulnerability, additive approach, Principal Component Analysis (PCA), New York City, United States, adaptation, social sensitivity
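The two index-construction routes, weighted addition and PCA-based variable reduction, can be sketched on synthetic data as follows. The indicators, weights and component count are illustrative assumptions, not the study's specification:

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic stand-ins for tract-level indicators, e.g. % poverty, % elderly, % minority
X = rng.random((100, 3))
Z = (X - X.mean(axis=0)) / X.std(axis=0)     # standardise each indicator

# Approach 1: additive index - weight the standardised variables and sum
weights = np.array([0.4, 0.3, 0.3])          # assumed expert weights
additive_index = Z @ weights

# Approach 2: PCA - project onto the leading principal components and sum the scores
_, _, Vt = np.linalg.svd(Z, full_matrices=False)
scores = Z @ Vt[:2].T                        # tract scores on the first 2 components
pca_index = scores.sum(axis=1)

print(additive_index.shape, pca_index.shape)
```

Either index can then be mapped per unit area; the paper's point is that the two can rank the same tracts quite differently.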
Procedia PDF Downloads 395
140 Methodology to Achieve Non-Cooperative Target Identification Using High Resolution Range Profiles
Authors: Olga Hernán-Vega, Patricia López-Rodríguez, David Escot-Bocanegra, Raúl Fernández-Recio, Ignacio Bravo
Abstract:
Non-Cooperative Target Identification has become a key research domain in the defense industry since it provides the ability to recognize targets at long distance and under any weather condition. High Resolution Range Profiles, one-dimensional radar images where the reflectivity of a target is projected onto the radar line of sight, are widely used for the identification of flying targets. Accordingly, to face this problem, an approach to Non-Cooperative Target Identification based on applying Singular Value Decomposition to a matrix of range profiles is presented. Target identification based on one-dimensional radar images compares a collection of profiles of a given target, namely the test set, with the profiles included in a pre-loaded database, namely the training set. The classification is improved by using Singular Value Decomposition since it allows each aircraft to be modelled as a subspace and recognition to be accomplished in a transformed domain where the main features are easier to extract, hence reducing unwanted information such as noise. Singular Value Decomposition permits the definition of a signal subspace, which contains the highest percentage of the energy, and a noise subspace, which is discarded. This way, only the valuable information of each target is used in the recognition process. The identification algorithm is based on finding the target that minimizes the angle between subspaces and takes place in a transformed domain. Two metrics based on Singular Value Decomposition, F1 and F2, are employed in the identification process. In the case of F2, the angle is weighted, since the top vectors set the importance of the contribution to the formation of a target signal; in contrast, F1 simply shows the evolution of the unweighted angle. 
In order to build a wide database of radar signatures and evaluate the performance, range profiles are obtained through numerical simulation of seven civil aircraft at defined trajectories taken from an actual measurement. Taking into account the nature of the datasets, the main drawback of using simulated profiles instead of actual measured profiles is that the former imply an ideal identification scenario, since measured profiles suffer from noise, clutter and other unwanted information and simulated profiles do not. In this case, the test and training samples have a similar nature and usually a similarly high signal-to-noise ratio, so to assess the feasibility of the approach, the addition of noise has been considered before the creation of the test set. The identification results applying the unweighted and weighted metrics are analysed to demonstrate which algorithm provides the best robustness against noise in a possible real scenario. To confirm the validity of the methodology, identification experiments on profiles coming from electromagnetic simulations are conducted, revealing promising results. Considering the dissimilarities between the test and training sets when noise is added, the recognition performance is improved when weighting is applied. Future experiments with larger sets are expected to be conducted with the aim of finally using actual profiles as test sets in a real hostile situation.
Keywords: HRRP, NCTI, simulated/synthetic database, SVD
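A minimal sketch of the subspace idea with an unweighted angle metric (F1-like): each target's training profiles are reduced to a k-dimensional signal subspace via SVD, and a test profile is assigned to the target whose subspace it makes the smallest angle with. The synthetic profiles and the choice k = 3 are assumptions for illustration, not the paper's configuration:

```python
import numpy as np

def signal_subspace(profiles, k):
    """Left singular vectors spanning the k-dim signal subspace of a matrix
    whose columns are range profiles (range bins x observations)."""
    U, _, _ = np.linalg.svd(profiles, full_matrices=False)
    return U[:, :k]

def subspace_angle(test_profile, basis):
    """Angle between a test profile and a target's signal subspace."""
    p = test_profile / np.linalg.norm(test_profile)
    cos = np.clip(np.linalg.norm(basis @ (basis.T @ p)), 0.0, 1.0)
    return np.arccos(cos)

rng = np.random.default_rng(1)
# Two synthetic "targets", each a matrix of 20 noisy 64-bin range profiles
base_a, base_b = rng.random(64), rng.random(64)
train_a = np.stack([base_a + 0.05 * rng.standard_normal(64) for _ in range(20)], axis=1)
train_b = np.stack([base_b + 0.05 * rng.standard_normal(64) for _ in range(20)], axis=1)
bases = {"A": signal_subspace(train_a, 3), "B": signal_subspace(train_b, 3)}

test = base_a + 0.05 * rng.standard_normal(64)   # unseen noisy profile of target A
best = min(bases, key=lambda t: subspace_angle(test, bases[t]))
print(best)
```

The weighted F2 variant would scale each component of the projection by the importance of the corresponding singular vector before computing the angle.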
Procedia PDF Downloads 354
139 Ethnic Identity as an Asset: Linking Ethnic Identity, Perceived Social Support, and Mental Health among Indigenous Adults in Taiwan
Authors: A.H.Y. Lai, C. Teyra
Abstract:
In Taiwan, there are 16 official indigenous groups, accounting for 2.3% of the total population. Like other indigenous populations worldwide, indigenous peoples in Taiwan have poorer mental health because of their history of oppression and colonisation. Amid the negative narratives, the ethnic identity of cultural minorities is their unique psychological and cultural asset. Moreover, positive socialisation is found to be related to strong ethnic identity. Based on Phinney’s theory of ethnic identity development and social support theory, this study adopted a strength-based approach, conceptualising ethnic identity as the central organising principle linking perceived social support and mental health among indigenous adults in Taiwan. Aims. The overall aim is to examine the effect of ethnic identity and social support on mental health. Specific aims were to examine: (1) the association between ethnic identity and mental health; (2) the association between perceived social support and mental health; (3) the indirect effect of ethnic identity linking perceived social support and mental health. Methods. Participants were indigenous adults in Taiwan (n=200; mean age=29.51; female=31%, male=61%, others=8%). A cross-sectional quantitative design was implemented using data collected in the year 2020. Respondent-driven sampling was used. Standardised measurements were: the Ethnic Identity Scale (6 items); the Social Support Questionnaire-SF (6 items); the Patient Health Questionnaire (9 items); and the Generalised Anxiety Disorder scale (7 items). Covariates were age, gender and economic satisfaction. A four-stage structural equation modelling (SEM) approach with robust maximum likelihood estimation was employed using Mplus 8.0. Step 1: A measurement model was built and tested using confirmatory factor analysis (CFA). Step 2: Factor covariances were re-specified as direct effects in the SEM. Covariates were added. 
The direct effects of (1) ethnic identity and social support on depression and anxiety and (2) social support on ethnic identity were tested. The indirect effect of ethnic identity was examined with the bootstrapping technique. Results. The CFA model showed satisfactory fit statistics: χ²(df)=869.69(608), p<.05; comparative fit index (CFI)/Tucker-Lewis index (TLI)=0.95/0.94; root mean square error of approximation (RMSEA)=0.05; standardized root mean square residual (SRMR)=0.05. Ethnic identity is represented by two latent factors: ethnic identity-commitment and ethnic identity-exploration. Depression, anxiety and social support are single-factor latent variables. For the SEM, the model fit statistics were: χ²(df)=779.26(527), p<.05; CFI/TLI=0.94/0.93; RMSEA=0.05; SRMR=0.05. Ethnic identity-commitment (b=-0.30) and social support (b=-0.33) had direct negative effects on depression, but ethnic identity-exploration did not. Ethnic identity-commitment (b=-0.43) and social support (b=-0.31) had direct negative effects on anxiety, while identity-exploration (b=0.24) demonstrated a positive effect. Social support had direct positive effects on ethnic identity-exploration (b=0.26) and ethnic identity-commitment (b=0.31). Mediation analysis demonstrated the indirect effect of ethnic identity-commitment linking social support and depression (b=0.22). Implications: The results underscore the role of social support in preventing depression via ethnic identity commitment among indigenous adults in Taiwan. Adopting the strength-based approach, mental health practitioners can mobilise indigenous peoples’ commitment to their group to promote their well-being.
Keywords: ethnic identity, indigenous population, mental health, perceived social support
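The bootstrapped indirect effect of the mediation step can be illustrated on simulated stand-in data. The variables, effect sizes, and the simple two-regression mediation below are illustrative assumptions only; the study estimated the paths within a full latent-variable SEM in Mplus:

```python
import numpy as np

rng = np.random.default_rng(42)
n = 200
# Simulated stand-ins: support -> identity commitment -> depression
support = rng.standard_normal(n)
commitment = 0.3 * support + rng.standard_normal(n)
depression = -0.3 * commitment + rng.standard_normal(n)

def slope(x, y):
    """OLS slope of y on x."""
    return np.cov(x, y, bias=True)[0, 1] / np.var(x)

def indirect(s, c, d):
    return slope(s, c) * slope(c, d)   # a-path times b-path

point = indirect(support, commitment, depression)
boot = []
for _ in range(2000):
    i = rng.integers(0, n, n)          # resample cases with replacement
    boot.append(indirect(support[i], commitment[i], depression[i]))
lo, hi = np.percentile(boot, [2.5, 97.5])
print(round(point, 3), round(lo, 3), round(hi, 3))
```

The percentile interval (lo, hi) is the bootstrap confidence interval for the indirect effect; the mediation is supported when it excludes zero.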
Procedia PDF Downloads 103
138 Valuing Social Sustainability in Agriculture: An Approach Based on Social Outputs’ Shadow Prices
Authors: Amer Ait Sidhoum
Abstract:
Interest in sustainability has gained ground among practitioners, academics and policy-makers due to growing stakeholder awareness of environmental and social concerns. This is particularly true for agriculture. However, relatively little research has been conducted on the quantification of social sustainability and the contribution of social issues to agricultural production efficiency. This research's main objective is to propose a method for evaluating the prices of social outputs, more precisely shadow prices, while allowing for the stochastic nature of agricultural production, that is to say, for production uncertainty. In this article, the assessment of social outputs’ shadow prices is conducted within the methodological framework of nonparametric Data Envelopment Analysis (DEA). An output-oriented directional distance function (DDF) is implemented to represent the technology of a sample of Catalan arable crop farms and derive the efficiency scores. The overall production technology of our sample is assumed to be the intersection of two different sub-technologies. The first sub-technology models the production of random desirable agricultural outputs, while the second sub-technology reflects the social outcomes of agricultural activities. Once a nonparametric production technology has been represented, the DDF primal approach can be used for efficiency measurement, while shadow prices are drawn from the dual representation of the DDF. Computing shadow prices is a way to assign an economic value to non-marketed social outcomes. Our research uses cross-sectional, farm-level data collected in 2015 from a sample of 180 Catalan arable crop farms specialized in the production of cereals, oilseeds and protein (COP) crops. Our results suggest that our sample farms show high performance scores, from 85% for the bad state of nature to 88% for the normal and ideal crop-growing conditions. 
This suggests that farm performance increases with an improvement in crop growth conditions. Results also show that the average shadow prices of the desirable state-contingent output and of social outcomes for efficient and inefficient farms are positive, suggesting that the production of desirable marketable outputs and of non-marketable outputs makes a positive contribution to farm production efficiency. Results also indicate that social outputs’ shadow prices are contingent upon the growing conditions: the shadow prices follow an upward trend as crop-growing conditions improve. This finding suggests that efficient farms prefer to allocate more resources to the production of desirable outputs than to social outcomes. To our knowledge, this study represents the first attempt to compute shadow prices of social outcomes while accounting for the stochastic nature of the production technology. Our findings suggest that the decision-making process of efficient farms in dealing with social issues is stochastic and strongly dependent on the growth conditions. This implies that policy-makers should adjust their instruments according to the stochastic environmental conditions. An optimal redistribution of rural development support, increasing the public payment as crop growth conditions improve, would likely enhance the effectiveness of public policies.
Keywords: data envelopment analysis, shadow prices, social sustainability, sustainable farming
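An output-oriented DDF under constant returns to scale can be sketched as a small linear program. This toy version, with one input, one deterministic output, and three hypothetical farms, ignores the state-contingent and social sub-technologies of the actual model; shadow prices would come from the dual variables of the same program:

```python
import numpy as np
from scipy.optimize import linprog

def ddf_score(X, Y, j0, gy=None):
    """Output-oriented directional distance function under CRS:
    max beta s.t. sum_j lam_j*Y[j] >= Y[j0] + beta*gy,
                  sum_j lam_j*X[j] <= X[j0], lam >= 0."""
    n, m = X.shape                          # n farms, m inputs
    gy = Y[j0] if gy is None else gy        # direction: expand own outputs
    # decision variables: [beta, lam_1..lam_n]; linprog minimizes, so use -beta
    c = np.concatenate(([-1.0], np.zeros(n)))
    # output rows: beta*gy_r - sum_j lam_j*Y[j,r] <= -Y[j0,r]
    A_out = np.hstack((gy.reshape(-1, 1), -Y.T))
    b_out = -Y[j0]
    # input rows: sum_j lam_j*X[j,i] <= X[j0,i]
    A_in = np.hstack((np.zeros((m, 1)), X.T))
    b_in = X[j0]
    res = linprog(c, A_ub=np.vstack((A_out, A_in)),
                  b_ub=np.concatenate((b_out, b_in)),
                  bounds=[(None, None)] + [(0, None)] * n)
    return res.x[0]

# Three hypothetical farms: one input, one desirable output
X = np.array([[1.0], [1.0], [1.0]])
Y = np.array([[2.0], [4.0], [3.0]])
scores = [round(ddf_score(X, Y, j), 2) for j in range(3)]
print(scores)   # the second farm defines the frontier, so its score is ~0
```

A score of 0 marks an efficient farm; a positive score is the feasible proportional expansion of its outputs.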
Procedia PDF Downloads 126
137 Chiral Molecule Detection via Optical Rectification in Spin-Momentum Locking
Authors: Jessie Rapoza, Petr Moroshkin, Jimmy Xu
Abstract:
Chirality is omnipresent in nature, in life, and in the field of physics. One intriguing example is the homochirality that has remained a great secret of life. Another is the pairs of mirror-image molecules, enantiomers. They are identical in atomic composition and therefore indistinguishable in their scalar physical properties. Yet, they can be either therapeutic or toxic, depending on their chirality. Recent studies suggest a potential link between abnormal levels of certain D-amino acids and some serious health impairments, including schizophrenia, amyotrophic lateral sclerosis, and potentially cancer. Although indistinguishable in their scalar properties, the chirality of a molecule reveals itself in interaction with surroundings of a certain chirality or, more generally, a broken mirror symmetry. In this work, we report on a system for chiral molecule detection in which the mirror symmetry is doubly broken: first by the asymmetric structuring of a nanopatterned plasmonic surface, then by the incidence of circularly polarized light (CPL). In this system, the incident circularly polarized light induces a surface plasmon polariton (SPP) wave propagating along the asymmetric plasmonic surface. This SPP field is itself chiral, evanescently bound to a near-field zone on the surface (~10 nm thick), but with an amplitude greatly intensified (by up to 10⁴) over that of the incident light. It hence probes just the molecules on the surface instead of those in the volume. In coupling to molecules along its path on the surface, the chiral SPP wave favors one chirality over the other, allowing for chirality detection via the change in an optical rectification current measured at the edges of the sample. The asymmetrically structured surface converts the high-frequency electron plasmonic oscillations of the SPP wave into a net DC drift current, measurable at the edge of the sample, via the mechanism of optical rectification.
The measured results validate these design concepts and principles. The observed optical rectification current exhibits a clear differentiation between a pair of enantiomers. Experiments were performed by focusing 1064 nm CW laser light on the sample, a gold grating microchip submerged in an approximately 1.82 M solution of either L-arabinose or D-arabinose in water. The current output was then recorded under both right and left circularly polarized light. Measurements were taken at various angles of incidence to optimize the coupling between the spin momentum of the incident light and that of the SPP, that is, spin-momentum locking. In order to suppress the background, the values of the photocurrent for the right CPL are subtracted from those for the left CPL. Comparison between the two arabinose enantiomers reveals a preferential response of one enantiomer to left CPL and of the other to right CPL. In sum, this work reports the first experimental evidence of the feasibility of chiral molecule detection via optical rectification in a metal meta-grating. This nanoscale-interfaced electrical detection technology is advantageous over other detection methods due to its size, cost, ease of use, and ability to integrate with read-out electronic circuits for data processing and interpretation.
Keywords: chirality, detection, molecule, spin
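The background-suppression step described above can be sketched as a simple differential measurement: subtract the right-CPL photocurrent from the left-CPL photocurrent at each angle of incidence and look for the angle of strongest chiral contrast. All readings and angles below are made-up illustrative numbers, not data from this experiment.

```python
import numpy as np

# Hypothetical photocurrent readings (nA) vs. angle of incidence for one
# enantiomer; values are illustrative placeholders, not measured data.
angles = np.array([30, 40, 50, 60])      # angle of incidence, degrees
i_lcp = np.array([1.8, 2.6, 3.4, 2.1])   # photocurrent under left CPL
i_rcp = np.array([1.5, 2.0, 2.5, 1.7])   # photocurrent under right CPL

# Background suppression as described in the abstract: the right-CPL
# photocurrent is subtracted from the left-CPL photocurrent per angle.
delta = i_lcp - i_rcp

# The angle maximizing |delta| is where the incident spin momentum
# couples best to the SPP (spin-momentum locking).
best_angle = angles[np.argmax(np.abs(delta))]
```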
Procedia PDF Downloads 92
136 Predicting and Obtaining New Solvates of Curcumin, Demethoxycurcumin and Bisdemethoxycurcumin Based on the CCDC Statistical Tools and Hansen Solubility Parameters
Authors: J. Ticona Chambi, E. A. De Almeida, C. A. Andrade Raymundo Gaiotto, A. M. Do Espírito Santo, L. Infantes, S. L. Cuffini
Abstract:
The solubility of active pharmaceutical ingredients (APIs) is a challenge for the pharmaceutical industry. New multicomponent crystalline forms, such as cocrystals and solvates, present an opportunity to improve the solubility of APIs. Commonly, the procedure to obtain multicomponent crystalline forms of a drug starts by screening the drug molecule with different coformers/solvents. However, it is necessary to develop methods that obtain multicomponent forms efficiently and with the least possible environmental impact. The Hansen Solubility Parameters (HSPs) are considered a tool for obtaining theoretical knowledge of the solubility of a target compound in a chosen solvent. H-Bond Propensity (HBP), Molecular Complementarity (MC) and Coordination Values (CV) are tools for the statistical prediction of cocrystals developed by the Cambridge Crystallographic Data Centre (CCDC). Both the HSPs and the CCDC tools are based on inter- and intramolecular interactions. Curcumin (Cur), the target molecule, is commonly used as an anti-inflammatory agent. Demethoxycurcumin (Demcur) and bisdemethoxycurcumin (Biscur) are natural analogues of Cur from turmeric. These target molecules differ in their solubilities. This work therefore aimed to analyze and compare different tools for predicting multicomponent forms (solvates) of Cur, Demcur and Biscur. The HSP values were calculated for Cur, Demcur and Biscur using group contribution methods and statistical optimization from experimental data, with the HSPmol software. From the HSPs of the target molecules and fifty solvents (listed in the HSP books), the relative energy difference (RED) was determined. The probability that the target molecules would interact with each solvent molecule was determined using the CCDC tools. A dataset of fifty organic solvents was ranked by each prediction method and by a consensus ranking of different combinations of the HSP, CV, HBP and MC values.
Based on the prediction, 15 solvents were selected, including dimethyl sulfoxide (DMSO), tetrahydrofuran (THF), acetonitrile (ACN) and 1,4-dioxane (DOX). In an initial analysis, the slow evaporation technique, from 50°C to room temperature and to 4°C, was used to obtain solvates. The single crystals were collected using a Bruker D8 Venture diffractometer with a Photon100 detector. Data processing and crystal structure determination were performed using the APEX3 and Olex2-1.5 software. According to the results, the HSPs (theoretical and optimized) and the Hansen solubility spheres for Cur, Demcur and Biscur were obtained. With respect to the prediction analyses, one way to evaluate a prediction method was through the ranking and consensus-ranking positions of solvates already reported in the literature. It was observed that the combination HSP-CV gave the best results when compared to the other methods. Furthermore, from the selected solvents, six new solvates, Cur-DOX, Cur-DMSO, Biscur-DOX, Biscur-THF, Demcur-DOX and Demcur-ACN, and a new Biscur hydrate were obtained. Crystal structures were determined for Cur-DOX, Biscur-DOX, Demcur-DOX and Biscur-Water. Moreover, unit-cell parameter information was obtained for Cur-DMSO, Biscur-THF and Demcur-ACN. These preliminary results show that the prediction method is a promising strategy for evaluating the possibility of forming multicomponent crystals. Work is currently underway on obtaining multicomponent single crystals.
Keywords: curcumin, HSPs, prediction, solvates, solubility
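The RED-based screening step can be sketched as follows, using the standard Hansen definition RED = Ra/R0 with Ra² = 4(δD1−δD2)² + (δP1−δP2)² + (δH1−δH2)². The Hansen parameters assigned to curcumin and the interaction radius R0 below are illustrative placeholders, not the optimized values obtained in this study; the DMSO and water parameters are the commonly tabulated Hansen values.

```python
import math

def red(solute_hsp, solvent_hsp, r0):
    """Relative Energy Difference between a solute and a solvent.

    solute_hsp, solvent_hsp : (dD, dP, dH) Hansen parameters in MPa^0.5;
    r0 : interaction radius of the solute's solubility sphere.
    RED < 1 suggests a good solvent, RED > 1 a poor one.
    """
    dD1, dP1, dH1 = solute_hsp
    dD2, dP2, dH2 = solvent_hsp
    ra = math.sqrt(4 * (dD1 - dD2) ** 2
                   + (dP1 - dP2) ** 2
                   + (dH1 - dH2) ** 2)
    return ra / r0

# Hypothetical (dD, dP, dH) for curcumin and an assumed radius r0 = 10.0,
# for illustration only; DMSO and water use standard tabulated values.
curcumin = (20.0, 12.0, 15.0)
solvents = {"DMSO": (18.4, 16.4, 10.2), "water": (15.5, 16.0, 42.3)}
ranking = sorted(solvents, key=lambda s: red(curcumin, solvents[s], r0=10.0))
```

Ranking the fifty candidate solvents by RED (lowest first), and merging that order with the CCDC-tool rankings, gives the consensus ranking described above.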
Procedia PDF Downloads 63