Search results for: distributed optical strain sensing
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 6024


84 Petrogeochemistry of Hornblende-Bearing Gabbro Intrusive, the Greater Caucasus

Authors: Giorgi Chichinadze, David Shengelia, Tamara Tsutsunava, Nikoloz Maisuradze, Giorgi Beridze

Abstract:

The Jalovchat gabbro intrusive is exposed on the northern and southern slopes of the Main Range zone of the Greater Caucasus over an area of about 25 km². It intrudes Precambrian crystalline schists and amphibolites, intensively metamorphosing them along the contact zone. The intrusive is represented by hornblende-bearing gabbro, gabbro-norites and norites, including thin vein bodies of gabbro-pegmatites, anorthosites and micro-gabbros. Particularly noteworthy are the veins of gabbro-pegmatites with gigantic (up to 0.5 m) hornblende crystals. From this point of view, the Jalovchat gabbroid intrusive is particularly interesting, and by its unusual composition it has no analog in the Caucasus overall. A comprehensive petrologic and geochemical study of the intrusive was carried out by the authors, with the following results. Amphiboles correspond to magnesiohastingsite and magnesiohornblende. In hastingsite and hornblende, as a result of isovalent substitution of Fe²⁺ by Mg, the content of the latter has increased. On AFM and Na₂O+K₂O diagrams the intrusive rocks correspond to tholeiitic basalts or basalts close to them in composition. According to the ACM-AFM double diagram, the samples are distributed in the fields of MORB and alkali cumulates. In TiO₂-FeO+Fe₂O₃, Zr/Y-Zr and Ti-Cr/Ni diagrams and the Ti-Cr-Y triangular diagram, the samples fall in the fields of island-arc and mid-oceanic basalts or along the trends reflecting mid-oceanic ridges or island arcs. The K₂O/TiO₂ diagram shows that these rocks belong to the normal and enriched MORB types. According to the Th/Nb/Y ratio, the Jalovchat intrusive composition corresponds to depleted mantle, but by Sm/Y-Ce/Sm to the MORB area. Th/Y and Nb/Y ratios coincide with the MORB composition, Th/Yb-Ta/Yb and La/Nb-Ti ratios correspond to N-MORB, and Rb/Y and Nb/Y to lower-crust formations. Exceptions are the Ce/Pb-Ce and Nb/Th-Nb diagrams, which show the area of primitive mantle.
Spidergrams are characterized by an almost horizontal trend, weakly expressed Eu minima and a slight depletion in light REE. Similar features are characteristic of typical tholeiitic basalts, which, in comparison to MORB spidergrams, are characterized by depletion of light REE. Their correlation with the spidergrams of the Jalovchat intrusive proves that the latter are more depleted. The above points to a gradual depletion of the mantle in light REE over geological time. The RE and REE diagrams reveal an unexpected regularity: the petro-geochemical characteristics of the Jalovchat gabbroid intrusive predominantly correspond to MORB, which is usually an anomalous phenomenon, since in 'ophiolitic' sections magmatic formations represented mainly by gigantic prismatic hornblende-bearing gabbro and gabbro-pegmatite are not indicated. On the basis of petro-mineralogical and petro-geochemical data, the authors consider that the Jalovchat intrusive belongs to the subduction geodynamic type: the MORB rock system was subducted into water-rich depleted mantle, where favorable conditions for the crystallization of hornblende, and especially of its gigantic crystals, occurred. It is considered that the Jalovchat intrusive was formed in deep horizons of the Earth's crust as a result of crystallization of water-bearing Bajocian basaltic magma.

Keywords: The Greater Caucasus, gabbro-pegmatite, hornblende-bearing gabbro, petrogenesis

Procedia PDF Downloads 426
83 Modeling and Simulation of the Structural, Electronic and Magnetic Properties of Fe-Ni Based Nanoalloys

Authors: Ece A. Irmak, Amdulla O. Mekhrabov, M. Vedat Akdeniz

Abstract:

There is a growing interest in the modeling and simulation of magnetic nanoalloys by various computational methods. Magnetic crystalline/amorphous nanoparticles (NP) are interesting materials from both the applied and fundamental points of view, as their properties differ from those of bulk materials and are essential for advanced applications such as high-performance permanent magnets, high-density magnetic recording media, drug carriers, sensors in biomedical technology, etc. As an important magnetic material, Fe-Ni based nanoalloys have promising applications in the chemical industry (catalysis, battery), aerospace and stealth industry (radar absorbing material, jet engine alloys), magnetic biomedical applications (drug delivery, magnetic resonance imaging, biosensor) and computer hardware industry (data storage). The physical and chemical properties of the nanoalloys depend not only on the particle or crystallite size but also on composition and atomic ordering. Therefore, computer modeling is an essential tool to predict structural, electronic, magnetic and optical behavior at the atomistic level and consequently reduce the time needed to design and develop new materials with novel/enhanced properties. Although first-principles quantum mechanical methods provide the most accurate results, they require huge computational effort to solve the Schrödinger equation for only a few tens of atoms. On the other hand, the molecular dynamics method with appropriate empirical or semi-empirical interatomic potentials can give accurate results for the static and dynamic properties of larger systems in a short span of time. In this study, the structural evolution, magnetic and electronic properties of Fe-Ni based nanoalloys have been studied by using the molecular dynamics (MD) method in the Large-scale Atomic/Molecular Massively Parallel Simulator (LAMMPS) and Density Functional Theory (DFT) in the Vienna Ab initio Simulation Package (VASP).
The effects of particle size (in the 2-10 nm range) and temperature (300-1500 K) on the stability and structural evolution of amorphous and crystalline Fe-Ni bulk/nanoalloys have been investigated by combining the molecular dynamics (MD) simulation method with the Embedded Atom Model (EAM). EAM is applicable to Fe-Ni based bimetallic systems because it considers both the pairwise interatomic interaction potentials and the electron densities. The structural evolution of Fe-Ni bulk and nanoparticles (NPs) has been studied by calculating radial distribution functions (RDF), interatomic distances, coordination numbers and core-to-surface concentration profiles, as well as by Voronoi analysis and the dependence of surface energy on temperature and particle size. Moreover, spin-polarized DFT calculations were performed using a plane-wave basis set with generalized gradient approximation (GGA) exchange and correlation effects in the VASP-MedeA package to predict the magnetic and electronic properties of the Fe-Ni based alloys in bulk and nanostructured phases. The results of the theoretical modeling and simulations of the structural evolution, magnetic and electronic properties of Fe-Ni based nanostructured alloys were compared with experimental and other theoretical results published in the literature.
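As a rough illustration of the structural analysis described above, a radial distribution function can be estimated from a single snapshot of particle positions in a cubic periodic box. This is a minimal sketch with illustrative parameters, not the LAMMPS analysis used by the authors:

```python
import numpy as np

def radial_distribution(positions, box_length, dr=0.5):
    """Estimate g(r) for one snapshot of N particles in a cubic periodic box,
    using the minimum-image convention and ideal-gas normalization."""
    n = len(positions)
    r_max = box_length / 2.0
    bins = np.arange(0.0, r_max + dr, dr)
    counts = np.zeros(len(bins) - 1)
    for i in range(n - 1):
        d = positions[i + 1:] - positions[i]
        d -= box_length * np.round(d / box_length)   # minimum-image convention
        counts += np.histogram(np.linalg.norm(d, axis=1), bins=bins)[0]
    rho = n / box_length**3                          # number density
    shells = (4.0 / 3.0) * np.pi * (bins[1:]**3 - bins[:-1]**3)
    g = counts / (shells * rho * n / 2.0)            # each pair counted once
    return 0.5 * (bins[1:] + bins[:-1]), g
```

For an ideal (uniformly random) configuration g(r) fluctuates around 1; peaks in g(r) for a real Fe-Ni configuration reveal coordination shells.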

Keywords: density functional theory, embedded atom model, Fe-Ni systems, molecular dynamics, nanoalloys

Procedia PDF Downloads 212
82 Learning from Dendrites: Improving the Point Neuron Model

Authors: Alexander Vandesompele, Joni Dambre

Abstract:

The diversity in dendritic arborization, as first illustrated by Santiago Ramon y Cajal, has always suggested a role for dendrites in the functionality of neurons. In the past decades, thanks to new recording techniques and optical stimulation methods, it has become clear that dendrites are not merely passive electrical components. They are observed to integrate inputs in a non-linear fashion and actively participate in computations. Regardless, in simulations of neural networks dendritic structure and functionality are often overlooked. Especially in a machine learning context, when designing artificial neural networks, point neuron models such as the leaky integrate-and-fire (LIF) model are dominant. These models mimic the integration of inputs at the neuron soma and ignore the existence of dendrites. In this work, the LIF point neuron model is extended with a simple form of dendritic computation. This gives the LIF neuron an increased capacity to discriminate spatiotemporal input sequences, a dendritic functionality observed in another study. Simulations of the spiking neurons are performed using the Bindsnet framework. In the common LIF model, incoming synapses are independent. Here, we introduce a dependency between incoming synapses such that the post-synaptic impact of a spike is determined not only by the weight of the synapse but also by the activity of other synapses. This is a form of short-term plasticity in which synapses are potentiated or depressed by the preceding activity of neighbouring synapses, and a straightforward way to prevent inputs from simply summing linearly at the soma. To implement this, each pair of synapses on a neuron is assigned a variable representing the synaptic relation. This variable determines the magnitude of the short-term plasticity. These variables can be chosen randomly or, more interestingly, can be learned using a form of Hebbian learning.
We use Spike-Time-Dependent Plasticity (STDP), commonly used to learn synaptic strength magnitudes. If all neurons in a layer receive the same input, they tend to learn the same pattern through STDP. Adding inhibitory connections between the neurons creates a winner-take-all (WTA) network, which causes the different neurons to learn different input sequences. To illustrate the impact of the proposed dendritic mechanism, even without learning, we attach five input neurons to two output neurons. One output neuron is a regular LIF neuron; the other is a LIF neuron with dendritic relationships. The five input neurons are fired in a particular order; the membrane potentials are then reset and the five input neurons are fired in the reversed order. As the regular LIF neuron linearly integrates its inputs at the soma, its membrane potential response to both sequences is similar in magnitude. In the other output neuron, due to the dendritic mechanism, the membrane potential response differs between the two sequences. Hence, the dendritic mechanism improves the neuron's capacity for discriminating spatiotemporal sequences. Dendritic computations improve LIF neurons even when the relationships between synapses are established randomly. Ideally, however, a learning rule is used to improve the dendritic relationships based on input data. Just as synaptic strengths can be learned with STDP to make a neuron more sensitive to its input, dendritic relationships can be learned with STDP to make the neuron more sensitive to spatiotemporal input sequences. Feeding structured data to a WTA network with dendritic computation leads to a significantly higher number of discriminated input patterns. Without the dendritic computation, output neurons are less specific and may, for instance, be activated by a sequence in reverse order.
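The order-sensitivity mechanism described above can be sketched in a toy simulation: the impact of a spike at one synapse is scaled by the recent activity traces of the other synapses, so reversed input sequences produce different membrane responses. All parameter values and names here are illustrative, not taken from the Bindsnet implementation:

```python
import numpy as np

def dendritic_lif_response(spike_times, weights, relations,
                           tau_m=20.0, tau_s=50.0, dt=1.0, t_end=100.0):
    """Peak membrane potential of a LIF-like neuron whose synapses modulate
    each other: the impact of a spike at synapse j is scaled by the recent
    activity traces of the other synapses (toy short-term plasticity).
    spike_times: list of spike-time lists, one per synapse."""
    n_syn = len(weights)
    trace = np.zeros(n_syn)          # recent presynaptic activity per synapse
    v, v_peak = 0.0, 0.0
    for step in range(int(t_end / dt)):
        t = step * dt
        v -= v / tau_m * dt          # membrane leak
        trace -= trace / tau_s * dt  # trace decay
        for j, ts in enumerate(spike_times):
            if any(abs(t - s) < dt / 2 for s in ts):
                gain = 1.0 + relations[j] @ trace   # neighbour modulation
                v += weights[j] * gain
                trace[j] += 1.0
        v_peak = max(v_peak, v)
    return v_peak
```

With a zero relation matrix the neuron reduces to a plain LIF soma and responds identically to a sequence and its reverse; an asymmetric relation matrix makes the two responses differ.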

Keywords: dendritic computation, spiking neural networks, point neuron model

Procedia PDF Downloads 101
81 Liposome Loaded Polysaccharide Based Hydrogels: Promising Delayed Release Biomaterials

Authors: J. Desbrieres, M. Popa, C. Peptu, S. Bacaita

Abstract:

Because of their favorable properties (non-toxicity, biodegradability, mucoadhesivity, etc.), polysaccharides have been studied as biomaterials and as pharmaceutical excipients in drug formulations. These formulations may be produced in a wide variety of forms, including hydrogels, hydrogel-based particles (or capsules), films, etc. In these formulations, the polysaccharide-based materials are able to provide local delivery of the loaded therapeutic agents, but delivery can be rapid and not easily time-controllable owing, in particular, to the burst effect. This leads to a loss in drug efficiency and lifetime. To overcome the consequences of the burst effect, systems involving liposomes incorporated into polysaccharide hydrogels appear as promising materials in tissue engineering, regenerative medicine and drug loading systems. Liposomes are spherical self-closed structures, composed of curved lipid bilayers, which enclose part of the surrounding solvent in their structure. The simplicity of production, their biocompatibility, their size and composition similar to those of cells, the possibility of size adjustment for specific applications, and their ability to load hydrophilic and/or hydrophobic drugs make them a revolutionary tool in nanomedicine and the biomedical domain. Drug delivery systems were developed as hydrogels containing chitosan or carboxymethylcellulose (CMC) as polysaccharides and gelatin (GEL) as a polypeptide, and phosphatidylcholine or phosphatidylcholine/cholesterol liposomes able to accurately control this delivery, without any burst effect. Hydrogels based on CMC were covalently crosslinked using glutaraldehyde, whereas chitosan-based hydrogels were double crosslinked (ionically, using sodium tripolyphosphate or sodium sulphate, and covalently, using glutaraldehyde). It has been proven that liposome integrity is highly protected during the crosslinking procedure for the formation of the film network. Calcein was used as a model active substance for the delivery experiments.
Multilamellar vesicles (MLV) and small unilamellar vesicles (SUV) were prepared and compared. The liposomes are well distributed throughout the whole area of the film, and the vesicle distribution is equivalent (for both types of liposomes evaluated) on the film surface as well as deeper (100 microns) in the film matrix. An obvious decrease of the burst effect was observed in the presence of liposomes, as well as a uniform increase of calcein release that continues even at large time scales: liposomes act as an extra barrier to calcein release. Systems containing MLVs release higher amounts of calcein than systems containing SUVs, although these liposomes are more stable in the matrix and diffuse with difficulty. This difference comes from the higher quantity of calcein present within the MLVs, in relation to their size. Modeling of the release kinetics curves was performed; the release of hydrophilic drugs may be described by a multi-scale mechanism characterized by four distinct phases, each described by a different kinetics model (Higuchi equation, Korsmeyer-Peppas model, etc.). Knowledge of such models will be a very interesting tool for designing new formulations for tissue engineering, regenerative medicine and drug delivery systems.
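As a small illustration of the kinetics modeling mentioned above, the Korsmeyer-Peppas model M_t/M_∞ = k·tⁿ is commonly fitted by linear regression in log-log space (valid for the early release phase, typically M_t/M_∞ < 0.6). The sketch below uses synthetic data, not the authors' measurements; an exponent n ≈ 0.5 recovers Higuchi-type Fickian diffusion:

```python
import numpy as np

def fit_korsmeyer_peppas(t, fraction_released):
    """Fit M_t/M_inf = k * t**n by linear regression in log-log space.
    Returns (k, n); slope of the log-log line is n, intercept is ln(k)."""
    n, log_k = np.polyfit(np.log(t), np.log(fraction_released), 1)
    return np.exp(log_k), n
```

The fitted exponent n is the usual diagnostic: n ≈ 0.5 indicates Fickian diffusion, larger n anomalous (non-Fickian) transport.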

Keywords: controlled and delayed release, hydrogels, liposomes, polysaccharides

Procedia PDF Downloads 201
80 Genetic Polymorphism and In Silico Study of Epitope Block 2 MSP1 Gene of Plasmodium falciparum Isolate Endemic Jayapura

Authors: Arsyam Mawardi, Sony Suhandono, Azzania Fibriani, Fifi Fitriyah Masduki

Abstract:

Malaria is an infectious disease caused by Plasmodium sp. The disease has a high prevalence in Indonesia, especially in Jayapura. The vaccines currently being developed have not been effective in overcoming malaria, owing to the high polymorphism of the Plasmodium genome, especially in regions encoding Plasmodium surface proteins. Merozoite Surface Protein 1 (MSP1) of Plasmodium falciparum is a surface protein that plays a role in the invasion of human erythrocytes through the interaction of the Glycophorin A receptor and sialic acid on erythrocytes with the Reticulocyte Binding Protein (RBP) and Duffy Adhesion Protein (DAP) ligands on merozoites. MSP1 can be targeted as a specific antigen, with predicted epitope regions to be used in the development of malaria diagnostics and vaccines. MSP1 consists of 17 blocks; each block is dimorphic and has been marked as the K1 and MAD20 alleles. The exception is block 2, which has three alleles: K1, MAD20 and RO33. These polymorphisms cause allelic variations and are implicated in the severity of P. falciparum infection. In addition, MSP1 polymorphism in Jayapura isolates has not been reported, making it an interesting target for further identification as a specific antigen. Therefore, in this study, we analyzed allele polymorphism and identified candidate MSP1 epitope antigens on block 2 of P. falciparum. Clinical samples from malaria patients were selected following a consecutive sampling method, with malaria parasites examined in blood smears on glass slides under a microscope. Plasmodium DNA was isolated from the blood of malaria-positive patients. The block 2 MSP1 gene was amplified by PCR and cloned using the pGEM-T Easy vector, then transformed into E. coli TOP10. Positive colony selection was performed by blue-white screening. The presence of the target DNA was confirmed by colony PCR and DNA sequencing.
Furthermore, DNA sequence analysis was done through alignment and construction of a phylogenetic tree using MEGA 6 software, and in silico analysis using IEDB software to predict epitope candidates for P. falciparum. Plasmodium DNA was isolated from a total of 15 patient samples. PCR amplification shows a target gene size of about 1049 bp. Alignment of the MSP1 nucleotide sequences reveals that the block 2 MSP1 genes from the patient samples were distributed across four different allele family groups: K1 (7), MAD20 (1), RO33 (0) and MSP1_Jayapura (10). The most frequently detected allele was the MSP1_Jayapura single allele. There was no significant association between sex, age or parasitemia density and allele variation (Mann-Whitney, p > 0.05), while symptomatic signs differed significantly with the detected allele variation (p < 0.05). The in silico study shows a new candidate epitope antigen from the MSP1_Jayapura allele, predicted to be recognized by B cells, 17 amino acids long, at amino acid positions 187 to 203.
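Linear B-cell epitope prediction of the kind performed with IEDB tools is often based on scanning the protein with a fixed-length window (here 17 residues, matching the reported epitope length) and scoring each window with a per-residue propensity scale. The sketch below uses a purely hypothetical scale; real predictors such as IEDB's BepiPred rely on trained models:

```python
def best_epitope_window(sequence, scale, window=17):
    """Toy sliding-window scan for linear B-cell epitope candidates.
    Scores each window of `window` residues by the mean propensity of its
    residues and returns (start, end, score) with 1-based positions.
    `scale` maps one-letter amino acid codes to hypothetical propensities."""
    best_score, best_start = float("-inf"), 0
    for i in range(len(sequence) - window + 1):
        score = sum(scale.get(aa, 0.0) for aa in sequence[i:i + window]) / window
        if score > best_score:
            best_score, best_start = score, i + 1
    return best_start, best_start + window - 1, best_score
```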

Keywords: epitope candidate, in silico analysis, MSP1 P. falciparum, polymorphism

Procedia PDF Downloads 158
79 Improving the Budget Distribution Procedure to Ensure Smooth and Efficient Public Service Delivery

Authors: Rizwana Tabassum

Abstract:

Introductory Statement: Delay in budget releases is often cited as one of the biggest bottlenecks to smooth and efficient service delivery. While budget release from the ministry of finance to the line ministries has been expedited by simplifying the procedure, budget distribution within the line ministries remains one of the major causes of slow budget utilization. Budget preparation is a bottom-up process in which all DDOs submit their proposals to their controlling officers (for example, the Upazila Civil Surgeon sends theirs to the Director General of Health), who consolidate the proposals in the iBAS++ budget preparation module; the approved budget, however, is not disaggregated by DDO. Instead, it is left to the discretion of the controlling officers to distribute the approved budget to their subordinate offices over the course of the year. Though there are need-based criteria/formulae for distributing the approved budget among DDOs in some sectors, there is little evidence that these criteria are actually used. This means that the majority of DDOs do not know their yearly allocations upfront and so cannot plan activities and expenditures for the year. This delays the implementation of critical activities and payment to suppliers of goods and services, and sometimes leads to undocumented arrears to suppliers of essential goods/services. In addition, social sector budgets are fragmented because of vertical programs and externally financed interventions, which pose several management challenges for budget holders and frontline service providers. Slow procurement processes further delay the provision of necessary goods and services. For example, it takes an average of 15–18 months for drugs to reach the Upazila Health Complex and below, while procurement and distribution should take no more than 9 months. Aim of the Study: This paper aims to investigate the budget distribution practices of an emerging economy, Bangladesh.
The paper identifies challenges to timely distribution and ways to address them. Methodology: The study draws its conclusions from document analysis, a qualitative research method. Major Findings: Upon approval of the National Budget, the Ministry of Finance is required to distribute the budget to budget holders at the department level; however, the budget reaches drawing and disbursing officers much later. Conclusions: Timely and predictable budget releases assist completion of development schemes on time and on budget, with sufficient recurrent resources for effective operation. ADP implementation is usually very low at the beginning of the fiscal year and is expedited dramatically during the last few months, leading to inefficient use of resources. Timely budget release will resolve this issue and deliver economic benefits faster, better, and more reliably. It will also give project directors/DDOs the freedom to plan budget execution in a predictable manner, thereby ensuring value for money by reducing time overruns, expediting the completion of capital investments, and improving infrastructure utilization through timely payment of recurrent costs.

Keywords: budget distribution, challenges, digitization, emerging economy, service delivery

Procedia PDF Downloads 57
78 IoT Continuous Monitoring Biochemical Oxygen Demand Wastewater Effluent Quality: Machine Learning Algorithms

Authors: Sergio Celaschi, Henrique Canavarro de Alencar, Claaudecir Biazoli

Abstract:

Effluent quality is of the highest priority for compliance with the permit limits of environmental protection agencies and ensures the protection of the local water system. Of the pollutants monitored, biochemical oxygen demand (BOD) poses one of the greatest challenges: BOD5 results from the lab take 7 to 8 days of analysis, hindering a wastewater treatment plant's (WWTP's) ability to react to different situations and meet treatment goals. Reducing BOD turnaround time from days to hours is our quest. This work presents a solution based on a system of two BOD bioreactors associated with Digital Twin (DT) and Machine Learning (ML) methodologies via an Internet of Things (IoT) platform to monitor and control a WWTP and support decision making. A DT is a virtual and dynamic replica of a production process. It requires the ability to collect and store real-time sensor data related to the operating environment; furthermore, it integrates and organizes the data on a digital platform and applies analytical models, allowing a deeper understanding of the real process and earlier detection of anomalies. In our system for continuous monitoring of the BOD removed by the effluent treatment process, the DT algorithm analyzing the data applies ML to a parameterized chemical kinetic model. The continuous BOD monitoring system, capable of providing results in a fraction of the time required by BOD5 analysis, is composed of two thermally isolated batch bioreactors.
Each bioreactor contains input/output access for the wastewater sample (influent and effluent); hydraulic conduction tubes, pumps, and valves for the batch sample and dilution water; an air supply for dissolved oxygen (DO) saturation; a cooler/heater for sample thermal stability; an optical DO sensor based on fluorescence quenching; pH, ORP, temperature, and atmospheric pressure sensors; and a local PLC/CPU with a TCP/IP data transmission interface. The dynamic monitoring range covers 2 mg/L < BOD < 2,000 mg/L. In addition to the BOD monitoring system, there are many other operational WWTP sensors. The CPU data is transmitted to and received from the digital platform, which in turn performs analyses at periodic intervals to feed the learning process. BOD bulletins and their credibility intervals are made available to web users at 12-hour intervals. The chemical kinetics ML algorithm is composed of a coupled system of four first-order ordinary differential equations for the molar masses of DO, the organic material present in the sample, biomass, and the products (CO₂ and H₂O) of the reaction. This system is solved numerically from its initial conditions: DO saturated, and initial products of the kinetic oxidation process CO₂ = H₂O = 0. The initial values for organic matter and biomass are estimated by minimizing the mean square deviations. A real case of continuous monitoring of BOD wastewater effluent quality is being conducted by deploying an IoT application on a large wastewater purification system located in São Paulo, Brazil.
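The coupled kinetic system described above can be sketched as four first-order ODEs integrated with a simple explicit scheme. The rate constant, yield and initial values below are illustrative placeholders, not the authors' calibrated parameterization:

```python
def bod_kinetics(t_end=5.0, dt=0.001, do_sat=8.0, s0=5.0, x0=1.0, k=0.23, y=0.4):
    """Toy first-order BOD oxidation model: four coupled ODEs for dissolved
    oxygen (DO), organic substrate (S), biomass (X) and products (P = CO2 + H2O),
    integrated with an explicit Euler scheme (time in days, amounts in mg/L).
    The exerted BOD at time t is do_sat - DO(t)."""
    do, s, x, p = do_sat, s0, x0, 0.0   # initial conditions: DO saturated, P = 0
    for _ in range(int(t_end / dt)):
        r = k * s                 # first-order oxidation of organic matter
        do -= (1 - y) * r * dt    # oxygen consumed by the mineralized fraction
        s -= r * dt               # substrate depletion
        x += y * r * dt           # biomass growth (assimilated fraction)
        p += (1 - y) * r * dt     # oxidation products
    return do, s, x, p
```

With these placeholder values the exerted BOD follows the classic curve (1 − y)·s0·(1 − e^(−kt)), and the substrate/biomass/products mass balance is conserved by construction.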

Keywords: effluent treatment, biochemical oxygen demand, continuous monitoring, IoT, machine learning

Procedia PDF Downloads 48
77 Detection of Triclosan in Water Based on Nanostructured Thin Films

Authors: G. Magalhães-Mota, C. Magro, S. Sério, E. Mateus, P. A. Ribeiro, A. B. Ribeiro, M. Raposo

Abstract:

Triclosan [5-chloro-2-(2,4-dichlorophenoxy)phenol], belonging to the class of Pharmaceuticals and Personal Care Products (PPCPs), is a broad-spectrum antimicrobial agent and bactericide. Because of its antimicrobial efficacy, it is widely used in personal health and skin care products, such as soaps, detergents, hand cleansers, cosmetics, toothpastes, etc. However, it is considered to disrupt the endocrine system, for instance thyroid hormone homeostasis, and possibly the reproductive system. Considering the widespread use of triclosan, it is expected that the environmental and food safety problems it poses will increase dramatically. Triclosan has been found in river water samples in both North America and Europe and is likely widely distributed wherever triclosan-containing products are used. Although significant amounts are removed in sewage plants, considerable quantities remain in the sewage effluent, initiating widespread environmental contamination. Triclosan undergoes bioconversion to methyl-triclosan, which has been demonstrated to bioaccumulate in fish. In addition, triclosan has been found in human urine samples from persons with no known industrial exposure, and in significant amounts in samples of mother's milk, demonstrating its presence in humans. The action of sunlight in river water is known to turn triclosan into dioxin derivatives, raising the possibility of pharmacological dangers not envisioned when the compound was originally utilized. The aim of this work is to detect low concentrations of triclosan in an aqueous complex matrix through the use of a sensor array system, following the electronic tongue concept based on impedance spectroscopy. To achieve this goal, we selected molecules with a high affinity for triclosan and a sensitivity that ensures the detection of concentrations of at least nanomolar level.
Thin films of organic molecules and oxides were produced by the layer-by-layer (LbL) technique and by sputtering onto glass solid supports already covered with gold interdigitated electrodes. By submerging the films in complex aqueous solutions with different concentrations of triclosan, resistance and capacitance values were obtained at different frequencies. The preliminary results showed that an array of interdigitated electrode sensors, coated or uncoated with different LbL and sputtered films, can be used to detect TCS traces in aqueous solutions over a wide concentration range, from 10⁻¹² to 10⁻⁶ M. The PCA method was applied to the measured data in order to differentiate the solutions with different concentrations of TCS. Moreover, it was also possible to trace a calibration curve, a plot of the logarithm of resistance versus the logarithm of concentration, which allowed us to fit the data points with a decreasing straight line with a slope of 0.022 ± 0.006, corresponding to the best sensitivity of our sensor. To find the sensor resolution near the smallest concentration used (Cs = 1 pM): the minimum value that can be measured with resolution is 0.006, so ∆logC = 0.006/0.022 = 0.273, and therefore C − Cs ≈ 0.9 pM. This leads to a sensor resolution of 0.9 pM at the smallest concentration used, 1 pM. This detection limit is lower than values reported in the literature.
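The calibration and resolution reasoning above can be reproduced in a few lines: fit log(resistance) against log(concentration), then convert the smallest resolvable log-resistance step into a concentration difference near the lowest calibration point. The synthetic data in the usage below simply mirror the reported slope (0.022 in magnitude) and range (10⁻¹² to 10⁻⁶ M); they are not the measured values:

```python
import numpy as np

def calibration_resolution(conc_molar, resistance_ohm, min_detectable=0.006):
    """Fit log10(resistance) vs log10(concentration) with a straight line and
    estimate the concentration resolution near the lowest calibration point:
    Delta(log C) = smallest resolvable log-resistance step / |slope|."""
    slope, intercept = np.polyfit(np.log10(conc_molar), np.log10(resistance_ohm), 1)
    delta_log_c = min_detectable / abs(slope)
    c_min = conc_molar.min()
    resolution = c_min * (10**delta_log_c - 1)   # C - Cs near the smallest conc
    return slope, delta_log_c, resolution
```

With a slope of magnitude 0.022 and a smallest concentration of 1 pM, this reproduces the abstract's ∆logC ≈ 0.27 and resolution of roughly 0.9 pM.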

Keywords: triclosan, layer-by-layer, impedance spectroscopy, electronic tongue

Procedia PDF Downloads 227
76 Harnessing Nature's Fury: Hyptis Suaveolens Loaded Bioactive Liposome for Photothermal Therapy of Lung Cancer

Authors: Sajmina Khatun, Monika Pebam, Aravind Kumar Rengan

Abstract:

Photothermal therapy, a subset of nanomedicine, takes advantage of light-absorbing agents to generate localized heat, selectively eradicating cancer cells. This innovative approach minimizes damage to healthy tissues and offers a promising avenue for targeted cancer treatment. Unlike conventional therapies, photothermal therapy harnesses the power of light to combat malignancies precisely and effectively, showcasing its potential to revolutionize cancer treatment paradigms. The combined strengths of nanomedicine and photothermal therapy signify a transformative shift toward more effective, targeted, and tolerable cancer treatments in the medical landscape. Natural products are instrumental in formulating diverse bioactive medications owing to the pharmacological properties conferred by their phenolic structures, triterpenoids, and similar compounds. Hyptis suaveolens, commonly known as pignut, is an aromatic herb of the Lamiaceae family and a valuable therapeutic plant. Flourishing in swamps and alongside tropical and subtropical roadsides, this noxious weed impedes the development of adjacent plants, and it ranks among the most globally distributed alien invasive species. The present investigation revealed that a versatile, biodegradable liposome nanosystem (HIL NPs), incorporating bioactive molecules from Hyptis suaveolens, exhibits effective bioavailability to cancer cells, enabling tumor ablation upon near-infrared (NIR) laser exposure. The bioactive molecules from Hyptis within the nanosystem function as anticancer agents, aiding the photothermal ablation of highly metastatic lung cancer cells. Despite being a prolific weed impeding neighboring plant growth, Hyptis suaveolens thus showcases therapeutic benefits through its bioactive compounds.
The obtained HIL NPs, characterized as a photothermally active liposome nanosystem, demonstrate a pronounced fluorescence absorption peak in the NIR range and achieve a high photothermal conversion efficiency under NIR laser irradiation. Transmission electron microscopy (TEM) and particle size analysis reveal that the HIL NPs possess a spherical shape with a size of 141 ± 30 nm. Moreover, in vitro assessments of HIL NPs against lung cancer cell lines (A549) indicate effective anticancer activity through a combined cytotoxic and hyperthermic effect. Tumor ablation is facilitated by apoptosis, induced via overexpression of γ-H2AX, which arrests cancer cell proliferation. Consequently, the multifunctional and biodegradable nanosystem (HIL NPs), incorporating bioactive compounds from Hyptis, provides a valuable perspective for developing an innovative therapeutic strategy from a challenging weed. This approach holds promise for potential applications in both bioimaging and combined phyto-photothermal therapy for cancer treatment.

Keywords: bioactive liposome, Hyptis suaveolens, photothermal therapy, lung cancer

Procedia PDF Downloads 60
75 Mechanical Transmission of Parasites by Cockroaches Collected from the Urban Environment of Lahore, Pakistan

Authors: Hafsa Memona, Farkhanda Manzoor

Abstract:

Cockroaches are termed medically important pests because of their wide distribution in human habitations, including houses, hospitals, food industries, and kitchens. They may harbor multiple-drug-resistant pathogenic bacteria and protozoan parasites on their external surfaces, disseminate them onto human food, and cause serious diseases and allergies in humans. Hence, owing to their nocturnal activity and nutritional behavior, they are regarded as mechanical vectors in human habitations. Viable eggs and dormant cysts of parasites can hitch a ride on cockroaches: ova and cysts of parasitic organisms may settle into the crevices and cracks between the thorax and head, and the many fissures, clefts, and crannies on a cockroach's body provide further sites for these organisms. This study aimed to identify the role of cockroaches in mechanically transmitting and disseminating gastrointestinal parasites in two environmental settings, hospitals and houses, in the urban area of Lahore. In total, 250 adult cockroaches were collected from houses and hospitals by sticky traps and food-baited traps and screened for parasitic load. All cockroaches were captured during their feeding time in their natural habitat. Direct wet smears, 1% Lugol's iodine, and modified acid-fast staining were used to identify the parasites from the body surfaces of the cockroaches. Two cockroach species common in human habitations were collected: Periplaneta americana and Blattella germanica. The results showed that 112 (46.8%) cockroaches harbored at least one human intestinal parasite on their body surfaces. Cockroaches from the hospital environment harbored more parasites than those from houses: 47 (33.57%) cockroaches from houses and 65 (59.09%) from hospitals were infected with parasitic organisms. Of these, 76 (67.85%) were parasitic protozoans and 36 (32.15%) were pathogenic and non-pathogenic intestinal parasites. P. americana harbored more parasites than B. germanica in both environments.
The most common human intestinal parasites found on cockroaches included ova of Ascaris lumbricoides (giant roundworm), Trichuris trichiura (whipworm), Ancylostoma duodenale (hookworm), Enterobius vermicularis (pinworm), Taenia spp., and Strongyloides stercoralis (threadworm). Cysts of protozoan parasites, including Balantidium coli, Entamoeba histolytica, C. parvum, Isospora belli, Giardia duodenalis, and C. cayetanensis, were also isolated and identified from cockroaches. The two sampling sites differed significantly in the parasitic load carried by cockroaches. Differences in the hygienic conditions of the environments, including human excrement disposal, the variety of habitats with which the insects interacted, and the mix of indoor and outdoor species, may account for the observed variation in parasitic carriage rates between sites. A key finding of this study is that cockroaches are widely distributed in human habitations and act as mechanical vectors of pathogenic parasites that cause common illnesses such as diarrhea and bowel disorders. This contributes to the epidemiological chain; control of cockroaches would therefore significantly lessen the prevalence of such illnesses in humans. Effective control strategies will reduce the public health burden of gastrointestinal parasites in developing countries.

Keywords: cockroaches, health risks, hospitals, houses, parasites, protozoans, transmission

Procedia PDF Downloads 260
74 Increasing Prevalence of Multi-Allergen Sensitivities in Patients with Allergic Rhinitis and Asthma in Eastern India

Authors: Sujoy Khan

Abstract:

There is rising concern over increasing allergies affecting both adults and children in rural and urban India. A recent report on adults in a densely populated North Indian city showed sensitization rates for house dust mite, parthenium, and cockroach of 60%, 40%, and 18.75%, now comparable to allergy prevalence in cities in the United States. Data from patients residing in the eastern part of India are scarce. A retrospective study (over 2 years) was done on patients with allergic rhinitis and asthma in whom allergen-specific IgE levels were measured to determine the aeroallergen sensitization pattern in a large metropolitan city of East India. Total IgE and allergen-specific IgE levels were measured using ImmunoCAP (Phadia 100, Thermo Fisher Scientific, Sweden) with region-specific aeroallergens: Dermatophagoides pteronyssinus (d1); Dermatophagoides farinae (d2); cockroach (i206); grass pollen mix (gx2) consisting of Cynodon dactylon, Lolium perenne, Phleum pratense, Poa pratensis, Sorghum halepense, and Paspalum notatum; tree pollen mix (tx3) consisting of Juniperus sabinoides, Quercus alba, Ulmus americana, Populus deltoides, and Prosopis juliflora; food mix 1 (fx1) consisting of peanut, hazelnut, Brazil nut, almond, and coconut; mould mix (mx1) consisting of Penicillium chrysogenum, Cladosporium herbarum, Aspergillus fumigatus, and Alternaria alternata; animal dander mix (ex1) consisting of cat, dog, cow, and horse dander; and weed mix (wx1) consisting of Ambrosia elatior, Artemisia vulgaris, Plantago lanceolata, Chenopodium album, and Salsola kali, following the manufacturer's instructions. As the IgE levels were not uniformly distributed, median values were used to represent the data. Ninety-two patients with allergic rhinitis and asthma (united airways disease), including 21 children (age < 12 years), had total IgE and allergen-specific IgE levels measured over the 2 years.
The median IgE level was higher in 2016 than in 2015, with 60% of patients (adults and children) sensitized to house dust mite (dual positivity for Dermatophagoides pteronyssinus and D. farinae). Of 11 children in 2015, whose total IgE ranged from 16.5 to >5000 kU/L, 36% were polysensitized (≥4 allergens) and 55% were sensitized to dust mites. Of 10 children in 2016, whose total IgE levels ranged from 37.5 to 2628 kU/L, 20% were polysensitized and 60% were sensitized to dust mites. Mould sensitivity was 10% in both years among the children studied. A consistent finding was that ragweed sensitization (molecular homology to Parthenium hysterophorus) appeared to be increasing across all age groups and throughout the year, as we reported previously in a study in which 25% of patients were sensitized. In the study sample overall, sensitizations to dust mite, cockroach, and parthenium were important risks in patients with moderate to severe asthma, reinforcing the importance of controlling indoor exposure to these allergens. Sensitizations to dust mite, cockroach, and parthenium allergens are important predictors of asthma morbidity not only among children but also among adults in Eastern India.

Keywords: aeroallergens, asthma, dust mite, parthenium, rhinitis

Procedia PDF Downloads 172
73 Colloid-Based Biodetection at Aqueous Electrical Interfaces Using Fluidic Dielectrophoresis

Authors: Francesca Crivellari, Nicholas Mavrogiannis, Zachary Gagnon

Abstract:

Portable diagnostic methods have become increasingly important for a number of different purposes: point-of-care screening in developing nations, environmental contamination studies, bio/chemical warfare agent detection, and end-user commercial health monitoring. The cheapest and most portable methods currently available are paper-based: lateral flow and dipstick methods are widely available in drug stores for use in pregnancy detection and blood glucose monitoring. These tests are successful because they are cheap to produce, easy to use, and require minimally invasive sampling. While adequate for their intended uses, in the realm of blood-borne pathogens and numerous cancers these paper-based methods become unreliable, as they lack the nM/pM sensitivity currently achieved by clinical diagnostic methods. Clinical diagnostics, however, utilize techniques involving surface plasmon resonance (SPR) and enzyme-linked immunosorbent assays (ELISAs), which are expensive and unfeasible in terms of portability. To develop a better, competitive biosensor, we must reduce the cost of one or increase the sensitivity of the other. Electric fields are commonly utilized in microfluidic devices to manipulate particles, biomolecules, and cells. Applications in this area, however, are primarily limited to interfaces formed between immiscible fluids. Miscible liquid-liquid interfaces are common in microfluidic devices and are easily reproduced with simple geometries. Here, we demonstrate the use of electric fields at liquid-liquid electrical interfaces, known as fluidic dielectrophoresis (fDEP), for biodetection in a microfluidic device. In this work, we apply an AC electric field across concurrent laminar streams with differing conductivities and permittivities to polarize the interface and induce a discernible, near-immediate, frequency-dependent interfacial tilt.
We design this aqueous electrical interface, which becomes the biosensing “substrate,” to be intelligent – it “moves” only when a target of interest is present. This motion requires neither labels nor expensive electrical equipment, so the biosensor is inexpensive and portable, yet still capable of sensitive detection. Nanoparticles, due to their high surface-area-to-volume ratio, are often incorporated to enhance detection capabilities of schemes like SPR and fluorimetric assays. Most studies currently investigate binding at an immobilized solid-liquid or solid-gas interface, where particles are adsorbed onto a planar surface, functionalized with a receptor to create a reactive substrate, and subsequently flushed with a fluid or gas with the relevant analyte. These typically involve many preparation and rinsing steps, and are susceptible to surface fouling. Our microfluidic device is continuously flowing and renewing the “substrate,” and is thus not subject to fouling. In this work, we demonstrate the ability to electrokinetically detect biomolecules binding to functionalized nanoparticles at liquid-liquid interfaces using fDEP. In biotin-streptavidin experiments, we report binding detection limits on the order of 1-10 pM, without amplifying signals or concentrating samples. We also demonstrate the ability to detect this interfacial motion, and thus the presence of binding, using impedance spectroscopy, allowing this scheme to become non-optical, in addition to being label-free.

Keywords: biodetection, dielectrophoresis, microfluidics, nanoparticles

Procedia PDF Downloads 362
72 An Argument for Agile, Lean, and Hybrid Project Management in Museum Conservation Practice: A Qualitative Evaluation of the Morris Collection Conservation Project at the Sainsbury Centre for Visual Arts

Authors: Maria Ledinskaya

Abstract:

This paper is part case study and part literature review. It seeks to introduce Agile, Lean, and Hybrid project management concepts from business, software development, and manufacturing fields to museum conservation by looking at their practical application on a recent conservation project at the Sainsbury Centre for Visual Arts. The author outlines the advantages of leaner and more agile conservation practices in today’s faster, less certain, and more budget-conscious museum climate where traditional project structures are no longer as relevant or effective. The Morris Collection Conservation Project was carried out in 2019-2021 in Norwich, UK, and concerned the remedial conservation of around 150 Abstract Constructivist artworks bequeathed to the Sainsbury Centre by private collectors Michael and Joyce Morris. It was a medium-sized conservation project of moderate complexity, planned and delivered in an environment with multiple known unknowns – unresearched collection, unknown conditions and materials, unconfirmed budget. The project was later impacted by the COVID-19 pandemic, introducing indeterminate lockdowns, budget cuts, staff changes, and the need to accommodate social distancing and remote communications. The author, then a staff conservator at the Sainsbury Centre who acted as project manager on the Morris Project, presents an incremental, iterative, and value-based approach to managing a conservation project in an uncertain environment. The paper examines the project from the point of view of Traditional, Agile, Lean, and Hybrid project management. The author argues that most academic writing on project management in conservation has focussed on a Traditional plan-driven approach – also known as Waterfall project management – which has significant drawbacks in today’s museum environment due to its over-reliance on prediction-based planning and its low tolerance to change. 
In the last 20 years, alternative Agile, Lean and Hybrid approaches to project management have been widely adopted in software development, manufacturing, and other industries, although their recognition in the museum sector has been slow. Using examples from the Morris Project, the author introduces key principles and tools of Agile, Lean, and Hybrid project management and presents a series of arguments on the effectiveness of these alternative methodologies in museum conservation, including the ethical and practical challenges to their implementation. These project management approaches are discussed in the context of consequentialist, relativist, and utilitarian developments in contemporary conservation ethics. Although not intentionally planned as such, the Morris Project had a number of Agile and Lean features which were instrumental to its successful delivery. These key features are identified as distributed decision-making, a co-located cross-disciplinary team, servant leadership, focus on value-added work, flexible planning done in shorter sprint cycles, light documentation, and emphasis on reducing procedural, financial, and logistical waste. Overall, the author’s findings point in favour of a hybrid model, which combines traditional and alternative project processes and tools to suit the specific needs of the project.

Keywords: agile project management, conservation, hybrid project management, lean project management, waterfall project management

Procedia PDF Downloads 47
71 The Effect of Students’ Social and Scholastic Background and Environmental Impact on Shaping Their Pattern of Digital Learning in Academia: A Pre- and Post-COVID Comparative View

Authors: Nitza Davidovitch, Yael Yossel-Eisenbach

Abstract:

The purpose of the study was to inquire whether there was a change in the shaping of undergraduate students' digitally oriented study pattern in the pre-COVID (2016-2017) versus post-COVID period (2022-2023), as affected by three factors: social background characteristics, high school background, and academic background characteristics. These two time points were characterized by dramatic changes in teaching and learning at institutions of higher education. The data were collected via cross-sectional surveys at two time points, in the 2016-2017 academic school year (N=443) and in the 2022-2023 school year (N=326). The questionnaire was distributed on social media and included questions on demographic background characteristics, previous studies in high school and present academic studies, and learning and reading habits. Method of analysis: A. Descriptive statistical analysis. B. Mean comparison tests were conducted to analyze variations in the mean score for the digitally oriented learning pattern variable at the two time points (pre- and post-COVID) in relation to each of the independent variables. C. Analysis of variance was performed to test the main effects and the interactions. D. Linear regression was applied to examine the combined effect of the independent variables on shaping students' digitally oriented learning habits. The analysis includes four models; in all four, the dependent variable is students' perception of digitally oriented learning. The first model included social background variables; the second added scholastic background; the third added the academic background variables; and the fourth includes all the independent variables together with the variable of period (pre- and post-COVID). E.
Factor analysis was performed using the principal component method with varimax rotation; the variables were constructed as a weighted mean of all the relevant statements merged to form a single variable denoting a shared content world. The research findings indicate a significant rise in students' perceptions of digitally oriented learning in the post-COVID period. From a gender perspective, the impact of COVID on shaping a digital learning pattern was much more significant for female students. The effect of socioeconomic status disappears when controlling for period, while the student's job exerts a stronger effect than all other variables. It may be assumed that the student's work pattern mediates effects related to the convenience digital learning offers with regard to distance and time. The significant effect of scholastic background on shaping students' digital learning patterns remained stable, even when controlling for all explanatory variables. The advantage that universities had over colleges in shaping a digital learning pattern in the pre-COVID period dissipated. Therefore, it can be said that after COVID there was a change in how colleges shape students' digital learning patterns, such that no institutional differences are evident with regard to shaping the digital learning pattern. The study shows that period has a significant independent effect on shaping students' digital learning patterns when controlling for the explanatory variables.
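The four nested regression models described above can be sketched as hierarchical ordinary least squares: each model adds a block of predictors, and the gain in R² when the period dummy enters mirrors the study's test of an independent period effect. The variable names and values below are hypothetical illustrations, not the study's data.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 300
ses = rng.normal(size=n)              # socioeconomic background (hypothetical)
school = rng.normal(size=n)           # scholastic background (hypothetical)
period = rng.integers(0, 2, size=n)   # 0 = pre-COVID survey, 1 = post-COVID survey
# Outcome: digitally oriented learning score, with a genuine period effect built in
y = 0.3 * school + 0.5 * period + rng.normal(scale=0.5, size=n)

def r_squared(X, y):
    """Fit OLS via least squares and return the coefficient of determination."""
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return 1 - (resid @ resid) / ((y - y.mean()) ** 2).sum()

ones = np.ones(n)
m1 = r_squared(np.column_stack([ones, ses]), y)                  # social background only
m2 = r_squared(np.column_stack([ones, ses, school]), y)          # + scholastic background
m4 = r_squared(np.column_stack([ones, ses, school, period]), y)  # + period dummy

print(m1, m2, m4)  # R^2 is non-decreasing as each block of predictors is added
```

Because the models are nested, R² can only rise as blocks are added; the interesting quantity is how much the period dummy adds once background variables are controlled for.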

Keywords: learning pattern, COVID, socioeconomic status, digital learning

Procedia PDF Downloads 25
70 Pharmacognostical, Phytochemical and Biological Studies of Leaves and Stems of Hippophae salicifolia

Authors: Bhupendra Kumar Poudel, Sadhana Amatya, Tirtha Maiya Shrestha, Bharatmani Pokhrel, Mohan Prasad Amatya

Abstract:

Background: H. salicifolia is a dense, branched, multipurpose, deciduous, nitrogen-fixing, thorny, willow-like small to moderate tree restricted to the Himalaya. Of the two species found in Nepal (Hippophae salicifolia and H. tibetana), H. salicifolia has traditionally been used as a food additive, as an anticancer agent (bark), and for treating toothache, tooth inflammation (anti-inflammatory), and radiation injury, while people of Western Nepal have largely overlooked this hidden treasure, using the plant only for fuel, wood, and soil stabilization. Therefore, the main objective of this study was to explore the biological (analgesic, antidiabetic, cytotoxic, and anti-inflammatory) properties of this plant. Methodology: Transverse sections of leaves and stems were viewed under a microscope. Extracts obtained by Soxhlet extraction were subjected to phytochemical and biological tests. Rats (used to study antidiabetic and anti-inflammatory properties) and mice (used to study analgesic, CNS depressant, muscle relaxant, and locomotor properties) were assumed to be normally distributed; ANOVA and post hoc Tukey tests were then used to assess significance. The data obtained were analyzed with SPSS 17 and Excel 2007. Results and Conclusion: Pharmacognostical analysis revealed the presence of long stellate trichomes, double-layered vascular bundles 5-6 in number, and double-layered compact sclerenchyma. Preliminary phytochemical screening of the extracts showed positive reaction tests for glycosides, steroids, tannins, flavonoids, saponins, coumarins, and reducing sugars. The brine shrimp lethality bioassay, tested at 1000, 100, and 10 ppm, revealed cytotoxic activity in the methanol, water, chloroform, and ethyl acetate extracts, with LC50 (μg/ml) values of 61.42, 99.77, 292.72, and 277.84, respectively. The cytotoxic activity may be due to the presence of tannins in the constituents.
Antimicrobial screening of the extracts by the cup diffusion method using Staphylococcus aureus, Escherichia coli, and Pseudomonas aeruginosa against standard antibiotics (oxacillin, gentamicin, and amikacin, respectively) showed no activity against the microorganisms tested. The methanol extract of the stems and leaves showed various pharmacological properties: antidiabetic, anti-inflammatory, analgesic [chemical writhing method], CNS depressant, muscle relaxant, and locomotor activities in a dose-dependent fashion, indicating the possible presence of different constituents in the stems and leaves responsible for these biological activities. All the effects, when analyzed by the post hoc Tukey test, were found to be significant at the 95% confidence level. The antidiabetic activity was presumed to be due to flavonoids present in the extract. Therefore, it can be concluded that this plant's secondary metabolites possess strong antidiabetic, anti-inflammatory, and cytotoxic activity, which could be isolated for further investigation.
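An LC50 of the kind reported above is commonly estimated by interpolating observed mortality against log10 concentration between the two test doses that bracket 50% (probit regression is the more formal alternative). A minimal sketch, using invented mortality fractions rather than the study's counts:

```python
import math

def lc50(concs_ppm, mortality):
    """Estimate LC50 by linear interpolation of mortality vs log10(concentration).

    concs_ppm : ascending test concentrations (e.g. 10, 100, 1000 ppm)
    mortality : observed mortality fractions at those concentrations
    """
    for (c_lo, m_lo), (c_hi, m_hi) in zip(zip(concs_ppm, mortality),
                                          zip(concs_ppm[1:], mortality[1:])):
        if m_lo <= 0.5 <= m_hi:  # bracketing pair found
            frac = (0.5 - m_lo) / (m_hi - m_lo)
            log_lc50 = math.log10(c_lo) + frac * (math.log10(c_hi) - math.log10(c_lo))
            return 10 ** log_lc50
    raise ValueError("50% mortality not bracketed by the tested concentrations")

# Hypothetical brine shrimp data: 10%, 40%, 80% mortality at 10, 100, 1000 ppm
print(lc50([10, 100, 1000], [0.1, 0.4, 0.8]))
```

Interpolating on the log scale reflects the roughly log-linear dose-response typical of such bioassays.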

Keywords: Hippophae salicifolia, constituents, antidiabetic, inflammatory, brine shrimp

Procedia PDF Downloads 317
69 Sedimentation and Morphology of the Kura River-Deltaic System in the Southern Caucasus under Anthropogenic and Sea-Level Controls

Authors: Elmira Aliyeva, Dadash Huseynov, Robert Hoogendoorn, Salomon Kroonenberg

Abstract:

The Kura River is the major water artery in the Southern Caucasus: it is the third river in the Caspian Sea basin in terms of length and catchment area, the second in terms of water budget, and the first in volume of sediment load. Understanding the major controls on the Kura fluvial-deltaic system is valuable for efficient management of the highly populated river basin and coastal zone. We have studied the grain size of sediments accumulated in the river channels and delta, dated by the 210Pb method, together with aerial photographs, old topographic and geological maps, and archive data. At present, sediments are supplied by the Kura River to the Caspian Sea through three distributary channels oriented north-east, south-east, and south-west. The river is dominated by suspended load: mud, silt, and very fine sand. Coarse sediments accumulate in the distributaries, levees, point bars, and delta front. The annual suspended sediment budget in the period 1934-1952, before construction of the Mingechavir water reservoir in the Kura midstream area in 1953, was 36 mln t/yr. From 1953 to 1964, the suspended load dropped to 12 mln t/yr. After regulation of the Kura River discharge, the volume of suspended load transported via the north-eastern channel fell from 35% of the total sediment amount to 4%, while that through the main south-eastern channel increased from 65% to 96%, with a further fall to 56% due to the creation of the new south-western channel in 1964. Between 1967 and 1976 the annual sediment budget of the Kura River reached 22.5 mln t/yr; from 1977 to 1986 it dropped to 17.6 mln t/yr. The historical data show that between 1860 and 1907, during a relatively stable Caspian Sea level, two channels (N and SE) appear to have distributed equal amounts of sediment, as seen from the bilateral geometry of the delta. In the period 1907-1929, two new channels (E and NE) appeared.
The growth of three delta lobes (N, NE, and SE) and rapid progradation of the delta occurred against the background of Caspian Sea level rise as a result of very high sediment supply. Since 1929 the Caspian Sea level decline was followed by progradation of the delta along the SE channel, while the eastern and northern channels silted up. The slow rate of progradation at its initial stage was caused by the artificial reduction in the sediment budget. However, the continued sea-level fall increased the river bed gradient, the erosion rate, and the sediment supply, leading to more rapid progradation. During the subsequent sea-level rise after 1977, accompanied by a decrease in the sediment budget, the southern part of the delta turned into a complex of small, shallow channels oriented to the south. The data demonstrate that, besides anthropogenic regulation, the behaviour of the Kura fluvial-deltaic system and variations in its sediment budget are strongly governed by the very rapid changes in Caspian Sea level.

Keywords: anthropogenic control on sediment budget, Caspian sea-level variations, Kura river sediment load, morphology of the Kura river delta, sedimentation in the Kura river delta

Procedia PDF Downloads 129
68 Estimated Heat Production, Blood Parameters and Mitochondrial DNA Copy Number of Nellore Bulls with High and Low Residual Feed Intake

Authors: Welder A. Baldassini, Jon J. Ramsey, Marcos R. Chiaratti, Amália S. Chaves, Renata H. Branco, Sarah F. M. Bonilha, Dante P. D. Lanna

Abstract:

With increased production costs, there is a need for animals that are more efficient in terms of meat production. In this context, the role of mitochondrial DNA (mtDNA) in physiological processes in liver, muscle, and adipose tissues may account for inter-animal variation in energy expenditure and heat production. The purpose of this study was to investigate whether the amounts of mtDNA in liver, muscle, and adipose tissue (subcutaneous and visceral depots) of Nellore bulls are associated with residual feed intake (RFI) and estimated heat production (EHP). Eighteen animals were individually fed in a feedlot for 90 days. RFI values were obtained by regression of dry matter intake (DMI) on average daily gain (ADG) and mid-test metabolic body weight (BW), and the animals were classified into low (more efficient) and high (less efficient) RFI groups. The bulls were randomly distributed in individual pens, where they were given excess feed twice daily to yield 5 to 10% orts for 90 d, with a diet containing 15% crude protein and 2.7 Mcal ME/kg DM. The heart rate (HR) of the bulls was monitored for 4 consecutive days and used to calculate EHP; electrodes were fitted to the bulls with stretch belts (Polar RS400; Kempele, Finland). To calculate oxygen pulse (O2P), oxygen consumption was measured using a facemask connected to a gas analyzer (EXHALYZER, ECO Medics, Zurich, Switzerland) while HR was simultaneously recorded for a 15-minute period. Daily oxygen (O2) consumption was calculated by multiplying the volume of O2 per beat by the total daily beats. EHP was calculated by multiplying O2P by the average HR obtained during the 4 days, assuming 4.89 kcal/L of O2; daily EHP was expressed in kilocalories/day/kilogram metabolic BW (kcal/day/kg BW0.75). Blood samples were collected between days 45 and 90 of the trial period to measure hemoglobin concentration and hematocrit.
The bulls were slaughtered in an experimental slaughterhouse in accordance with current guidelines. Immediately after slaughter, a section of liver, a portion of longissimus thoracis (LT) muscle, a portion of subcutaneous fat (surrounding the LT muscle), and portions of visceral fat (kidney, pelvic, and inguinal fat) were collected. Samples of liver, muscle, and adipose tissues were used to quantify mtDNA copy number per cell; the number of mtDNA copies was determined by normalization of the mtDNA amount against a single-copy nuclear gene (B2M). Means of EHP, hemoglobin, and hematocrit of high and low RFI bulls were compared using two-sample t-tests. Additionally, one-way ANOVA was used to compare mtDNA quantification considering the main effects of RFI group. We found lower EHP (83.047 vs. 97.590 kcal/day/kg BW0.75; P < 0.10), hemoglobin concentration (13.533 vs. 15.108 g/dL; P < 0.10), and hematocrit percentage (39.3 vs. 43.6%; P < 0.05) in low compared with high RFI bulls, respectively, which may be useful traits for identifying efficient animals. However, no differences were observed between the mtDNA content in liver, muscle, and adipose tissue of Nellore bulls with high and low RFI.
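The EHP arithmetic described above (oxygen pulse × daily heart beats × 4.89 kcal/L of O2, scaled by metabolic body weight) can be reproduced in a few lines. The oxygen pulse, heart rate, and body weight below are hypothetical values chosen only to land in the study's reported range, not measurements from the study:

```python
def estimated_heat_production(o2_pulse_ml, mean_hr_bpm, body_weight_kg,
                              kcal_per_l_o2=4.89):
    """Daily EHP in kcal/day per kg of metabolic body weight (BW^0.75)."""
    beats_per_day = mean_hr_bpm * 60 * 24          # beats/min -> beats/day
    o2_l_per_day = o2_pulse_ml * beats_per_day / 1000.0  # mL -> L
    ehp_kcal_day = o2_l_per_day * kcal_per_l_o2    # caloric equivalent of O2
    return ehp_kcal_day / body_weight_kg ** 0.75

# Hypothetical bull: 16 mL O2 per beat, 70 bpm average, 400 kg live weight
print(round(estimated_heat_production(16.0, 70, 400), 1))  # kcal/day/kg BW^0.75
```

With these illustrative inputs the result falls between the low-RFI and high-RFI group means reported above, which is what makes the worked example plausible.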

Keywords: bioenergetics, Bos indicus, feed efficiency, mitochondria

Procedia PDF Downloads 221
67 Universal Health Coverage 2019 in Indonesia: The Integration of Family Planning Services in Current Functioning Health System

Authors: Fathonah Siti, Ardiana Irma

Abstract:

Indonesia is currently on track to achieve Universal Health Coverage (UHC) by 2019. The program aims to address disintegration in the implementation and coverage of various health insurance schemes and fragmented fund pooling. Family planning services are covered as one of the benefit packages under preventive care. However, little has been done to examine how family planning programs are managed across levels of government and how family planning services are delivered to the end user. The study was performed through focus group discussions with relevant policy makers and selected programmers at the central and district levels, and it also benefited from relevant studies on family planning in the UHC scheme and other supporting data. The study carefully investigates the programmatic implications of integrating family planning in the UHC program, encompassing the need to recalculate contraceptive logistics for beneficiaries (eligible couples); policy reformulation for contraceptive service provision, including supply chain management; establishment of family planning standards of procedure; and a call to update the Management Information System. The study confirms a significant increase in the number of contraceptive commodities that need to be procured by the government. Assuming that the contraceptive prevalence rate and commodity costs increase at 0.5% annually, the government needs to allocate almost IDR 5 billion by 2019, excluding fees for service. The government has shifted its focus to maintaining eligible health facilities under the National Population and Family Planning Board networks. By 2019, the government has set strategies to provide family planning services to 45,340 health facilities distributed across 514 districts and 7 thousand subdistricts. A clear division of authority has been established among levels of government.
Three models of contraceptive supply planning have been developed and are currently being institutionalized. Pre-service training for family planning services has been piloted in 10 prominent universities. The position of private midwives has been recognized as part of the system. To ensure quality implementation and health expenditure control, a family planning standard has been established as a reference to determine the set of services required to be delivered properly to clients and the types of health facilities suited to conduct particular family planning services. Recognition of individual program participation status has been acknowledged in the Family Enumeration since 2015; the data are precisely recorded by name and address for each family and its members. This supplies valuable information to 15,131 Family Planning Field Workers (FPFWs), who provide information and education related to family planning in an attempt to generate demand and maintain the participation of family planning acceptors who are program beneficiaries. Despite the overwhelming efforts described above, some obstacles remain: the program suffers from poor socialization and has yet to remove geographical barriers for those living in remote areas, where family planning services are provided outside the scheme as a complementary strategy. Nevertheless, the UHC program has brought remarkable improvement in access to and quality of family planning services.
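The budget projection logic above (prevalence and commodity costs compounding at 0.5% per year) is simple compound growth. The base-year figure and horizon below are hypothetical placeholders, since the abstract reports only the assumed growth rate and the resulting roughly IDR 5 billion allocation:

```python
def project_budget(base_cost, annual_growth, years):
    """Compound a base-year commodity budget forward at a fixed annual growth rate."""
    return base_cost * (1 + annual_growth) ** years

# Hypothetical: IDR 4.9 billion base allocation growing 0.5% per year for 4 years
base = 4.9e9
for year in range(5):
    print(year, round(project_budget(base, 0.005, year)))
```

At 0.5% annual growth the four-year increase is only about 2%, so the projection is dominated by the base-year estimate rather than the growth assumption.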

Keywords: beneficiary, family planning services, national population and family planning board, universal health coverage

Procedia PDF Downloads 156
66 Characterizing the Spatially Distributed Differences in the Operational Performance of Solar Power Plants Considering Input Volatility: Evidence from China

Authors: Bai-Chen Xie, Xian-Peng Chen

Abstract:

China has become the world's largest energy producer and consumer, and its development of renewable energy is of great significance to global energy governance and the fight against climate change. The rapid growth of solar power in China could help it achieve its ambitious carbon peak and carbon neutrality targets early. However, the non-technical costs of solar power in China are much higher than international levels, meaning that inefficiencies are rooted in poor management and improper policy design and that efficiency distortions have become a serious challenge to the sustainable development of the renewable energy industry. Unlike fossil energy generation technologies, the output of solar power is closely tied to the volatile solar resource, and the spatial unevenness of solar resource distribution leads to potential spatial differences in efficiency. It is necessary to develop an efficiency evaluation method that accounts for the volatility of solar resources and to explore how natural geography and the social environment shape the spatial distribution of efficiency, in order to uncover the root causes of managerial inefficiency. The study treats solar resources as stochastic inputs, introduces a chance-constrained data envelopment analysis model combined with the directional distance function, and measures the solar resource utilization efficiency of 222 solar power plants in representative photovoltaic bases in northwestern China. Using meta-frontier analysis, we measured the characteristics of different power plant clusters and compared differences among groups, discussed the mechanisms by which environmental factors influence inefficiency, and performed statistical tests using the system generalized method of moments.
Rational localization of power plants is a systematic project requiring careful consideration of full utilization of solar resources, low transmission costs, and guaranteed power consumption. Suitable temperature, precipitation, and wind speed can improve the working performance of photovoltaic modules; a reasonable terrain inclination can reduce land costs; and proximity to cities strongly guarantees the consumption of electricity. The density of electricity demand and of high-tech industries matters more than resource abundance because it triggers the clustering of power plants, producing beneficial demonstration and competition effects. To ensure renewable energy consumption, increased support for rural grids and encouragement of direct trading between generators and neighboring users offer solutions. The study provides proposals for improving the full life-cycle operational activities of solar power plants in China, reducing high non-technical costs and improving competitiveness against fossil energy sources.
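The study's chance-constrained DEA with a directional distance function is beyond a short sketch, but the core idea of DEA efficiency can be illustrated in the plain single-input, single-output CCR (constant returns to scale) case, where a plant's score reduces to its output/input ratio relative to the best plant on the frontier. The numbers below are hypothetical.

```python
def ccr_efficiency(inputs, outputs):
    """CCR DEA efficiency in the single-input, single-output special case:
    each plant's output/input ratio relative to the best observed ratio.
    A score of 1.0 means the plant lies on the efficiency frontier."""
    ratios = [y / x for x, y in zip(inputs, outputs)]
    best = max(ratios)
    return [r / best for r in ratios]

# Toy example: plant 0 defines the frontier, plant 1 is half as efficient.
scores = ccr_efficiency(inputs=[2.0, 4.0], outputs=[2.0, 2.0])
```

The full model in the study additionally treats the solar-resource input as a random variable (chance constraints) and allows directional inefficiency measurement; this ratio form is only the deterministic special case.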

Keywords: solar power plants, environmental factors, data envelopment analysis, efficiency evaluation

Procedia PDF Downloads 61
65 Online Faculty Professional Development: An Approach to the Design Process

Authors: Marie Bountrogianni, Leonora Zefi, Krystle Phirangee, Naza Djafarova

Abstract:

Faculty development is critical for any institution, as it impacts students’ learning experiences and faculty performance in course delivery. With that in mind, The Chang School at Ryerson University embarked on an initiative to develop a comprehensive, relevant faculty development program for online faculty and instructors. Teaching Adult Learners Online (TALO) is a professional development program designed to build capacity among online teaching faculty, to enhance communication and facilitation skills for online instruction, and to establish a Community of Practice that gives online faculty opportunities to network and exchange ideas and experiences. TALO comprises four online modules, each providing three hours of learning materials. The topics focus on the online teaching and learning experience, principles and practices, opportunities and challenges in online assessment, and course design and development. TALO offers a unique experience for online instructors, who are placed in the roles of both student and instructor through interactive activities involving discussions, hands-on assignments, and peer mentoring while experimenting with the technological tools available for their online teaching. Through these exchanges and informal peer mentoring, a small interdisciplinary community of practice has started to take shape. Successful participants have to meet four requirements for completion: i) participate actively in online discussions and activities, ii) develop a communication plan for the course they are teaching, iii) design one learning activity or media component, iv) design one online learning module. This study adopted a mixed-methods exploratory sequential design. For the qualitative phase of this study, a thorough literature review was conducted on what constitutes effective faculty development programs. Based on that review, the design team identified desired competencies for online teaching/facilitation and course design.
Once the competencies were identified, a focus group interview with The Chang School teaching community was conducted as a needs assessment and to validate the competencies. In the quantitative phase, questionnaires were distributed to instructors and faculty after the program was launched to continue ongoing evaluation and revision, in hopes of further improving the program to meet the teaching community’s needs. Four faculty members participated in a one-hour focus group interview. Major findings from the focus group interview revealed that, from the training program, faculty wanted i) to better engage students online, ii) to enhance their online teaching with specific strategies, and iii) to explore different ways to assess students online. Ninety-one faculty members completed the questionnaire, and findings indicated that: i) the majority of faculty stated they gained the necessary skills to demonstrate instructor presence through communication and use of the technological tools provided, ii) faculty confidence with course management strategies increased, and iii) learning from peers is most effective: the Community of Practice is strengthened and valued even more as program alumni become facilitators. Although this professional development program is not mandatory for online instructors, over 152 online instructors have successfully completed it since its launch in Fall 2014. A Community of Practice emerged as a result of the program, and participants continue to exchange thoughts and ideas about online teaching and learning.

Keywords: community of practice, customized, faculty development, inclusive design

Procedia PDF Downloads 148
64 An Engaged Approach to Developing Tools for Measuring Caregiver Knowledge and Caregiver Engagement in Juvenile Type 1 Diabetes

Authors: V. Howard, R. Maguire, S. Corrigan

Abstract:

Background: Type 1 Diabetes (T1D) is a chronic autoimmune disease, typically diagnosed in childhood. T1D puts an enormous strain on families; controlling blood glucose in children is difficult, and the consequences of poor control for patient health are significant. Successful illness management and better health outcomes can depend on the quality of caregiving. On diagnosis, parent-caregivers face a steep learning curve, as T1D care requires a significant level of knowledge to inform complex decision-making throughout the day. The majority of illness management is carried out in the home setting, independent of clinical health providers. Parent-caregivers vary in their level of knowledge and in their level of engagement in applying this knowledge in the practice of illness management. Enabling researchers to quantify these aspects of the caregiver experience is key to identifying targets for psychosocial support interventions, which are desirable for reducing stress and anxiety in this highly burdened cohort and supporting better health outcomes in children. Currently, there are limited tools available that are designed to capture this information. Where tools do exist, they are not comprehensive and do not adequately capture the lived experience. Objectives: To develop quantitative tools, informed by lived experience, that enable researchers to gather data on parent-caregiver knowledge and engagement, accurately represent the experience of the cohort, and support exploration of questions that are of real-world value to the cohort themselves. Methods: This research employed an engaged approach to address the problem of quantifying two key aspects of caregiver diabetes management: knowledge and engagement. The research process was multi-staged and iterative.
Stage 1: Working from a constructivist standpoint, the literature was reviewed to identify relevant questionnaires, scales, and single-item measures of T1D caregiver knowledge and engagement, and to harvest candidate questionnaire items. Stage 2: Aggregated findings from the review were circulated among a PPI (patient and public involvement) expert panel of caregivers (n=6) for discussion and feedback. Stage 3: In collaboration with the expert panel, data were interpreted through the lens of lived experience to create a long-list of candidate items for the novel questionnaires. Items were categorized as either ‘knowledge’ or ‘engagement’. Stage 4: A Delphi-method process (iterative surveys) was used to prioritize question items and generate novel questions that further captured the lived experience. Stage 5: Both questionnaires were piloted to refine the wording of the text, increase accessibility, and limit socially desirable responding. Stage 6: The tools were piloted in an online survey deployed through an online peer-support group for caregivers of juveniles with T1D. Ongoing Research: 123 parent-caregivers completed the survey. Data analysis is ongoing to establish face and content validity, both qualitatively and through exploratory factor analysis. Reliability will be established using an alternative-form method, and Cronbach’s alpha will assess internal consistency. Work will be completed by early 2024. Conclusion: These tools will enable researchers to gain deeper insights into caregiving practices among parents of juveniles with T1D. Development was driven by lived experience, illustrating the value of engaged research at all levels of the research process.
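The internal-consistency check planned above, Cronbach's alpha, can be computed directly from the item variances and the variance of respondents' total scores. A minimal sketch with hypothetical response data:

```python
def _var(xs):
    # sample variance (n - 1 denominator)
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

def cronbach_alpha(item_scores):
    """Cronbach's alpha: k/(k-1) * (1 - sum of item variances / total variance).
    item_scores: one list of item responses per respondent."""
    k = len(item_scores[0])                 # number of items
    items = list(zip(*item_scores))         # transpose: per-item columns
    totals = [sum(r) for r in item_scores]  # per-respondent total score
    return k / (k - 1) * (1 - sum(_var(col) for col in items) / _var(totals))

# Hypothetical data: four respondents, three perfectly consistent items.
alpha = cronbach_alpha([[1, 1, 1], [2, 2, 2], [3, 3, 3], [4, 4, 4]])
```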

Keywords: caregiving, engaged research, juvenile type 1 diabetes, quantified engagement and knowledge

Procedia PDF Downloads 29
63 A Corpus-Based Analysis of "MeToo" Discourse in South Korea: Coverage Representation in Korean Newspapers

Authors: Sun-Hee Lee, Amanda Kraley

Abstract:

The “MeToo” movement is a social movement against sexual abuse and harassment. Though the hashtag went viral in 2017 following different cultural flashpoints in different countries, the initial response in South Korea was quiet. This changed radically in January 2018, when a high-ranking senior prosecutor, Seo Ji-hyun, gave a televised interview discussing being sexually assaulted by a colleague. Acknowledging public anger, particularly among women, at the long-existing problems of sexual harassment and abuse, the South Korean media have focused on several high-profile cases. Analyzing the media representation of these cases is a window into the evolving South Korean discourse around “MeToo.” This study presents a linguistic analysis of “MeToo” discourse in South Korea utilizing a corpus-based approach. The term corpus (pl. corpora) refers to electronic language data, that is, any collection of recorded instances of spoken or written language. A “MeToo” corpus was collected for this analysis by extracting newspaper articles containing the keyword “MeToo” from BIGKinds, a big-data analysis service, and Nexis Uni, an online academic database search engine. The corpus analysis explores how Korean media represent accusers and the accused, victims and perpetrators. The extracted data include 5,885 articles from four broadsheet newspapers (Chosun, JoongAng, Hangyore, and Kyunghyang) and 88 articles from two Korea-based English newspapers (Korea Times and Korea Herald) between January 2017 and November 2020. The analysis includes basic keyword frequency and network analysis, supplemented by refined examination of selected corpus samples through naming strategies, semantic relations, and pragmatic properties.
Along with the exponential increase in the number of articles containing the keyword “MeToo,” from 104 articles in 2017 to 3,546 articles in 2018, the network and keyword analysis highlights ‘US,’ ‘Harvey Weinstein,’ and ‘Hollywood’ as keywords for 2017, with articles in 2018 highlighting ‘Seo Ji-Hyun,’ ‘politics,’ ‘President Moon,’ ‘An Ui-Jeong,’ ‘Lee Yoon-taek’ (the names of perpetrators), and ‘(Korean) society.’ This outcome demonstrates the shift of media focus from international affairs to domestic cases. Another crucial finding is that the word ‘defamation’ is widely distributed in the “MeToo” corpus. This relates to the South Korean legal system, in which a person who defames another by publicly alleging information detrimental to their reputation—factual or fabricated—is punishable by law (Article 307 of the Criminal Act of Korea). If the defamation occurs on the internet, it is subject to aggravated punishment under the Act on Promotion of Information and Communications Network Utilization and Information Protection. These laws, in particular, have been used against accusers who have publicly come forward in the wake of “MeToo” in South Korea, adding an extra dimension of risk. This corpus analysis of “MeToo” newspaper articles contributes to the analysis of the media representation of the “MeToo” movement and sheds light on the shifting landscape of gender relations in the public sphere in South Korea.
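Per-year keyword frequency counts like those reported above can be sketched with a simple counter over (year, text) pairs; the toy corpus below is illustrative, not data from the study.

```python
from collections import Counter

def keyword_frequency(articles, keyword):
    """Count per-year occurrences of a keyword across a corpus of
    (year, text) pairs, case-insensitively."""
    counts = Counter()
    for year, text in articles:
        counts[year] += text.lower().count(keyword.lower())
    return counts

# Toy corpus standing in for the extracted newspaper articles.
corpus = [(2017, "MeToo goes viral"), (2018, "MeToo MeToo in Korea")]
freq = keyword_frequency(corpus, "metoo")
```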

Keywords: corpus linguistics, MeToo, newspapers, South Korea

Procedia PDF Downloads 191
62 Exploring Safety Culture in Interventional Radiology: A Cross-Sectional Survey on Team Members' Attitudes

Authors: Anna Bjällmark, Victoria Persson, Bodil Karlsson, May Bazzi

Abstract:

Introduction: Interventional radiology (IR) is a continuously growing discipline that allows minimally invasive treatment of various medical conditions. The IR environment is, in several ways, comparable to the complex and accident-prone operating room (OR) environment. This implies that the IR environment may also be associated with various types of risks related to the team's work processes and communication. Patient safety is a central aspect of healthcare and involves the prevention and reduction of adverse events related to patient care. To maintain patient safety, it is crucial to build a safety culture where staff are encouraged to report events and incidents that may have affected patient safety. It is also important to continuously evaluate staff attitudes to patient safety. Despite the increasing number of IR procedures, research on staff views regarding patient safety is lacking. Therefore, the main aim of the study was to describe and compare the IR team members' attitudes to patient safety. The secondary aim was to evaluate whether the WHO safety checklist was routinely used for IR procedures. Methods: An electronic survey was distributed to 25 interventional units in Sweden. The target population was the staff working in the IR team, i.e., physicians, radiographers, nurses, and assistant nurses. A modified version of the Safety Attitudes Questionnaire (SAQ) was used. Responses were received from 19 of the 25 IR units (44 radiographers, 18 physicians, 5 assistant nurses, and 1 nurse). The respondents rated their level of agreement with 27 items related to safety culture on a five-point Likert scale ranging from “Disagree strongly” to “Agree strongly.” Data were analyzed statistically using SPSS. The percentage of positive responses (PPR) was calculated as the percentage of respondents with a scale score of 75 or higher, corresponding to the response options “Agree slightly” or “Agree strongly”.
Thus, average scores ≥ 75% were classified as “positive” and average scores < 75% as “non-positive”. Findings: The results indicated that the IR team had the highest factor scores and percentages of positive responses for job satisfaction (90/94%), followed by teamwork climate (85/92%). In contrast, stress recognition received the lowest ratings (54/25%). Attitudes related to these factors were relatively consistent across professions, with only a few significant differences noted (factor scores: p=0.039 for job satisfaction and p=0.050 for working conditions; percentage of positive responses: p=0.027 for perception of management). Radiographers tended to report slightly lower values than other professions for these factors (p<0.05). The respondents reported that the WHO safety checklist was not routinely used at their IR unit but acknowledged its importance for patient safety. Conclusion: This study reported high scores for job satisfaction and teamwork climate but lower scores for perception of management and stress recognition, indicating that the latter are areas for improvement. Attitudes remained relatively consistent among the professions, though radiographers reported slightly lower values for job satisfaction and perception of management. The WHO safety checklist was considered important for patient safety.
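The PPR computation described above (the share of respondents scoring 75 or higher on the 0-100 scale) reduces to a one-line calculation; the example scores below are hypothetical.

```python
def percent_positive(scale_scores, threshold=75):
    """Percentage of respondents whose 0-100 scale score is at or above
    the threshold (75 corresponds to 'Agree slightly'/'Agree strongly')."""
    positive = sum(1 for s in scale_scores if s >= threshold)
    return 100.0 * positive / len(scale_scores)

# A factor is then classified 'positive' when its PPR is >= 75%.
job_satisfaction_ppr = percent_positive([80, 90, 75, 60])
```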

Keywords: interventional radiology, patient safety, safety attitudes questionnaire, WHO safety checklist

Procedia PDF Downloads 40
61 Integrating Radar Sensors with an Autonomous Vehicle Simulator for an Enhanced Smart Parking Management System

Authors: Mohamed Gazzeh, Bradley Null, Fethi Tlili, Hichem Besbes

Abstract:

The burgeoning global ownership of personal vehicles has placed significant strain on urban infrastructure, notably parking facilities, leading to traffic congestion and environmental concerns. Effective parking management systems (PMS) are indispensable for optimizing urban traffic flow and reducing emissions. The most commonly deployed systems nowadays rely on computer vision technology. This paper explores the integration of radar sensors and simulation in the context of smart parking management. We concentrate on radar sensors due to their versatility and utility in automotive applications, which extend to PMS. Additionally, radar sensors play a crucial role in driver assistance systems and autonomous vehicle development. However, the resource-intensive nature of radar data collection for algorithm development and testing necessitates innovative solutions. Simulation, particularly the monoDrive simulator, an internal development tool used by NI, the Test and Measurement division of Emerson, offers a practical means to overcome this challenge. The primary objectives of this study encompass simulating radar sensors to generate a substantial dataset for algorithm development and testing and, critically, assessing the transferability of models between simulated and real radar data. We focus on occupancy detection in parking as a practical use case, categorizing each parking space as vacant or occupied. The simulation approach using monoDrive enables algorithm validation and reliability assessment for virtual radar sensors. We meticulously designed various parking scenarios, involving manual measurement of parking spot coordinates and orientations and the use of a TI AWR1843 radar. To create a diverse dataset, we generated 4,950 scenarios, comprising a total of 455,400 parking spots.
This extensive dataset encompasses radar configuration details, ground truth occupancy information, radar detections, and associated object attributes such as range, azimuth, elevation, radar cross-section, and velocity data. The paper also addresses the intricacies and challenges of real-world radar data collection, highlighting the advantages of simulation in producing radar data for parking lot applications. We developed classification models based on Support Vector Machines (SVM) and Density-Based Spatial Clustering of Applications with Noise (DBSCAN), exclusively trained and evaluated on simulated data. Subsequently, we applied these models to real-world data, comparing their performance against the monoDrive dataset. The study demonstrates the feasibility of transferring models from a simulated environment to real-world applications, achieving an impressive accuracy score of 92% using only one radar sensor. This finding underscores the potential of radar sensors and simulation in the development of smart parking management systems, offering significant benefits for improving urban mobility and reducing environmental impact. The integration of radar sensors and simulation represents a promising avenue for enhancing smart parking management systems, addressing the challenges posed by the exponential growth in personal vehicle ownership. This research contributes valuable insights into the practicality of using simulated radar data in real-world applications and underscores the role of radar technology in advancing urban sustainability.
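The study trains SVM and DBSCAN models on simulated data and evaluates them on real data. As a dependency-free stand-in, the sketch below uses a simple nearest-centroid classifier to illustrate the same train-on-simulated, evaluate-on-real workflow; the feature choices and all numbers are invented for illustration.

```python
def centroid(points):
    dims = len(points[0])
    return [sum(p[d] for p in points) / len(points) for d in range(dims)]

def fit(features, labels):
    """Train a nearest-centroid occupancy classifier (a stand-in for the
    SVM used in the study) on simulated radar features."""
    classes = sorted(set(labels))
    return {c: centroid([f for f, l in zip(features, labels) if l == c])
            for c in classes}

def predict(model, feature):
    def dist2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(model, key=lambda c: dist2(model[c], feature))

# Hypothetical simulated features: (detection count, mean radar cross-section).
sim_X = [(0, 0.1), (1, 0.2), (8, 5.0), (9, 6.0)]
sim_y = ["vacant", "vacant", "occupied", "occupied"]
model = fit(sim_X, sim_y)

# Hypothetical "real" measurements, scored against the simulation-trained model.
real_X = [(0, 0.3), (7, 4.5)]
real_y = ["vacant", "occupied"]
accuracy = sum(predict(model, x) == y for x, y in zip(real_X, real_y)) / len(real_y)
```

The transferability question in the paper is exactly this: how much accuracy survives when the model crosses from the simulated feature distribution to the real one.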

Keywords: autonomous vehicle simulator, FMCW radar sensors, occupancy detection, smart parking management, transferability of models

Procedia PDF Downloads 55
60 A Corpus-Based Study on the Lexical, Syntactic and Sequential Features across Interpreting Types

Authors: Qianxi Lv, Junying Liang

Abstract:

Among the various modes of interpreting, simultaneous interpreting (SI) is regarded as a ‘complex’ and ‘extreme condition’ among cognitive tasks, while consecutive interpreting (CI) does not require sharing processing capacity between concurrent tasks. Given that SI exerts great cognitive demand, it makes sense to posit that the output of SI may be more compromised than that of CI in its linguistic features. The bulk of the research has stressed the varying cognitive demands and processes involved in different modes of interpreting; however, related empirical research is sparse. In keeping with our interest in investigating the quantitative linguistic factors discriminating between SI and CI, the current study examines potential lexical simplification, syntactic complexity, and sequential organization mechanisms with a self-built inter-modal corpus of transcribed simultaneous and consecutive interpretation, translated speech, and original speech texts, totaling 321,960 running words. The lexical features extracted are lexical density, list head coverage, hapax legomena, and type-token ratio, as well as core vocabulary percentage. Dependency distance, an index of syntactic complexity reflective of processing demand, is employed. Frequency motifs, non-grammatically-bound sequential units, are used to visualize the local function distribution of the interpreting output. While SI is generally regarded as multitasking with high cognitive load, our findings show that CI may tax cognitive resources differently, and perhaps more heavily, and hence yields more lexically and syntactically simplified output. In addition, the sequential features show that SI and CI organize sequences from the source text into the output in different ways, each minimizing cognitive load in its own manner. We interpret the results within a framework in which cognitive demand is exerted on both the maintenance and coordination components of Working Memory.
On the one hand, the information maintained in CI is inherently larger in volume than in SI. On the other hand, time constraints directly influence the sentence reformulation process. The temporal pressure from the input in SI allows interpreters to keep only a small chunk of information in the focus of attention. Thus, SI interpreters usually produce the output by largely retaining the source structure, so as to release the information from working memory immediately after it is formulated in the target language. Conversely, CI interpreters receive at least a few sentences before reformulation and are more self-paced. CI interpreters may thus tend to retain and generate the information in ways that lessen the demand. In other words, interpreters cope with the high demand in the reformulation phase of CI by generating output with densely distributed function words, more content words of higher frequency and less variation, simpler structures, and more frequently used language sequences. We consequently propose a revised effort model based on these results for a better illustration of cognitive demand during both interpreting types.
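Two of the lexical measures named above, type-token ratio and lexical density, are straightforward to compute from a tokenized text. A minimal sketch; the tiny token list and function-word set are assumptions for illustration, not the study's data.

```python
def type_token_ratio(tokens):
    """Distinct word forms (types) over total tokens; lower values
    indicate more lexical repetition, i.e. simplification."""
    return len(set(tokens)) / len(tokens)

def lexical_density(tokens, function_words):
    """Share of content words: tokens that are not function words."""
    content = [t for t in tokens if t not in function_words]
    return len(content) / len(tokens)

tokens = "the interpreter kept the structure of the source".split()
ttr = type_token_ratio(tokens)                    # 6 types over 8 tokens
density = lexical_density(tokens, {"the", "of"})  # 4 content words over 8
```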

Keywords: cognitive demand, corpus-based, dependency distance, frequency motif, interpreting types, lexical simplification, sequential units distribution, syntactic complexity

Procedia PDF Downloads 146
59 Concept Mapping to Reach Consensus on an Antibiotic Smart Use Strategy Model to Promote and Support Appropriate Antibiotic Prescribing in a Hospital, Thailand

Authors: Phenphak Horadee, Rodchares Hanrinth, Saithip Suttiruksa

Abstract:

Inappropriate use of antibiotics occurs in several hospitals in Thailand. Drug use evaluation (DUE) is one strategy to overcome this difficulty. However, most community hospitals still carry out incomplete evaluations, resulting in overuse of antibiotics at high cost. Consequently, drug-resistant bacteria have been on the rise due to inappropriate antibiotic use. The aim of this study was to involve stakeholders in conceptualizing, developing, and prioritizing a feasible intervention strategy to promote and support appropriate antibiotic prescribing in a community hospital in Thailand. Four antibiotics were studied: Meropenem, Piperacillin/tazobactam, Amoxicillin/clavulanic acid, and Vancomycin. The study was conducted over the 1-year period between March 1, 2018, and March 31, 2019, in a community hospital in the northeastern part of Thailand. Concept mapping was used with a purposive sample including doctors (one of whom was an administrator), pharmacists, and nurses involved in the drug use evaluation of antibiotics. In-depth interviews with each participant and survey research were conducted to identify the causes of inappropriate antibiotic use within the drug use evaluation system. Seventy-seven percent of DUE reports indicated appropriate antibiotic prescribing, which still fell short of the 80 percent target. Meropenem led the other antibiotics in inappropriate prescribing. The causes of the unsuccessful DUE program were classified into three themes: personnel; lack of public relations and communication; and unsupportive policy and impractical regulations. During the first meeting, stakeholders (n = 21) generated interventions.
During the second meeting, participants, who were almost the same group of people as in the first meeting (n = 21), were asked to independently rate the feasibility and importance of each idea and to categorize the ideas into relevant clusters to facilitate multidimensional scaling and hierarchical cluster analysis. The outputs of the analysis included the idea list, cluster list, point map, point rating map, cluster map, and cluster rating map. All of these were distributed to participants (n = 21) during the third meeting to reach consensus on an intervention model. The final proposed intervention strategy included 29 feasible and crucial interventions in seven clusters: development of an information technology system; establishing policy and translating it into an action plan; proactive public relations for the policy, action plan, and workflow; cooperation of multidisciplinary teams in drug use evaluation; work review and evaluation with performance reporting; promoting and developing professional and clinical skills for staff through training programs; and developing a practical drug use evaluation guideline for antibiotics. These interventions are relevant and fit several intervention strategies for antibiotic stewardship programs in many international organizations, such as participation of the multidisciplinary team, developing information technology to support antibiotic smart use, and communication. The interventions were prioritized for implementation over a 1-year period. Once the feasibility of each activity or plan is established, the proposed program could be applied and integrated into hospital policy after evaluation. Effective interventions could then be promoted to other community hospitals to support antibiotic smart use.
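After multidimensional scaling and hierarchical clustering, concept mapping aggregates participant ratings into the cluster rating map mentioned above. A minimal sketch of that aggregation step, with hypothetical ideas, clusters, and rating values:

```python
def cluster_rating_map(ratings, cluster_of):
    """Average feasibility/importance ratings per cluster.
    ratings: {idea: (feasibility, importance)}, already averaged over raters;
    cluster_of: {idea: cluster name} from the hierarchical clustering."""
    sums, counts = {}, {}
    for idea, (feas, imp) in ratings.items():
        c = cluster_of[idea]
        f, i = sums.get(c, (0.0, 0.0))
        sums[c] = (f + feas, i + imp)
        counts[c] = counts.get(c, 0) + 1
    return {c: (f / counts[c], i / counts[c]) for c, (f, i) in sums.items()}

# Hypothetical idea ratings on a 1-5 scale and their cluster assignments.
ratings = {"IT system": (4.0, 5.0), "training": (3.0, 4.0), "policy": (5.0, 5.0)}
clusters = {"IT system": "technology", "training": "staff", "policy": "policy"}
rating_map = cluster_rating_map(ratings, clusters)
```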

Keywords: antibiotic, concept mapping, drug use evaluation, multidisciplinary teams

Procedia PDF Downloads 97
58 Protocol for Dynamic Load Distributed Low Latency Web-Based Augmented Reality and Virtual Reality

Authors: Rohit T. P., Sahil Athrij, Sasi Gopalan

Abstract:

Currently, the content entertainment industry is dominated by mobile devices. As trends slowly shift towards Augmented/Virtual Reality applications, the computational demands on these devices are increasing exponentially, and we are already reaching the limits of hardware optimization. This paper proposes a software solution to this problem. By leveraging the capabilities of cloud computing, we can offload work from mobile devices to dedicated rendering servers that are far more powerful. But this introduces the problem of latency. This paper introduces a protocol that can achieve a high-performance, low-latency Augmented/Virtual Reality experience. There are two parts to the protocol. 1) In-flight compression: The main cause of latency in the system is the time required to transmit the camera frame from client to server. The round-trip time is directly proportional to the amount of data transmitted and can therefore be reduced by compressing frames before sending. Standard compression algorithms like JPEG yield only minor size reductions. Since the images to be compressed are consecutive camera frames, there will not be many changes between two consecutive images, so inter-frame compression is preferred. Inter-frame compression can be implemented efficiently using WebGL, but WebGL implementations limit floating-point precision to 16 bits on most devices. This can introduce noise into the image due to rounding errors, which accumulate over time. This can be solved with an improved inter-frame compression algorithm: it detects changes between frames and reuses unchanged pixels from the previous frame, eliminating the need for floating-point subtraction and thereby cutting down on noise. Change detection is also improved drastically by taking the weighted average difference of pixels instead of the absolute difference.
The kernel weights for this comparison can be fine-tuned to match the type of image being compressed. 2) Dynamic load distribution: Conventional cloud computing architectures offload as much work as possible to the servers, but this approach can strain bandwidth and increase server costs. The optimal solution is obtained when the device utilizes 100% of its resources and the rest is done by the server. The protocol balances the load between server and client by performing a fraction of the computation on the device, depending on the device's power and network conditions, and is responsible for dynamically partitioning the tasks. Special flags communicate the workload fraction between client and server and are updated at a constant interval of time (or frames). The whole protocol is designed to be client-agnostic. Flags are available to the client for resetting the frame, indicating latency, switching modes, etc. The server can react to client-side changes on the fly and adapt accordingly by switching to different pipelines. The server is designed to spread the load effectively and thereby scale horizontally, which is achieved by isolating client connections into different processes.
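The change-detection scheme in part 1 (weighted-average neighbourhood difference with reuse of unchanged pixels) can be sketched for 1-D grayscale frames. The 3-tap kernel weights and threshold below are assumptions, and the actual protocol operates on 2-D camera frames in WebGL; this sketch only shows the encode/decode logic.

```python
def delta_encode(prev, curr, kernel=(0.25, 0.5, 0.25), threshold=1.0):
    """Inter-frame compression: keep only pixels whose weighted-average
    neighbourhood difference from the previous frame exceeds a threshold.
    Unchanged pixels are reused from the previous frame on decode."""
    changed = []
    n = len(curr)
    for i in range(n):
        # weighted average absolute difference over a 3-pixel window
        diff = 0.0
        for k, w in enumerate(kernel):
            j = min(max(i + k - 1, 0), n - 1)  # clamp at frame edges
            diff += w * abs(curr[j] - prev[j])
        if diff > threshold:
            changed.append((i, curr[i]))
    return changed

def delta_decode(prev, changed):
    """Rebuild the current frame from the previous frame plus the deltas."""
    frame = list(prev)
    for i, value in changed:
        frame[i] = value
    return frame
```

Because only (index, value) pairs for changed pixels cross the network, a mostly static scene transmits almost nothing, which is the bandwidth saving the protocol relies on.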

Keywords: 2D kernelling, augmented reality, cloud computing, dynamic load distribution, immersive experience, mobile computing, motion tracking, protocols, real-time systems, web-based augmented reality application

Procedia PDF Downloads 52
57 Analyzing Perceptions of Leadership Capacities After a Year-Long Leadership Development Training: An Exploratory Study of School Leaders in South Africa

Authors: Norma Kok, Diemo Masuko, Thandokazi Dlongwana, Komala Pillay

Abstract:

CONTEXT: While many school principals have been outstanding teachers and have inherent leadership potential, many have not had access to the quality of leadership development or support that empowers them to produce high-quality education outcomes in extremely challenging circumstances. Further, school leaders in under-served communities face formidable challenges arising from insufficient infrastructure, overcrowded classrooms, socio-economic challenges within the community, and insufficient parental involvement, all of which strain principals’ ability to lead their schools effectively. In addition, few school leaders have access to supportive networks, and many do not know how to build and leverage social capital to create opportunities for their schools and learners. Moreover, we know that fostering parental involvement in children’s learning improves a child’s morale, attitude, and academic achievement across all subject areas, and promotes better behaviour and social adjustment. Citizen Leader Lab facilitates the Partners for Possibility (PfP) programme to provide leadership development and support to school leaders serving under-resourced communities in South Africa to create effective environments of learning. This is done by creating partnerships between school leaders and private-sector business leaders over a 12-month period. OBJECTIVES: To explore school leaders’ perceptions of their leadership capacities and changes at their schools after being exposed to a year-long leadership development training programme. METHODS: The PfP programme is based on the 70:20:10 model, whereby 10% of learning comes from workshops, 20% takes place through peer learning, and 70% occurs through experiential learning as partnerships work together to identify and tackle challenges in targeted schools. Participants completed a post-programme questionnaire consisting of structured and unstructured questions, and semi-structured interviews were conducted with them and their business leaders. The interviews were audio-recorded and transcribed, and thematic content analysis was undertaken. The analysis was inductive, and emerging themes were identified. A code list was generated after coding was undertaken using computer software (Dedoose). Quantitative data gathered from surveys were aggregated and analysed. RESULTS: School leaders found the programme interesting and rewarding. They gained new leadership capacities such as resilience, improved confidence, and communication and conflict resolution skills, catalysing improved cultures of collaborative decision-making and environments for enhanced teaching and learning. New networks resulted in tangible outcomes such as upgrades to school infrastructure, water and sanitation, and vegetable gardens providing nutrition for learners, and/or intangible outcomes such as skills for members of school management teams (SMTs). Collaborative leadership led to SMTs being more aligned, efficient, and cohesive, and to teachers being more engaged and motivated. Notable positive changes at the schools inspired parents and community members to become more actively involved in the school and in their children’s education.
CONCLUSION: The PfP programme strengthens leadership capacities and school culture, which in turn improves teaching and learning and brings new resources to schools.

Keywords: collaborative decision-making, collaborative leadership, community involvement, confidence

Procedia PDF Downloads 66
56 A Simulation Study of Direct Injection Compressed Natural Gas Spark Ignition Engine Performance Utilizing Turbulent Jet Ignition with Controlled Air Charge

Authors: Siyamak Ziyaei, Siti Khalijah Mazlan, Petros Lappas

Abstract:

Compressed Natural Gas (CNG) consists mainly of methane (CH₄) and has a low carbon-to-hydrogen ratio relative to other hydrocarbons. As a result, it has the potential to reduce CO₂ emissions by more than 20% relative to conventional fuels such as diesel or gasoline. Although Natural Gas (NG) has environmental advantages over other hydrocarbon fuels, whether gaseous or liquid, its main component, CH₄, burns at a slower rate than conventional fuels. A higher pressure and a leaner cylinder environment accentuate this slow-burn characteristic of CH₄. Lean combustion and high compression ratios are well-known methods for increasing the efficiency of internal combustion engines. To achieve successful CNG lean combustion in Spark Ignition (SI) engines, a strong ignition system is essential to avoid engine misfires, especially in ultra-lean conditions. Turbulent Jet Ignition (TJI) is an ignition system that employs a pre-combustion chamber to ignite the lean fuel mixture in the main combustion chamber using a fraction of the total fuel per cycle. TJI enables ultra-lean combustion by providing distributed ignition sites through orifices. The fast burn rate provided by TJI makes the ordinary SI engine comparable, in terms of thermal efficiency, to combustion systems such as Homogeneous Charge Compression Ignition (HCCI) or Controlled Auto-Ignition (CAI), through increased levels of dilution without the need for sophisticated control systems. Owing to the physical geometry of TJIs, which contain small orifices connecting the pre-chamber to the main chamber, scavenging is one of the main factors that reduce TJI performance. Specifically, providing the right mixture of fuel and air has been identified as a key challenge, because an insufficient amount of air is pushed into the pre-chamber during each compression stroke.
A further problem is that residual combustion gases such as CO₂, CO and NOx from the previous cycle dilute the pre-chamber fuel-air mixture, preventing rapid combustion in the pre-chamber. An air-controlled active TJI is presented in this paper to address these issues. By supplying air to the pre-chamber at sufficient pressure, residual gases are exhausted and the air-fuel ratio within the pre-chamber is controlled, thereby improving combustion quality. This paper investigates the 3D-simulated combustion characteristics of a Direct Injected CNG (DI-CNG) fuelled SI engine with a pre-chamber equipped with an air channel, using AVL FIRE software. Experiments and simulations were performed at the Worldwide Mapping Point (WWMP) of 1500 Revolutions Per Minute (RPM) and 3.3 bar Indicated Mean Effective Pressure (IMEP), using only conventional spark plugs as the baseline. After validating the simulation data, baseline engine conditions were set for all simulation scenarios at λ = 1. Following that, pre-chambers with and without an auxiliary fuel supply were simulated. In the simulated DI-CNG SI engine, active TJI was observed to perform better than passive TJI and the spark plug. In conclusion, the active pre-chamber with an air channel demonstrated an improved thermal efficiency (ηth) over its counterparts and conventional spark ignition systems.
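The relative air-fuel ratio λ used as the baseline above (λ = 1, stoichiometric) can be made concrete with a back-of-envelope calculation for methane. This sketch is not from the paper; the constants and function name are standard textbook values chosen for illustration.

```python
# Stoichiometry of methane combustion: CH4 + 2 O2 -> CO2 + 2 H2O.
M_CH4 = 16.04                  # molar mass of CH4, g/mol
M_O2 = 32.00                   # molar mass of O2, g/mol
O2_MASS_FRACTION_AIR = 0.233   # mass fraction of O2 in dry air

# Stoichiometric air-fuel ratio on a mass basis (~17.2 for methane).
AFR_STOICH = (2 * M_O2 / O2_MASS_FRACTION_AIR) / M_CH4

def lambda_value(air_mass, fuel_mass):
    """Relative air-fuel ratio: lambda = actual AFR / stoichiometric AFR.
    lambda = 1 is stoichiometric (the paper's baseline); lambda > 1 is
    lean, the regime where TJI helps avoid misfire."""
    return (air_mass / fuel_mass) / AFR_STOICH
```

Ultra-lean operation means supplying substantially more than ~17.2 kg of air per kg of CH₄, which is why a strong ignition source such as TJI is needed.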

Keywords: turbulent jet ignition, active air control turbulent jet ignition, pre-chamber ignition system, active and passive pre-chamber, thermal efficiency, methane combustion, internal combustion engine combustion emissions

Procedia PDF Downloads 66
55 Utilization of Functionalized Biochar from Water Hyacinth (Eichhornia crassipes) as Green Nano-Fertilizers

Authors: Adewale Tolulope Irewale, Elias Emeka Elemike, Christian O. Dimkpa, Emeka Emmanuel Oguzie

Abstract:

As the global population steadily approaches the 10 billion mark, the world faces two major challenges among others: accessing sustainable and clean energy, and ensuring food security. Accessing cleaner and sustainable energy sources to drive the global economy and technological advancement, and feeding the growing human population, require sustainable, innovative, and smart solutions. To solve the food production problem, producers have relied on fertilizers to improve crop productivity. Commercial inorganic fertilizers, which are employed to boost agricultural food production, however, pose significant ecological and economic problems, including soil and water pollution, reduced input efficiency, development of highly resistant weeds, micronutrient deficiency, soil degradation, and increased soil toxicity. These ecological and sustainability concerns have raised uncertainties about the continued effectiveness of conventional fertilizers. With the application of nanotechnology, plant biomass upcycling offers several advantages for greener energy production and sustainable agriculture: reduced environmental pollution, increased soil microbial activity, and carbon recycling that lowers greenhouse gas emissions. This innovative technology has the potential to support a circular economy and create sustainable agricultural practice. Nanomaterials can greatly enhance the quality and nutrient composition of organic biomass, which in turn allows the conversion of biomass into nanofertilizers that are potentially more efficient. Water hyacinth plants harvested from an inland waterbody in Warri, Delta State, Nigeria, were air-dried and milled into powder form. The dry biomass was used to prepare biochar at a pre-determined temperature in an oxygen-deficient atmosphere.
Physicochemical analysis of the resulting biochar was carried out to determine its porosity and general morphology using Scanning Transmission Electron Microscopy (STEM). The functional groups (-COOH, -OH, -NH2, -CN, -C=O) were assessed using Fourier Transform Infrared Spectroscopy (FTIR), while the heavy metals (Cr, Cu, Fe, Pb, Mg, Mn) were analyzed using Inductively Coupled Plasma – Optical Emission Spectrometry (ICP-OES). Impregnation of the biochar with nanonutrients was achieved under varied conditions of pH, temperature, nanonutrient concentration and residence time to achieve optimum adsorption. Adsorption and desorption studies were carried out on the resulting nanofertilizer to determine the kinetics of potential nutrient bio-availability to plants when used as a green fertilizer. Water hyacinth (Eichhornia crassipes), an aggressively invasive aquatic plant known for its rapid growth and profusion, is examined in this research to harness its biomass as a sustainable feedstock for formulating functionalized nano-biochar fertilizers, offering benefits including water hyacinth biomass upcycling, improved nutrient delivery to crops and aquatic ecosystem remediation. Altogether, this work aims to create output values in the three dimensions of environmental, economic, and social benefits.
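The adsorption kinetics mentioned above are commonly described with the pseudo-first-order model. The sketch below is illustrative only; the model choice, parameter values, and function name are assumptions, not results from this study.

```python
import math

def pseudo_first_order(t, q_e, k1):
    """Pseudo-first-order adsorption kinetics:
        q(t) = q_e * (1 - exp(-k1 * t))
    where q_e is the equilibrium uptake (mg nutrient per g biochar) and
    k1 is the rate constant (1/min). Returns uptake after time t (min)."""
    return q_e * (1.0 - math.exp(-k1 * t))

# Illustrative parameters: q_e = 25 mg/g, k1 = 0.05 /min.
q60 = pseudo_first_order(60, 25.0, 0.05)  # uptake after one hour of contact
```

Fitting q_e and k1 to measured uptake-versus-time data is what allows the desorption/release behaviour of the nanofertilizer to be predicted.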

Keywords: biochar-based nanofertilizers, eichhornia crassipes, greener agriculture, sustainable ecosystem, water hyacinth

Procedia PDF Downloads 38