Search results for: early childhood settings
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 5135

515 Photovoltaic Modules Fault Diagnosis Using Low-Cost Integrated Sensors

Authors: Marjila Burhanzoi, Kenta Onohara, Tomoaki Ikegami

Abstract:

Faults in photovoltaic (PV) modules should be detected as early and as comprehensively as possible. Conventional fault detection methods such as electrical characterization, visual inspection, infrared (IR) imaging, ultraviolet fluorescence and electroluminescence (EL) imaging are used for this purpose, but they either fail to detect the location or category of the fault, or they require expensive equipment and are inconvenient for onsite application. Hence, these methods are not well suited to monitoring small-scale PV systems, and low-cost, efficient inspection techniques that can be applied onsite are indispensable. In this study, in order to establish such an inspection technique, the correlation between faults and the magnetic flux density on the surface of crystalline PV modules is investigated. Magnetic flux on the surface of normal and faulted PV modules is measured under short-circuit and illuminated conditions using two different sensor devices. The first device is built from small integrated sensors: a 9-axis motion tracking sensor with an embedded 3-axis electronic compass, an IR temperature sensor, an optical laser position sensor and a microcontroller. It measures the X, Y and Z components of the magnetic flux density (Bx, By and Bz) a few mm above the surface of a PV module and outputs the data as line graphs in a LabVIEW program. The second device consists of a laser optical sensor and two magnetic line-sensor modules, each comprising 16 magnetic sensors. It scans the magnetic field over the module surface and outputs the data as a 3D surface plot of magnetic flux intensity in a LabVIEW program. A PC running LabVIEW is used for data acquisition and analysis for both devices. To show the effectiveness of the method, the measured results are compared with those of a normal reference module and with EL images.
The experiments confirmed that the magnetic field in faulted areas has a distinct profile that can be clearly identified in the measured plots. The measurement results showed a perfect correlation with the EL images, and the position sensors identified the exact location of the faults. The method was applied to different modules and detected a variety of faults. It supports on-site measurement and real-time diagnosis, and because the device is built from simple sensors, it is low cost and convenient to be used by small-scale or residential PV system owners.
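As a rough illustration of the diagnostic idea, a scan of flux-density readings can be compared point by point against a known-good reference module and positions with large deviations flagged. The function, threshold and data below are hypothetical, not the authors' implementation:

```python
# Hypothetical sketch: flag faulted scan positions by comparing a module's
# measured magnetic flux profile against a reference (healthy) module.
# Values and threshold are illustrative only.

def flag_faults(measured, reference, threshold=0.15):
    """Return indices of scan positions whose flux density deviates
    from the reference by more than `threshold` (arbitrary units)."""
    return [i for i, (b, b_ref) in enumerate(zip(measured, reference))
            if abs(b - b_ref) > threshold]

# Bz readings a few mm above the surface, one value per scan position
reference = [1.00, 1.02, 0.98, 1.01, 0.99, 1.00]
measured  = [1.01, 1.00, 0.55, 0.97, 1.43, 1.01]  # positions 2 and 4 anomalous

print(flag_faults(measured, reference))  # -> [2, 4]
```

In the study the anomalous positions are then cross-checked against the EL image of the same module.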

Keywords: fault diagnosis, fault location, integrated sensors, PV modules

Procedia PDF Downloads 224
514 Informed Urban Design: Minimizing Urban Heat Island Intensity via Stochastic Optimization

Authors: Luis Guilherme Resende Santos, Ido Nevat, Leslie Norford

Abstract:

The Urban Heat Island (UHI) is characterized by increased air temperatures in urban areas compared to the undeveloped rural surroundings. With urbanization and densification, the intensity of the UHI increases, bringing negative impacts on livability, health and the economy. Reducing these effects requires taking design factors into consideration when planning future developments. Given design constraints such as population size and the area available for development, non-trivial decisions are required regarding the buildings' dimensions and their spatial distribution. We develop a framework for optimizing urban design that jointly minimizes UHI intensity and building energy consumption. First, the design constraints are defined according to spatial and population limits, establishing realistic boundaries applicable to real-life decisions. Second, the Urban Weather Generator (UWG) and EnergyPlus tools are used to compute UHI intensity and total building energy consumption, respectively. These outputs respond to a set of variable inputs describing urban morphology, such as building height, urban canyon width and population density. Lastly, an optimization problem is cast in which a utility function quantifies the performance of each design candidate (e.g., minimizing a linear combination of UHI intensity and energy consumption), subject to a set of constraints. Solving this optimization problem is difficult, since no simple analytic form represents the UWG and EnergyPlus models. We therefore cannot use direct optimization techniques and instead develop an indirect "black-box" optimization algorithm based on a stochastic optimization method known as the Cross-Entropy method (CEM).
The CEM translates the deterministic optimization problem into an associated stochastic optimization problem that is simple to solve analytically. We illustrate the model on a typical residential area in Singapore. Owing to rapid growth in population and built area, and to the land made available by reclamation, urban planning decisions are of the utmost importance for the country; furthermore, its hot and humid climate heightens concern about the impact of the UHI. The problem presented is highly relevant to early urban design stages, and the objective of the framework is to guide decision makers and help them include and evaluate urban microclimate and energy aspects in the urban planning process.
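The CEM loop can be sketched in a few lines. This is a minimal one-dimensional illustration with a cheap stand-in objective; in the paper's framework the objective evaluation would wrap UWG/EnergyPlus simulation runs, and the design variables would be multi-dimensional:

```python
import random

# Minimal Cross-Entropy method sketch for a black-box objective.
# The quadratic objective and all parameters are illustrative only.

def objective(x):
    return (x - 3.0) ** 2  # stand-in for an expensive simulator call

def cem_minimize(f, mu=0.0, sigma=5.0, n_samples=50, n_elite=5, iters=40):
    rng = random.Random(42)  # seeded for reproducibility
    for _ in range(iters):
        samples = [rng.gauss(mu, sigma) for _ in range(n_samples)]
        elite = sorted(samples, key=f)[:n_elite]        # best candidates
        mu = sum(elite) / n_elite                       # refit sampling dist.
        sigma = (sum((x - mu) ** 2 for x in elite) / n_elite) ** 0.5 + 1e-6
    return mu

best = cem_minimize(objective)
print(round(best, 2))  # converges close to the true minimizer x = 3
```

The key CEM idea is visible here: sample candidates from a parametric distribution, keep the elite fraction, and refit the distribution to the elite so that sampling concentrates around good designs.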

Keywords: building energy consumption, stochastic optimization, urban design, urban heat island, urban weather generator

Procedia PDF Downloads 131
513 Leptospira Lipl32-Specific Antibodies: Therapeutic Property, Epitopes Characterization and Molecular Mechanisms of Neutralization

Authors: Santi Maneewatchararangsri, Wanpen Chaicumpa, Patcharin Saengjaruk, Urai Chaisri

Abstract:

Leptospirosis is a globally neglected disease that continues to be a significant public health and veterinary burden, with millions of cases reported each year. Early and accurate differential diagnosis of leptospirosis from other febrile illnesses and the development of broad-spectrum leptospirosis vaccines are needed. The LipL32 outer membrane lipoprotein is a member of the Leptospira adhesive matrix proteins and has been found to exert hemolytic activity against erythrocytes in vitro. LipL32 is therefore regarded as a potential target for diagnosis, for broad-spectrum leptospirosis vaccines, and for passive immunotherapy. In this study, we established LipL32-specific mouse monoclonal antibodies, mAbLPF1 and mAbLPF2, and their respective mouse- and humanized-engineered single-chain variable fragments (ScFv). The antibodies' neutralizing activity against Leptospira-mediated hemolysis in vitro and the therapeutic efficacy of the mAbs in hamsters infected with heterologous Leptospira were demonstrated. The epitope peptide of mAbLPF1 was mapped to a non-contiguous carboxy-terminal β-turn and an amphipathic α-helix of the LipL32 structure, regions that contribute to phospholipid/host cell adhesion and membrane insertion. The mAbLPF2 epitope was located on the interacting loop of the peptide-binding groove of the LipL32 molecule, which is responsible for interactions with host constituents. The epitope sequences are highly conserved among Leptospira spp. and are absent from the LipL32 superfamily of other microorganisms. Both epitopes are surface-exposed, readily accessible to mAbs, and immunogenic, although they are less dominant when probed with LipL32-specific immunoglobulins from leptospirosis-patient sera or with rabbit hyperimmune serum raised against whole Leptospira. Our study also demonstrated mAb-mediated inhibition of LipL32 adhesion to host membrane components and cells, as well as the anti-hemolytic activity of the respective antibodies.
The therapeutic antibodies, particularly the humanized ScFv, have potential for further development as a non-drug therapeutic agent for human leptospirosis, especially in subjects allergic to antibiotics. The epitope peptides recognized by the two therapeutic mAbs can serve as tools for structure-function studies. Finally, the protective peptides may be used as targets for epitope-based vaccines for the control of leptospirosis.

Keywords: Leptospira LipL32-specific antibodies, therapeutic epitopes, epitope characterization, immunotherapy

Procedia PDF Downloads 297
512 A Novel Nano-Chip Card Assay as Rapid Test for Diagnosis of Lymphatic Filariasis Compared to Nano-Based Enzyme Linked Immunosorbent Assay

Authors: Ibrahim Aly, Manal Ahmed, Mahmoud M. El-Shall

Abstract:

Filariasis is a parasitic disease caused by small roundworms. The filarial worms are transmitted and spread by blood-feeding black flies and mosquitoes. Lymphatic filariasis (elephantiasis) is caused by Wuchereria bancrofti, Brugia malayi, and Brugia timori. Elimination of lymphatic filariasis creates an increasing demand for valid, reliable, and rapid diagnostic kits. Nanodiagnostics apply nanotechnology to clinical diagnosis to meet the demands for increased sensitivity, specificity, and earlier detection in less time. The aim of this study was to evaluate a nano-based enzyme-linked immunosorbent assay (ELISA) and a novel nano-chip card as rapid tests for the detection of filarial antigen in serum samples from human filariasis patients, in comparison with traditional ELISA. Serum samples were collected from infected humans across Egypt's governorates. After informed consent was obtained, a total of 45 blood samples were collected from infected individuals residing in different villages in Gharbea governorate, which is a nonendemic region for bancroftian filariasis, together with samples from healthy persons living in nonendemic locations (20 persons) and sera from 20 patients affected by other parasites. Microfilariae were checked in thick smears of 20 µl night blood samples collected between 20:00 and 22:00. All of these individuals underwent the following procedures: history taking, clinical examination, and laboratory investigations, which included examination of blood samples for microfilariae using thick blood films and serological tests for detection of the circulating filarial antigen using polyclonal antibody ELISA, nano-based ELISA, and the nano-chip card. In the present study, a recently reported polyclonal antibody specific to tegumental filarial antigen was used in developing the nano-chip card and nano-ELISA, compared with traditional ELISA, for the detection of circulating filarial antigen in the sera of patients with bancroftian filariasis.
The performance of the assays was evaluated using 45 serum samples. The traditional ELISA was positive with sera from microfilaremic bancroftian filariasis patients (n = 36), a sensitivity of 80%. Circulating filarial antigen was detected in 39/45 patients using nano-ELISA, a sensitivity of 86.6%. The nano-chip card was positive in 42 of 45 patients, a sensitivity of 93.3%. In conclusion, the novel nano-chip assay could be a promising alternative antigen detection test for bancroftian filariasis.
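The reported sensitivities follow directly from the positive counts over the 45 confirmed cases; a quick check:

```python
# Sensitivity = true positives / total confirmed cases, as a percentage,
# reproducing the figures in the abstract.

def sensitivity(true_pos, total_cases):
    return round(100 * true_pos / total_cases, 1)

print(sensitivity(36, 45))  # traditional ELISA -> 80.0
print(sensitivity(39, 45))  # nano-ELISA       -> 86.7 (the abstract truncates to 86.6)
print(sensitivity(42, 45))  # nano-chip card   -> 93.3
```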

Keywords: lymphatic filariasis, nanotechnology, rapid diagnosis, ELISA technique

Procedia PDF Downloads 115
511 Sources of Precipitation and Hydrograph Components of the Sutri Dhaka Glacier, Western Himalaya

Authors: Ajit Singh, Waliur Rahaman, Parmanand Sharma, Laluraj C. M., Lavkush Patel, Bhanu Pratap, Vinay Kumar Gaddam, Meloth Thamban

Abstract:

The Himalayan glaciers are a perennial source of water supply to Asia's major river systems, such as the Ganga, Brahmaputra and Indus. To improve our understanding of the sources of precipitation and the hydrograph components of the interior Himalayan glaciers, it is important to decipher the moisture sources and their contributions to the glaciers in this river system. To this end, we conducted an extensive pilot study on the Sutri Dhaka glacier, western Himalaya, during 2014-15. To determine the moisture sources, rain, surface snow, ice, and stream meltwater samples were collected and analyzed for stable oxygen (δ¹⁸O) and hydrogen (δD) isotopes. A two-component hydrograph separation was performed for the glacier stream using these isotopes, assuming that the contributions of rain, groundwater and spring water are negligible, based on field studies and the available literature. To validate the results of this separation, snow and ice ablation were measured using a network of bamboo stakes and snow pits. The δ¹⁸O and δD values in rain samples range from -5.3‰ to -20.8‰ and -31.7‰ to -148.4‰, respectively. It is noteworthy that the rain samples showed enriched values early in the season (July-August) and became progressively depleted toward the end of the season (September), likely due to the 'amount effect'. Similarly, old snow samples showed enriched isotopic values compared to fresh snow, likely because of sublimation operating on the old surface snow. The δ¹⁸O and δD values in glacier ice samples range from -11.6‰ to -15.7‰ and -31.7‰ to -148.4‰, whereas in the Sutri Dhaka meltwater stream they range from -12.7‰ to -16.2‰ and -82.9‰ to -112.7‰, respectively. The mean deuterium excess (d-excess) value in all collected samples exceeds 16‰, which suggests that the predominant moisture source of precipitation is the Western Disturbances.
Our estimates of the hydrograph components of the Sutri Dhaka meltwater from isotope hydrograph separation and from glaciological field methods agree within their uncertainties: the stream meltwater budget is dominated by glacier ice melt over snowmelt. The present study provides insights into the moisture sources and the mechanisms controlling the isotopic characteristics of Sutri Dhaka glacier water, and helps in understanding the snow and ice melt components in the Chandra basin, Western Himalaya.
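A two-component isotope hydrograph separation reduces to a single mixing equation. The end-member values below are illustrative, chosen within the ranges reported above, not the study's actual end members:

```python
# Two-component hydrograph separation sketch: the stream's δ¹⁸O is modeled
# as a mixture of the ice-melt and snow-melt end members.
# d_stream = f * d_ice + (1 - f) * d_snow, solved for the ice-melt fraction f.
# End-member values (in ‰) are illustrative only.

def ice_melt_fraction(d_stream, d_ice, d_snow):
    """Fraction of stream discharge attributable to the ice-melt end member."""
    return (d_stream - d_snow) / (d_ice - d_snow)

f = ice_melt_fraction(d_stream=-14.0, d_ice=-13.5, d_snow=-17.0)
print(round(f, 2))  # -> 0.86: a budget dominated by glacier ice melt
```

In practice each end-member δ value carries measurement and sampling uncertainty, which is why the abstract cross-validates the result against stake-and-pit ablation measurements.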

Keywords: d-excess, hydrograph separation, Sutri Dhaka, stable water isotopes, western Himalaya

Procedia PDF Downloads 152
510 Status of Sensory Profile Score among Children with Autism in Selected Centers of Dhaka City

Authors: Nupur A. D., Miah M. S., Moniruzzaman S. K.

Abstract:

Autism is a neurobiological disorder that affects a person's physical, social, and language skills. A child with autism has difficulty processing, integrating, and responding to sensory stimuli. Current estimates show that 45% to 96% of children with Autism Spectrum Disorder demonstrate sensory difficulties. As autism is a burning issue worldwide, it has become a highly prioritized and important service provision in Bangladesh. A sensory deficit not only hampers a child's normal development but also hampers the learning process and functional independence. The purpose of this study was to determine the prevalence of sensory dysfunction among children with autism and to recognize common patterns of sensory dysfunction. A cross-sectional study design was chosen. The study enrolled eighty children with autism and their parents, selected by systematic sampling. Data were collected using the Short Sensory Profile (SSP), a 38-item questionnaire; qualified graduate Occupational Therapists interviewed parents and observed the children's responses to sensory-related activities at four selected autism centers in Dhaka, Bangladesh. Item analyses were conducted to identify the items yielding the highest reported sensory processing dysfunction, using the SSP and the Statistical Package for the Social Sciences (SPSS) version 21.0 for data analysis. The study revealed that almost 78.25% of the children with autism had significant sensory processing dysfunction based on their sensory responses to relevant activities. Under-responsiveness/sensory seeking and auditory filtering were the least common problems among them.
On the other hand, most of the children (95%) showed definite to probable differences in sensory processing, including under-responsiveness/sensory seeking, auditory filtering, and tactile sensitivity. The results also showed a definite difference in sensory processing in 64 of the children; these children suffered from sensory difficulties that had a great impact on their Activities of Daily Living (ADLs) as well as their social interaction with others. Almost 95% of the children with autism require intervention to overcome or normalize the problem. The results give insight into the types of sensory processing dysfunction to consider during diagnosis and when determining treatment. Early identification of sensory problems is therefore very important, as it helps provide appropriate sensory input to minimize maladaptive behavior and bring the child closer to the normal range of adaptive behavior.
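For readers unfamiliar with SSP scoring, classification into the typical/probable/definite categories is driven by total-score cut-offs. The cut-offs below are the commonly cited ones for the 38-item SSP, but this is only an illustrative sketch: the values should be verified against the instrument's manual, and this is not the study's actual scoring code:

```python
# Illustrative scoring sketch for the Short Sensory Profile total score.
# 38 items, each rated 1-5, so totals range 38-190; lower = more dysfunction.
# Cut-offs are commonly cited values, to be verified against the SSP manual.

def classify_ssp_total(total):
    if not 38 <= total <= 190:
        raise ValueError("SSP total must lie in 38-190")
    if total >= 155:
        return "typical performance"
    if total >= 142:
        return "probable difference"
    return "definite difference"

print(classify_ssp_total(170))  # -> typical performance
print(classify_ssp_total(120))  # -> definite difference
```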

Keywords: autism, sensory processing difficulties, sensory profile, occupational therapy

Procedia PDF Downloads 65
509 Spatial Distribution and Cluster Analysis of Sexual Risk Behaviors and STIs Reported by Chinese Adults in Guangzhou, China: A Representative Population-Based Study

Authors: Fangjing Zhou, Wen Chen, Brian J. Hall, Yu Wang, Carl Latkin, Li Ling, Joseph D. Tucker

Abstract:

Background: The economic and social reforms designed to open China to the world have been successful, but they also appear to have laid the foundation for the reemergence of STIs since the 1980s. Changes in sexual behaviors, relationships, and norms among Chinese people contributed to the STI epidemic. With massive population movement over the last 30 years, early coital debut, multiple sexual partnerships, and unprotected sex have increased within the general population. Our objectives were to assess associations between residence location, sexual risk behaviors and sexually transmitted infections (STIs) among adults living in Guangzhou, China. Methods: Stratified cluster sampling following a two-step process was used to select residents aged 18-59 years in Guangzhou, China. Spatial methods, including Geographic Information Systems (GIS), were utilized to identify 1400 coordinates by latitude and longitude. Face-to-face household interviews were conducted to collect self-reported data on sexual risk behaviors and diagnosed STIs. Kulldorff's spatial scan statistic was implemented to detect the spatial distribution and clusters of sexual risk behaviors and STIs. The presence and location of statistically significant clusters were mapped using ArcGIS software. Results: Of the 1400 households, 1215 were approached for the survey; with 368 refusals, 751 surveys were completed. The prevalence of self-reported sexual risk behaviors ranged from 5.1% to 50.0%. The self-reported lifetime prevalence of diagnosed STIs was 7.06%. Anal intercourse clustered in an area along the border of the rural-urban continuum (p=0.001). High-rate clusters of alcohol or other drug use before sex (p=0.008) and of migrants who had lived in Guangzhou for less than one year (p=0.007) overlapped this cluster. Excess cases of sex without a condom (p=0.031) overlapped the cluster of college students (p<0.001).
Conclusions: Short-term migrants and college students reported more sexual risk behaviors. Programs promoting safer sex within these communities are warranted in Guangzhou to reduce the risk of STIs. Spatial analysis identified geographical clusters of sexual risk behaviors, which is critical for optimizing surveillance and targeting future control measures at these locations.
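At the core of Kulldorff's scan statistic is a likelihood ratio comparing the case rate inside a candidate window against the rate outside it. A minimal Bernoulli-model sketch follows; the counts are illustrative, not the study's data, and a real implementation would scan many windows and assess significance by Monte Carlo replication:

```python
import math

# Bernoulli-model log-likelihood ratio for one candidate scan window.
# c/n = cases/respondents inside the window; C/N = totals overall.
# Counts below are illustrative only.

def scan_llr(c, n, C, N):
    inside, outside = c / n, (C - c) / (N - n)
    if inside <= outside:          # only high-rate windows are of interest
        return 0.0
    def ll(k, m):                  # log-likelihood of k successes out of m
        p = k / m
        return k * math.log(p) + (m - k) * math.log(1 - p) if 0 < p < 1 else 0.0
    return ll(c, n) + ll(C - c, N - n) - ll(C, N)

# A window with 20 of 50 reporting a behavior, against 53 of 751 overall,
# yields a large LLR; the window would then be ranked against all others.
print(round(scan_llr(c=20, n=50, C=53, N=751), 2))
```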

Keywords: cluster analysis, migrant, sexual risk behaviors, spatial distribution

Procedia PDF Downloads 340
508 Fuzzy Optimization for Identifying Anticancer Targets in Genome-Scale Metabolic Models of Colon Cancer

Authors: Feng-Sheng Wang, Chao-Ting Cheng

Abstract:

Developing a drug from conception to launch is costly and time-consuming. Computer-aided methods can reduce research costs and accelerate the development process during the early drug discovery and development stages. This study developed a fuzzy multi-objective hierarchical optimization framework for identifying potential anticancer targets in a metabolic model. First, RNA-seq expression data from colorectal cancer samples and their healthy counterparts were used to reconstruct tissue-specific genome-scale metabolic models. The aim of the optimization framework was to identify anticancer targets that lead to cancer cell death and to evaluate the metabolic flux perturbations in normal cells caused by cancer treatment. Four objectives were established in the framework: to evaluate the mortality of cancer cells under treatment, and to minimize side effects, such as toxicity-induced tumorigenesis in normal cells, while keeping metabolic perturbations small. Through fuzzy set theory, the multi-objective optimization problem was converted into a trilevel maximizing decision-making (MDM) problem. A nested hybrid differential evolution algorithm was applied to solve the trilevel MDM problem using two nutrient media, in order to identify anticancer targets in the genome-scale metabolic model of colorectal cancer. Using Dulbecco's Modified Eagle Medium (DMEM), the computational results reveal that the identified anticancer targets were mostly involved in cholesterol biosynthesis, pyrimidine and purine metabolism, the glycerophospholipid biosynthetic pathway and the sphingolipid pathway. Using Ham's medium, however, the genes involved in cholesterol biosynthesis were unidentifiable. A comparison of the uptake reactions of DMEM and Ham's medium revealed that DMEM includes no cholesterol uptake reaction.
Two additional media, i.e., DMEM with a cholesterol uptake reaction added and Ham's medium with it removed, were used to investigate the relationship of tumor cell growth with nutrient components and anticancer target genes. The genes involved in cholesterol biosynthesis proved identifiable when no cholesterol uptake reaction was active in the culture medium, but became unidentifiable when such a reaction was active.
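The max-min scalarization at the heart of such fuzzy multi-objective frameworks can be sketched in a few lines. The membership ranges, objective names and candidate "targets" below are hypothetical illustrations, not the paper's actual metabolic model or target genes:

```python
# Fuzzy max-min (Bellman-Zadeh style) sketch: each objective is mapped to a
# membership degree in [0, 1], and the candidate maximizing the smallest
# membership is selected. All values below are illustrative only.

def membership(value, worst, best):
    """Linear membership: 0 at the worst acceptable value, 1 at the best."""
    return max(0.0, min(1.0, (value - worst) / (best - worst)))

# (cancer-cell kill score, normal-cell safety score); higher is better for both
candidates = {"target_A": (0.9, 0.4), "target_B": (0.7, 0.7), "target_C": (0.5, 0.9)}

def overall(scores):
    kill, safety = scores
    return min(membership(kill, 0.3, 1.0), membership(safety, 0.3, 1.0))

best = max(candidates, key=lambda k: overall(candidates[k]))
print(best)  # -> target_B: the best worst-case membership across objectives
```

The max-min form is what turns the original multi-objective problem into a single scalar level of the trilevel MDM problem that the hybrid differential evolution then solves.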

Keywords: cancer metabolism, genome-scale metabolic model, constraint-based model, multilevel optimization, fuzzy optimization, hybrid differential evolution

Procedia PDF Downloads 80
507 Modeling Standpipe Pressure Using Multivariable Regression Analysis by Combining Drilling Parameters and a Herschel-Bulkley Model

Authors: Seydou Sinde

Abstract:

The aim of this paper is to formulate mathematical expressions that can be used to estimate the standpipe pressure (SPP). The developed formulas take into account the main factors that directly or indirectly affect the behavior of SPP values; fluid rheology and well hydraulics are among these essential factors. Mud plastic viscosity, yield point, flow behavior index, consistency index, flow rate, and drillstring and annular geometries are represented by the frictional pressure (Pf), one of the input independent parameters, which is calculated in this paper using the Herschel-Bulkley rheological model. Other input independent parameters include the rate of penetration (ROP), the applied load or weight on bit (WOB), bit revolutions per minute (RPM), bit torque (TRQ), and the hole inclination and direction coupled in the hole curvature or dogleg (DL). The technique of repeating parameters and the Buckingham Pi theorem are used to reduce the input independent parameters to the dimensionless revolutions per minute (RPMd), the dimensionless torque (TRQd), and the dogleg, which is already dimensionless (radians). Multivariable linear and polynomial regression in PTC Mathcad Prime 4.0 is used to determine the relationships between the dependent parameter, SPP, and the three dimensionless groups. Three models proved sufficiently satisfactory for estimating the standpipe pressure: multivariable linear regression model 1, with three regression coefficients, for vertical wells; multivariable linear regression model 2, with four regression coefficients, for deviated wells; and a multivariable quadratic polynomial regression model, with six regression coefficients, for both vertical and deviated wells.
Although linear regression model 2 (four coefficients) is relatively more complex and contains an additional term compared with linear regression model 1 (three coefficients), it did not add significant improvement over the latter except for some minor values. Thus, the effect of the hole curvature or dogleg is insignificant and can be omitted from the input independent parameters without a significant loss of accuracy. The quadratic polynomial regression model is considered the most accurate, owing to its relatively higher accuracy in most cases. Data from nine wells in the Middle East were used to run the developed models, with satisfactory results from all of them, the quadratic polynomial model giving the best and most accurate results. These models are useful not only to monitor and predict SPP values accurately, but also to check the integrity of the well hydraulics early and to take corrective actions should any unexpected problems appear, such as pipe washouts, jet plugging, excessive mud losses, fluid gains, kicks, etc.
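The three-coefficient linear form (model 1, for vertical wells) can be illustrated with a small least-squares sketch: SPP = a0 + a1·RPMd + a2·TRQd. The synthetic records and coefficient values below are hypothetical; the paper's actual coefficients come from Mathcad fits to field data:

```python
# Least-squares fit of SPP = a0 + a1*RPMd + a2*TRQd via normal equations
# (X^T X) a = X^T y, solved by Gaussian elimination with partial pivoting.
# Data and coefficients are synthetic stand-ins for field measurements.

def fit_linear(X, y):
    """X rows are [1, RPMd, TRQd]; returns the coefficient vector a."""
    k = len(X[0])
    A = [[sum(r[i] * r[j] for r in X) for j in range(k)] for i in range(k)]
    b = [sum(r[i] * yi for r, yi in zip(X, y)) for i in range(k)]
    for col in range(k):                      # forward elimination
        piv = max(range(col, k), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, k):
            f = A[r][col] / A[col][col]
            A[r] = [ar - f * ac for ar, ac in zip(A[r], A[col])]
            b[r] -= f * b[col]
    a = [0.0] * k
    for i in reversed(range(k)):              # back substitution
        a[i] = (b[i] - sum(A[i][j] * a[j] for j in range(i + 1, k))) / A[i][i]
    return a

# Synthetic records generated from known coefficients (120, 8.5, 3.2)
rows = [(1.0, rpm, trq) for rpm in (0.5, 1.0, 1.5, 2.0) for trq in (0.2, 0.6, 1.1)]
spp = [120 + 8.5 * r[1] + 3.2 * r[2] for r in rows]
coeffs = fit_linear(rows, spp)
print([round(c, 1) for c in coeffs])  # -> [120.0, 8.5, 3.2]
```

The quadratic model of the paper simply extends the row vector with squared and cross terms of the dimensionless groups, adding three more coefficients.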

Keywords: standpipe, pressure, hydraulics, nondimensionalization, parameters, regression

Procedia PDF Downloads 84
506 Green Organic Chemistry, a New Paradigm in Pharmaceutical Sciences

Authors: Pesaru Vigneshwar Reddy, Parvathaneni Pavan

Abstract:

Green organic chemistry, one of the most actively researched topics today, has been in demand since the 1990s. Organic chemicals are important starting materials for a great number of major chemical industries; the production of organic chemicals as raw materials or reagents for other applications is a major manufacturing sector, covering polymers, pharmaceuticals, pesticides, paints, artificial fibers, food additives, etc. Organic synthesis on a large scale, compared to the laboratory scale, involves the use of energy, basic chemical ingredients from the petrochemical sector, and catalysts, followed at the end of the reaction by separation, purification, storage, packing, distribution, etc. These processes pose many health and safety problems for workers, in addition to the environmental problems caused by the use of chemicals and their disposal as waste. Green chemistry, with its 12 principles, would like to change the conventional ways that were used for decades to make synthetic organic chemicals and to promote the use of less toxic starting materials. It aims to increase the efficiency of synthetic methods, to use less toxic solvents, to reduce the number of stages in synthetic routes, and to minimize waste as far as practically possible. In this way, organic synthesis becomes part of the effort toward sustainable development. Green chemistry also encourages research and innovation on many practical aspects of organic synthesis in university and institutional research laboratories. By changing the methodologies of organic synthesis, health and safety will be improved at the small-scale laboratory level and extended, through new techniques, to large-scale industrial production.
Three key developments in green chemistry include the use of supercritical carbon dioxide as a green solvent, aqueous hydrogen peroxide as an oxidising agent, and the use of hydrogen in asymmetric synthesis. Green chemistry also focuses on replacing traditional heating with modern methods such as microwave heating, so that the carbon footprint is reduced as far as possible. A further benefit is reduced environmental pollution through the use of less toxic reagents, the minimization of waste, and more biodegradable byproducts. This paper considers some of the basic principles, approaches, and early achievements of green chemistry as a branch of chemistry, with a summary of the green chemistry principles and a discussion of the E-factor, the old and new syntheses of ibuprofen, microwave techniques, and some recent advancements.
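The E-factor mentioned in the abstract is simply the mass of waste generated per mass of product. A one-line illustration, with hypothetical figures rather than data for any specific process:

```python
# E-factor = kg waste / kg product; lower is greener.
# The input/output masses below are illustrative only.

def e_factor(total_input_kg, product_kg):
    return (total_input_kg - product_kg) / product_kg

print(round(e_factor(total_input_kg=60.0, product_kg=10.0), 1))  # -> 5.0
```

Comparing E-factors is how the classical six-step ibuprofen synthesis is typically contrasted with the greener three-step route: fewer steps and higher atom economy drive the waste-per-product ratio down.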

Keywords: energy, E-factor, carbon footprint, microwave, sonochemistry, advancement

Procedia PDF Downloads 306
505 DTI Connectome Changes in the Acute Phase of Aneurysmal Subarachnoid Hemorrhage Improve Outcome Classification

Authors: Sarah E. Nelson, Casey Weiner, Alexander Sigmon, Jun Hua, Haris I. Sair, Jose I. Suarez, Robert D. Stevens

Abstract:

Graph-theoretical information from structural connectomes indicated significant connectivity changes and improved acute prognostication in a Random Forest (RF) model of aneurysmal subarachnoid hemorrhage (aSAH), a condition that can lead to significant morbidity and mortality and for which outcome prediction has traditionally been poor. This study's hypothesis was that structural connectivity changes occur in canonical brain networks of acute aSAH patients, and that these changes are associated with functional outcome at six months. In a prospective cohort of patients admitted to a single institution for management of acute aSAH, patients underwent diffusion tensor imaging (DTI) as part of a multimodal MRI scan. A weighted undirected structural connectome was created from each patient's images using Constant Solid Angle (CSA) tractography, with 176 regions of interest (ROIs) defined by the Johns Hopkins Eve atlas. ROIs were sorted into four networks: Default Mode Network, Executive Control Network, Salience Network, and Whole Brain. The resulting nodes and edges were characterized using graph-theoretic features, including Node Strength (NS), Betweenness Centrality (BC), Network Degree (ND), and Connectedness (C). Clinical features (including demographics and the World Federation of Neurosurgical Societies scale) and graph features were used separately and in combination to train RF and Logistic Regression classifiers to predict two outcomes: dichotomized modified Rankin Score (mRS) at discharge and at six months after discharge (favorable outcome mRS 0-2, unfavorable outcome mRS 3-6). A total of 56 aSAH patients underwent DTI a median of 7 (IQR = 8.5) days after admission. The best-performing model (RF), combining clinical and DTI graph features, had a mean Area Under the Receiver Operating Characteristic Curve (AUROC) of 0.88 ± 0.00 and an Area Under the Precision-Recall Curve (AUPRC) of 0.95 ± 0.00 over 500 trials.
The combined model performed better than the clinical model alone (AUROC 0.81 ± 0.01, AUPRC 0.91 ± 0.00). The highest-ranked graph features for prediction were NS, BC, and ND. These results indicate reorganization of the connectome early after aSAH. The performance of clinical prognostic models was increased significantly by the inclusion of DTI-derived graph connectivity metrics. This methodology could significantly improve prognostication of aSAH.
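Two of the simpler graph features named above, node strength and degree, can be computed directly from a weighted edge list. The toy connectome below is illustrative (the study used 176 Eve-atlas ROIs); betweenness centrality requires a shortest-path routine and is omitted here:

```python
# Toy weighted undirected connectome as an edge list (ROI names and
# weights are illustrative, not the study's data).

edges = [("ROI_1", "ROI_2", 0.8), ("ROI_1", "ROI_3", 0.5), ("ROI_2", "ROI_3", 0.2)]

def node_strength(node, edges):
    """Sum of the weights on edges incident to `node`."""
    return sum(w for u, v, w in edges if node in (u, v))

def node_degree(node, edges):
    """Number of edges incident to `node`."""
    return sum(1 for u, v, _ in edges if node in (u, v))

print(node_strength("ROI_1", edges))  # -> 1.3
print(node_degree("ROI_1", edges))    # -> 2
```

Per-node features like these, computed within each canonical network, are what populate the feature vectors fed to the RF and logistic regression classifiers.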

Keywords: connectomics, diffusion tensor imaging, graph theory, machine learning, subarachnoid hemorrhage

Procedia PDF Downloads 189
504 Understanding the Role of Nitric Oxide Synthase 1 in Low-Density Lipoprotein Uptake by Macrophages and Implication in Atherosclerosis Progression

Authors: Anjali Roy, Mirza S. Baig

Abstract:

Atherosclerosis is a chronic inflammatory disease characterized by the formation of lipid-rich plaque enriched with a necrotic core, modified lipid accumulation, smooth muscle cells, endothelial cells, leucocytes and macrophages. Macrophage foam cells play a critical role in the occurrence and development of inflammatory atherosclerotic plaque. Foam cells are the fat-laden macrophages found in the initial stage of atherosclerotic lesion formation. Foam cells are an indication of plaque build-up, or atherosclerosis, which is commonly associated with increased risk of heart attack and stroke as a result of arterial narrowing and hardening. The mechanisms that drive atherosclerotic plaque progression remain largely unknown. Dissecting the molecular mechanism of macrophage foam cell formation will help to develop therapeutic interventions for atherosclerosis. To investigate the mechanism, we studied the role of nitric oxide synthase 1 (NOS1)-mediated nitric oxide (NO) in low-density lipoprotein (LDL) uptake by bone marrow-derived macrophages (BMDM). Using confocal microscopy, we found that incubation of macrophages with the NOS1 inhibitors TRIM (1-(2-trifluoromethylphenyl) imidazole) or L-NAME (N omega-nitro-L-arginine methyl ester) prior to LDL treatment significantly reduces LDL uptake by BMDM. Further, addition of an NO donor (DEA NONOate) to NOS1 inhibitor-treated macrophages restores LDL uptake. Our data strongly suggest that NOS1-derived NO regulates LDL uptake by macrophages and foam cell formation. Moreover, we also measured proinflammatory cytokine mRNA expression by real-time PCR in BMDM treated with LDL and copper-oxidized LDL (OxLDL) in the presence and absence of the inhibitor. Normal LDL does not evoke cytokine expression, whereas OxLDL induces proinflammatory cytokine expression, which is significantly reduced in the presence of the NOS1 inhibitor.
Rapid NOS1-derived NO and its stable derivatives act as signaling agents for inducible NOS2 expression in endothelial cells, leading to disruption and dysfunction of the endothelial lining of the vascular wall. This study highlights the role of NOS1 as a critical player in foam cell formation and reveals much about the key molecular proteins involved in atherosclerosis. Thus, targeting NOS1 would be a useful strategy for reducing LDL uptake by macrophages at an early stage of the disease and hence dampening atherosclerosis progression.

Keywords: atherosclerosis, NOS1, inflammation, oxidized LDL

Procedia PDF Downloads 127
503 Immuno-Protective Role of Mucosal Delivery of Lactococcus lactis Expressing Functionally Active JlpA Protein on Campylobacter jejuni Colonization in Chickens

Authors: Ankita Singh, Chandan Gorain, Amirul I. Mallick

Abstract:

Successful adherence to mucosal epithelial cells is the key early step in Campylobacter jejuni (C. jejuni) pathogenesis. A set of Surface Exposed Colonization Proteins (SECPs) are among the major factors involved in host cell adherence and invasion by C. jejuni. Among them, the constitutively expressed surface-exposed lipoprotein adhesin of C. jejuni, JlpA, interacts with intestinal heat shock protein 90 (hsp90α) and contributes to disease progression by triggering a pro-inflammatory response via activation of the NF-κB and p38 MAP kinase pathways. Together with its surface expression, high sequence conservation and the predicted predominance of several B cell epitopes, JlpA has the potential to become an effective vaccine candidate against a wide range of Campylobacter spp., including C. jejuni. Given that chickens are the primary source of C. jejuni and persistent gut colonization remains a major cause of foodborne disease in humans, the present study used chickens as a model to test the immuno-protective efficacy of JlpA protein. Taking into account that the gastrointestinal tract is the focal site of C. jejuni colonization, to exploit the benefit of mucosal (intragastric) delivery of JlpA protein, a food-grade, Nisin-inducible, lactic acid-producing bacterium, Lactococcus lactis (L. lactis), was engineered to express recombinant JlpA protein (rJlpA) on the bacterial surface. Following evaluation of optimal surface expression and functionality of the recombinant JlpA protein expressed by recombinant L. lactis (rL. lactis), the immuno-protective role of intragastric administration of live rL. lactis was assessed in commercial broiler chickens. In addition to a significant elevation of antigen-specific mucosal immune responses in the intestine of chickens that received three doses of rL. lactis, marked upregulation of Toll-like receptor 2 (TLR2) gene expression in association with mixed pro-inflammatory responses (both Th1 and Th17 type) was observed. Furthermore, intragastric delivery of rJlpA expressed by rL. lactis, but not the injectable form, resulted in a significant reduction in C. jejuni colonization in chickens, suggesting that mucosal delivery of live rL. lactis expressing JlpA serves as a promising vaccine platform to induce strong immuno-protective responses against C. jejuni in chickens.

Keywords: chickens, lipoprotein adhesion of Campylobacter jejuni, immuno-protection, Lactococcus lactis, mucosal delivery

Procedia PDF Downloads 139
502 Research on the Optimization of Satellite Mission Scheduling

Authors: Pin-Ling Yin, Dung-Ying Lin

Abstract:

Satellites play an important role in our daily lives, from monitoring the Earth's environment and providing real-time disaster imagery to predicting extreme weather events. As technology advances and demands increase, the tasks undertaken by satellites have become increasingly complex, with more stringent resource management requirements. A common challenge in satellite mission scheduling is the limited availability of resources, including onboard memory, ground station accessibility, and satellite power. In this context, efficiently scheduling and managing increasingly complex satellite missions under constrained resources has become a critical issue. The core of Satellite Onboard Activity Planning (SOAP) lies in optimizing the scheduling of the received tasks, arranging them on a timeline to form an executable onboard mission plan. This study aims to develop an optimization model that considers the various constraints involved in satellite mission scheduling, such as non-overlapping execution periods for certain types of tasks, the requirement that tasks must fall within the contact range of specified types of ground stations during their execution, onboard memory capacity limits, and collaborative constraints between different types of tasks. Specifically, this research constructs a mixed-integer programming model and solves it with a commercial optimization package. However, as the problem size increases, the problem becomes harder to solve; therefore, a heuristic algorithm was developed to address the limitations of the commercial optimization package at larger scales. The goal is to effectively plan satellite missions, maximizing the total number of executable tasks while considering task priorities and ensuring that tasks are completed as early as possible without violating feasibility constraints.
To verify the feasibility and effectiveness of the algorithm, test instances of various sizes were generated, and the results were validated through feedback from on-site users and compared against solutions obtained from a commercial optimization package. Numerical results show that the algorithm performs well under various scenarios, consistently meeting user requirements. The satellite mission scheduling algorithm proposed in this study can be flexibly extended to different types of satellite mission demands, achieving optimal resource allocation and enhancing the efficiency and effectiveness of satellite mission execution.
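A priority-driven greedy heuristic of the kind described, placing tasks on a shared timeline without overlaps while respecting release times and deadlines, might be sketched as follows. The task attributes and tie-breaking rules here are illustrative assumptions, not the authors' actual algorithm.

```python
from dataclasses import dataclass

@dataclass
class Task:
    name: str
    priority: int   # higher = more important
    duration: int   # time units needed on the timeline
    release: int    # earliest allowed start
    deadline: int   # latest allowed finish

def greedy_schedule(tasks):
    """Sort tasks by priority (descending), then place each one at the
    earliest feasible start that does not overlap already-placed tasks;
    a task that cannot meet its deadline is dropped."""
    scheduled = []   # (task, start) pairs
    busy = []        # occupied [start, end) intervals
    for t in sorted(tasks, key=lambda t: (-t.priority, t.release)):
        start = t.release
        for s, e in sorted(busy):
            if start + t.duration <= s:
                break                  # fits in the gap before this interval
            start = max(start, e)      # otherwise push past the conflict
        if start + t.duration <= t.deadline:
            scheduled.append((t, start))
            busy.append((start, start + t.duration))
    return scheduled

# Toy mission plan: three tasks competing for one timeline.
tasks = [Task("imaging", 3, 4, 0, 10),
         Task("downlink", 2, 3, 2, 8),
         Task("calibration", 1, 5, 0, 20)]
plan = greedy_schedule(tasks)
```

A real SOAP heuristic would add memory and ground-station-contact checks inside the feasibility test, but the interval-placement skeleton stays the same.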

Keywords: mixed-integer programming, meta-heuristics, optimization, resource management, satellite mission scheduling

Procedia PDF Downloads 25
501 The Environmental Impact Assessment of Land Use Planning (Case Study: Tannery Industry in Al-Garma District)

Authors: Husam Abdulmuttaleb Hashim

Abstract:

Environmental pollution problems represent a great challenge to the world, threatening to undo the progress mankind has achieved. Organizations and associations concerned with the environment are trying to warn the world of the coming danger arising from the excessive consumption of natural resources without regard for the resulting damage. Most urban centers suffer from environmental pollution problems and the health, economic, and social risks that result from this pollution. Land use planning is responsible for distributing different uses within urban centers and controlling the interactions between these uses to achieve a balanced arrangement of activities in cities; the occurrence of environmental problems under the existing land use planning process therefore points to a disorder or insufficiency in that process. This disorder lies in the lack of sufficient attention to environmental considerations during land use planning and the preparation of the master plan. The research therefore studies this problem and seeks solutions for it. It assumes that using accurate, scientific methods in the early stages of the land use planning process will prevent environmental pollution problems from occurring in the future, and it aims to demonstrate the importance of environmental impact assessment (EIA) as a planning tool for investigating and predicting the pollution ranges of polluting land uses within the land use planning process.
This research covers the concept of environmental assessment and its kinds, clarifies environmental impact assessment and its contents, and addresses the concepts of urban planning and land use planning. It then examines the current situation of the case study (Al-Garma district) and its land use planning, identifying the most environmentally polluting use: the industrial land use represented by the tannery industries. The current situation of this land use, its contents, and the environmental impacts resulting from it are described. Water and soil tests carried out by the researcher were analyzed, and an environmental evaluation was performed by applying an environmental impact assessment matrix using the direct method to reveal the ranges of pollution imposed by the industrial land use on the surrounding environment. Environmental and site criteria and standards were then applied using GIS and AutoCAD to select the best alternative site for the industrial zone in Al-Garma district, after the research established the unsuitability of its current location against those criteria. The research concludes with findings and recommendations that clarify the established facts and set out appropriate solutions.
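The impact-matrix evaluation applied with the direct method can be illustrated with a Leopold-style calculation: each cell of the action-versus-component matrix holds a magnitude and an importance score, and per-component totals reveal where pollution concentrates. The actions, components, and scores below are entirely hypothetical, chosen only to show the aggregation.

```python
# Hypothetical Leopold-style EIA matrix for a tannery: rows are project
# actions, columns are environmental components; each filled cell holds a
# (magnitude, importance) pair on a 1-10 scale.
components = ["water", "soil", "air"]
matrix = {
    ("effluent discharge", "water"): (9, 10),
    ("effluent discharge", "soil"): (5, 6),
    ("solid waste", "soil"): (6, 7),
    ("odor emission", "air"): (4, 5),
}

def component_scores(matrix, components):
    """Aggregate magnitude x importance per environmental component."""
    scores = {c: 0 for c in components}
    for (_, comp), (mag, imp) in matrix.items():
        scores[comp] += mag * imp
    return scores

scores = component_scores(matrix, components)
worst = max(scores, key=scores.get)  # component bearing the heaviest impact
```

In a real assessment the scores would come from field measurements such as the water and soil tests described above, and the totals would guide which components drive the siting decision.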

Keywords: EIA, pollution, tannery industry, land use planning

Procedia PDF Downloads 449
500 Antimicrobial and Antibiofilm Properties of Fatty Acids Against Streptococcus Mutans

Authors: A. Mulry, C. Kealey, D. B. Brady

Abstract:

Planktonic bacteria can form biofilms, which are microbial aggregates embedded within a matrix of extracellular polymeric substances (EPS). They can be found attached to abiotic or biotic surfaces. Biofilms are responsible for oral diseases such as dental caries and gingivitis and for the progression of periodontal disease. Biofilms can resist 500 to 1000 times the concentration of biocides and antibiotics used to kill planktonic bacteria. Biofilm development on oral surfaces involves four stages: initial attachment, early development, maturation and dispersal of planktonic cells. The Minimum Inhibitory Concentration (MIC) was determined for a range of saturated and unsaturated fatty acids using the resazurin assay, followed by serial dilution and spot plating on BHI agar plates to establish the Minimum Bactericidal Concentration (MBC). Log reduction of bacteria was also evaluated for each fatty acid. The Minimum Biofilm Inhibition Concentration (MBIC) was determined using the crystal violet assay in 96-well plates on forming and pre-formed S. mutans biofilms using BHI supplemented with 1% sucrose. The saturated medium-chain fatty acids Octanoic (C8.0), Decanoic (C10.0) and Undecanoic acid (C11.0) do not display strong antibiofilm properties; however, Lauric (C12.0) and Myristic (C14.0) acids display moderate antibiofilm properties, with 97.83% and 97.5% biofilm inhibition at 1000 µM, respectively. Monounsaturated Oleic acid (C18.1) and the polyunsaturated long-chain fatty acid Linoleic acid (C18.2) display potent antibiofilm properties, with biofilm inhibition of 99.73% at 125 µM and 100% at 65.5 µM, respectively. The long-chain polyunsaturated Omega-3 fatty acids α-Linolenic acid (C18.3), Eicosapentaenoic Acid (EPA) (C20.5) and Docosahexaenoic Acid (DHA) (C22.6) displayed strong antibiofilm efficacy at concentrations ranging from 31.25-250 µg/ml. DHA is the most promising antibiofilm agent, with 99.73% inhibition at 15.625 µg/ml.
This may be due to the presence of six double bonds and the structural orientation of the fatty acid. To conclude, the fatty acids displaying the most antimicrobial activity appear to be medium- or long-chain unsaturated fatty acids containing one or more double bonds. The most promising agents include the Omega-3 fatty acids α-Linolenic acid, EPA and DHA, the Omega-6 fatty acid Linoleic acid, as well as the Omega-9 fatty acid Oleic acid. These results indicate that fatty acids have the potential to be used as antimicrobial and antibiofilm agents against S. mutans. Future work involves further screening of the most potent fatty acids against a range of bacteria, including Gram-positive and Gram-negative oral pathogens, and incorporating the most effective fatty acids onto dental implant devices to prevent biofilm formation.

Keywords: antibiofilm, biofilm, fatty acids, S. mutans

Procedia PDF Downloads 157
499 Prediction of Sound Transmission Through Framed Façade Systems

Authors: Fangliang Chen, Yihe Huang, Tejav Deganyar, Anselm Boehm, Hamid Batoul

Abstract:

With growing population density and further urbanization, the average noise level in cities is increasing. Excessive noise is not only annoying but also has a negative impact on human health. To deal with increasing city noise, environmental regulations impose higher standards of acoustic comfort in buildings by mitigating noise transmission from the building envelope exterior to the interior. Framed window, door and façade systems are the leading choice for modern fenestration construction, offering proven weathering reliability, environmental efficiency, and ease of installation. The overall sound insulation of such systems depends on both the glass and the frames. Glass usually covers the majority of the exposed surface and is thus the main path of sound energy transmission, while frames in modern façade systems are becoming slimmer for aesthetic reasons and account for only a small percentage of the exposed surface. Nevertheless, frames can provide substantial transmission paths for sound because much less mass lies across the path, and they can therefore become the limiting factor in the acoustic performance of the whole system. There are various methodologies and numerical programs that can accurately predict the acoustic performance of either glass or frames alone. However, due to the vast difference in size and dimension between frame and glass in the same system, there is no satisfactory theoretical approach or affordable simulation tool in current practice to assess the overall acoustic performance of a whole façade system. For this reason, laboratory testing turns out to be the only reliable source. However, laboratory testing is time-consuming and costly, and different labs may produce slightly different results because of variations in test chambers, sample mounting, and test operation, which significantly constrains the early-phase design of framed façade systems.
To address this dilemma, this study provides an effective analytical methodology to predict the acoustic performance of framed façade systems, based on a vast amount of acoustic test results on glass, frames and whole façade systems consisting of both. Further test results validate that the current model is able to accurately predict the overall sound transmission loss of a framed system as long as the acoustic behavior of the frame is available. Though the presented methodology was mainly developed from façade systems with aluminum frames, it can easily be extended to systems with frames of other materials such as steel, PVC or wood.
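One standard way to combine the contributions of glass and frame, which is the kind of calculation an analytical approach like this builds on, is the area-weighted average of transmission coefficients: each element's sound transmission loss (STL) is converted to a transmission coefficient, averaged by exposed area, and converted back. The element areas and STL values below are illustrative assumptions, not the study's data.

```python
import math

def composite_stl(elements):
    """elements: list of (area_m2, stl_dB) pairs for the facade's parts.
    Converts each STL to a transmission coefficient tau = 10^(-R/10),
    area-averages the coefficients, and converts back to decibels."""
    total_area = sum(area for area, _ in elements)
    tau = sum(area * 10 ** (-r / 10) for area, r in elements) / total_area
    return -10 * math.log10(tau)

# Illustrative facade: 8 m2 of glazing at 35 dB and 1 m2 of frame at 30 dB.
r_total = composite_stl([(8.0, 35.0), (1.0, 30.0)])
```

Note how the 30 dB frame drags the composite below the glazing's 35 dB despite covering only about 11% of the area, which is exactly why slim frames can limit whole-system performance.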

Keywords: city noise, building facades, sound mitigation, sound transmission loss, framed façade system

Procedia PDF Downloads 61
498 Thermal and Visual Comfort Assessment in Office Buildings in Relation to Space Depth

Authors: Elham Soltani Dehnavi

Abstract:

In today’s compact cities, bringing daylight and fresh air into buildings is a significant challenge, but it also presents opportunities to reduce energy consumption by reducing the need for artificial lighting and mechanical systems. Simple adjustments to building form can contribute to efficiency. This paper examines how the relationship between the width and depth of rooms in office buildings affects visual and thermal comfort, and consequently energy savings. Based on these evaluations, we can determine the best location for sedentary areas in a room. We can also propose improvements to occupant experience and minimize the difference between predicted and measured performance in buildings by changing other design parameters, such as natural ventilation strategies, glazing properties, and shading. This study investigates spatial daylighting and thermal comfort conditions for a range of room configurations using computer simulations, and then suggests the best depth for optimizing both daylighting and thermal comfort, and consequently energy performance, for each room type. The Window-to-Wall Ratio (WWR) is 40%, with a 0.8 m window sill and a 0.4 m window head. Some parameters are fixed according to building codes and standards, and the simulations are done in Seattle, USA. The simulation results are presented as evaluation grids using thresholds for different metrics: Daylight Autonomy (DA), spatial Daylight Autonomy (sDA), Annual Sunlight Exposure (ASE), and Daylight Glare Probability (DGP) for visual comfort, and Predicted Mean Vote (PMV), Predicted Percentage of Dissatisfied (PPD), occupied Thermal Comfort Percentage (occTCP), over-heated percent, under-heated percent, and Standard Effective Temperature (SET) for thermal comfort, all extracted from Grasshopper scripts. The simulation tools are Grasshopper plugins such as Ladybug, Honeybee, and EnergyPlus.
According to the results, some metrics change little along the room depth while others change significantly, so we can overlap the evaluation grids to determine the comfort zone. The overlapped grids contain 8 metrics, and the pixels that meet all 8 metric thresholds define the comfort zone. With these overlapped maps, we can locate sedentary areas within the comfort zones inside rooms. Other areas can be used for tasks that are not performed permanently or that need lower or higher amounts of daylight, and where thermal comfort is less critical to user experience. The results can be summarized in a table to be used as a guideline by designers in the early stages of the design process.
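The grid-overlap step can be illustrated with NumPy boolean masks: each metric grid is thresholded into a pass/fail mask, and the logical AND of all masks gives the comfort zone. The grid size, metric values, and thresholds below are synthetic placeholders, not the study's simulation output.

```python
import numpy as np

# Toy 10x10 evaluation grid over a room floor plan; each array holds one
# metric's per-pixel value (random stand-ins for the simulation results).
rng = np.random.default_rng(1)
shape = (10, 10)
da  = rng.uniform(0, 100, shape)   # Daylight Autonomy, %
ase = rng.uniform(0, 500, shape)   # Annual Sunlight Exposure, hours
dgp = rng.uniform(0, 1, shape)     # Daylight Glare Probability
ppd = rng.uniform(0, 100, shape)   # Predicted Percentage of Dissatisfied, %

# Illustrative thresholds; a full analysis would use all 8 study metrics.
masks = [
    da >= 50,     # enough daylight
    ase <= 250,   # not over-exposed to direct sun
    dgp <= 0.40,  # glare within an acceptable range
    ppd <= 20,    # thermally acceptable
]
comfort_zone = np.logical_and.reduce(masks)  # pixels passing every threshold
coverage = comfort_zone.mean()               # fraction of floor area in the zone
```

Pixels where `comfort_zone` is True would be the candidate locations for sedentary areas; the remaining pixels fail at least one threshold.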

Keywords: occupant experience, office buildings, space depth, thermal comfort, visual comfort

Procedia PDF Downloads 183
497 Cai Guo-Qiang: A Chinese Artist at the Cutting-Edge of Global Art

Authors: Marta Blavia

Abstract:

Magiciens de la terre, organized in 1989 by the Centre Pompidou, became 'the first worldwide exhibition of contemporary art' by presenting artists from Western and non-Western countries, including three Chinese artists. For the first time, the West turned its eyes to other countries not as exotic sources of inspiration, but as places where contemporary art was also being created. One year later, Chine: demain pour hier was inaugurated as the first Chinese avant-garde group exhibition in the West. Among the artists included was Cai Guo-Qiang who, like many other Chinese artists, had left his home country in the eighties in pursuit of greater creative freedom. By exploring non-Western artistic perspectives, both landmark exhibitions questioned the predominance of the Eurocentric vision in the construction of art history. More than anything else, these exhibitions laid the groundwork for the rise of the so-called 'global contemporary art' phenomenon. At the same time, 1989 was also a turning point in Chinese art history. Because of the Tiananmen student protests, the Chinese government undertook a series of measures to suppress avant-garde artistic activity after a decade of relative openness. During the eighties, and especially after the Tiananmen crackdown, some important artists began to leave China and move overseas, such as Xu Bing and Ai Weiwei (USA); Chen Zhen and Huang Yong Ping (France); and Cai Guo-Qiang (Japan). After emigrating, Chinese overseas artists began to develop projects in accordance with their new environments and audiences and to appear in numerous international exhibitions. With their creations, which moved freely between a variety of Eastern and Western art sources, these artists were crucial agents in the emergence of global contemporary art.
As with other Chinese artists overseas, Cai Guo-Qiang’s career took off during the 1990s and early 2000s, right at the moment when the Western art world started to look beyond itself. Little by little, he developed a very personal artistic language that redefines Chinese ideas, symbols, and traditional materials in a new world order marked by globalization. Cai Guo-Qiang participated in many of the exhibitions that contributed to shaping global contemporary art: Encountering the Others (1992); the 45th Venice Biennale (1993); Inside Out: New Chinese Art (1997); and the 48th Venice Biennale (1999), where he recreated the monumental Chinese social realist work Rent Collection Courtyard, which earned him the Golden Lion Award. By examining the different stages of Cai Guo-Qiang’s artistic path as well as the transnational dimensions of his creations, this paper aims to offer a comprehensive survey of the construction of the discourse of global contemporary art.

Keywords: Cai Guo-Qiang, Chinese artists overseas, emergence global art, transnational art

Procedia PDF Downloads 284
496 An Evolutionary Approach for QAOA for Max-Cut

Authors: Francesca Schiavello

Abstract:

This work aims to create a hybrid algorithm, combining the Quantum Approximate Optimization Algorithm (QAOA) with an Evolutionary Algorithm (EA) in place of traditional gradient-based optimization processes. QAOAs were first introduced in 2014, when the algorithm performed better than the then best-known classical algorithm for Max-Cut on certain graphs. Whilst classical algorithms have since improved and returned to being faster and more efficient, this was a huge milestone for quantum computing, and the original work is often used as a benchmark and a foundation for exploring QAOA variants. This, alongside other famous algorithms like Grover's or Shor's, highlights to the world the potential that quantum computing holds. It also presents the prospect of a real quantum advantage: if the hardware continues to improve, this could constitute a revolutionary era. Given that the hardware is not there yet, many scientists are working on the software side in the hope of future progress. Some of the major limitations holding back quantum computing are the quality of qubits and the noise they introduce into solutions, the barren plateaus that effectively hinder the optimization search in the latent space, and the limited number of qubits, which restricts the scale of problems that can be solved. These three issues are intertwined and are part of the motivation for using EAs in this work. Firstly, EAs are not based on gradient or linear optimization methods for the search in the latent space, and because of their freedom from gradients, they should suffer less from barren plateaus. Secondly, given that this algorithm performs a search in the solution space through a population of solutions, it can be parallelized to speed up the search and optimization.
The evaluation of the cost function, as in many other algorithms, is notoriously slow, and the ability to parallelize it can drastically improve the competitiveness of QAOAs with respect to purely classical algorithms. Thirdly, because of the nature and structure of EAs, solutions can be carried forward in time, making them more robust to noise and uncertainty. Preliminary results show that the EA attached to QAOA can perform on par with the traditional QAOA using a COBYLA optimizer, a linear-approximation-based method, and in some instances it can even produce a better Max-Cut. Whilst the final objective of the work is to create an algorithm that consistently beats the original QAOA, or its variants, through either speedups or solution quality, this initial result is promising and shows the potential of EAs in this field. Further tests need to be performed on an array of different graphs, with the parallelization aspect of the work commencing in October 2023 and tests on real hardware scheduled for early 2024.
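The gradient-free outer loop described above can be sketched as a simple (mu + lambda) evolution strategy. Since real QAOA circuit evaluation is outside the scope of a short example, the cost function below is a smooth classical surrogate over the two QAOA angles (gamma, beta) standing in for the negated Max-Cut expectation; the landscape, population size, and mutation scale are all illustrative assumptions, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def surrogate_cost(params):
    """Classical stand-in for the (negated) QAOA Max-Cut expectation:
    a smooth multimodal landscape over the (gamma, beta) angles."""
    gamma, beta = params
    return -(np.sin(2 * gamma) * np.sin(2 * beta) + 0.3 * np.cos(gamma - beta))

def evolve(cost, dim=2, pop_size=20, generations=60, sigma=0.3):
    """(mu + lambda) evolution strategy: keep the best half of the
    population, refill with Gaussian-mutated copies. No gradients are
    used, so barren plateaus do not block the search, and each
    generation's cost evaluations could run in parallel."""
    pop = rng.uniform(0, np.pi, size=(pop_size, dim))
    for _ in range(generations):
        fitness = np.array([cost(ind) for ind in pop])
        elite = pop[np.argsort(fitness)[: pop_size // 2]]   # survivors
        children = elite + rng.normal(scale=sigma, size=elite.shape)
        pop = np.vstack([elite, children])                  # mu + lambda
    fitness = np.array([cost(ind) for ind in pop])
    best = pop[np.argmin(fitness)]
    return best, cost(best)

best_params, best_cost = evolve(surrogate_cost)
```

Because the elite is always retained, the best cost is monotonically non-increasing across generations, which is one source of the robustness to noise mentioned above.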

Keywords: evolutionary algorithm, max cut, parallel simulation, quantum optimization

Procedia PDF Downloads 60
495 Radiation Induced DNA Damage and Its Modification by Herbal Preparation of Hippophae rhamnoides L. (SBL-1): An in vitro and in vivo Study in Mice

Authors: Anuranjani Kumar, Madhu Bala

Abstract:

Ionising radiation exposure induces generation of free radicals and oxidative DNA damage. SBL-1, a radioprotective extract prepared from the leaves of Hippophae rhamnoides L. (common name: Seabuckthorn), showed > 90% survival in mice treated with a lethal dose (10 Gy) of ⁶⁰Co gamma irradiation. In this study, the early effects of pre-treatment with or without SBL-1 in peripheral blood mononuclear cells (PBMCs) were investigated by cell viability assays (trypan blue and MTT). Quantitative in vitro Hoechst/PI staining was performed to assess apoptosis/necrosis in PBMCs irradiated at 2 Gy, with or without pretreatment with SBL-1 (at different concentrations), up to 24 and 48 h. The comet assay was performed in vivo to detect DNA strand breaks and their repair in peripheral blood lymphocytes at the lethal dose (10 Gy). For this study, male mice (wt. 28 ± 2 g) were administered the radioprotective dose (30 mg/kg body weight) of SBL-1, 30 min prior to irradiation. Animals were sacrificed at 24 h and 48 h. Blood was drawn by cardiac puncture, and blood lymphocytes were separated using a Histopaque column. Both neutral and alkaline comet assays were performed using a standardized technique. In irradiated animals, the alkaline comet assay revealed single strand breaks (SSBs), showing a significant (p < 0.05) increase in percent DNA in tail and Olive tail moment (OTM) at 24 h, while at 48 h the percent DNA in tail increased further significantly (p < 0.02). Double strand breaks (DSBs) increased significantly (p < 0.01) at 48 h in the neutral assay, in comparison to untreated controls. Animals pre-treated with SBL-1 before irradiation showed significantly (p < 0.05) fewer DSBs at 48 h in comparison to the irradiated group. The group treated with SBL-1 alone showed no toxicity.
The antioxidant potential of SBL-1 was also investigated by in vitro biochemical assays such as DPPH (p < 0.05), ABTS, reducing ability (p < 0.09), hydroxyl radical scavenging (p < 0.05), ferric reducing antioxidant power (FRAP), superoxide radical scavenging activity (p < 0.05), and hydrogen peroxide scavenging activity (p < 0.05). SBL-1 showed strong free radical scavenging power, which plays an important role in studies of radiation-induced injuries. SBL-1-treated PBMCs showed significant (p < 0.02) viability in the trypan blue assay at 24 h incubation.
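The Olive tail moment reported above is commonly computed as the fraction of DNA in the comet tail multiplied by the distance between the head and tail intensity centroids. The sketch below uses that standard definition with purely illustrative values, not the study's measurements.

```python
def olive_tail_moment(percent_tail_dna, tail_centroid, head_centroid):
    """Olive Tail Moment as commonly defined for comet-assay scoring:
    (fraction of DNA in the tail) x (centroid separation, in image units)."""
    return (percent_tail_dna / 100.0) * (tail_centroid - head_centroid)

# Illustrative comet: 35% of the DNA in the tail, centroids 40 units apart.
otm = olive_tail_moment(35.0, tail_centroid=50.0, head_centroid=10.0)
```

A higher percent DNA in tail, a larger centroid separation, or both increase the OTM, which is why it rises with radiation dose and falls with radioprotection.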

Keywords: radiation, SBL-1, SSBs, DSBs, FRAP, PBMCs

Procedia PDF Downloads 154
494 Improving Screening and Treatment of Binge Eating Disorders in Pediatric Weight Management Clinic through a Quality Improvement Framework

Authors: Cristina Fernandez, Felix Amparano, John Tumberger, Stephani Stancil, Sarah Hampl, Brooke Sweeney, Amy R. Beck, Helena H Laroche, Jared Tucker, Eileen Chaves, Sara Gould, Matthew Lindquist, Lora Edwards, Renee Arensberg, Meredith Dreyer, Jazmine Cedeno, Alleen Cummins, Jennifer Lisondra, Katie Cox, Kelsey Dean, Rachel Perera, Nicholas A. Clark

Abstract:

Background: Adolescents with obesity are at higher risk of disordered eating than the general population. Detection of eating disorders (ED) is difficult. Screening questionnaires may aid in early detection of ED. Our team’s prior efforts focused on increasing ED screening rates to ≥90% using a validated 10-question adolescent binge eating disorder screening questionnaire (ADO-BED). This aim was achieved. We then aimed to improve treatment plan initiation for patients ≥12 years of age who screen positive for BED within our weight management clinic (WMC) from 33% to 70% within 12 months. Methods: Our WMC is within a tertiary-care, free-standing children’s hospital. A3, an improvement framework, was used. A multidisciplinary team (physicians, nurses, registered dietitians, psychologists, and exercise physiologists) was created. The outcome measure was documentation of treatment plan initiation for those who screened positive (goal 70%). The process measure was the ADO-BED screening rate of WMC patients (goal ≥90%). Plan-Do-Study-Act (PDSA) cycle 1 included provider education on current literature and treatment plan initiation based upon ADO-BED responses. PDSA 2 involved increasing documentation of treatment plans and retraining providers on the process. Pre-defined treatment plans were: 1) repeat screen in 3-6 months, 2) resources provided only, or 3) comprehensive multidisciplinary weight management team evaluation. Run charts monitored impact over time. Results: Within 9 months, 166 patients were seen in the WMC. The process measure showed sustained performance above goal (mean 98%). The outcome measure showed special cause improvement from a mean of 33% to 100% (n=31). Of the treatment plans provided, 45% were Plan 1, 4% Plan 2, and 46% Plan 3. Conclusion: Through a multidisciplinary improvement team approach, we maintained sustained ADO-BED screening performance and, ahead of our 12-month timeline, achieved our project aim. Our efforts may serve as a model for other multidisciplinary WMCs.
Next steps may include expanding project scope to other WM programs.

Keywords: obesity, pediatrics, clinic, eating disorder

Procedia PDF Downloads 63
493 The Risk of Bleeding in Knee or Shoulder Injections in Patients on Warfarin Treatment

Authors: Muhammad Yasir Tarar

Abstract:

Background: Intraarticular steroid injections are an effective option in alleviating the symptoms of conditions like osteoarthritis, rheumatoid arthritis, crystal arthropathy, and rotator cuff tendinopathy. Most of these injections are conducted in the elderly who are on polypharmacy, including anticoagulants at times. Up to 6% of patients aged 80-84 years have been reported to be taking Warfarin. The literature availability on safety quotient for patients undergoing intraarticular injections on Warfarin is scarce. It has remained debatable over the years which approach is safe for these patients. Continuing warfarin has a theoretical bleeding risk, and stopping it can lead to even severe life-threatening thromboembolic events in high-risk patients. Objectives: To evaluate the risk of bleeding complications in patients on warfarin undergoing intraarticular injections or arthrocentesis. Study Design & Methods: A literature search of MEDLINE (1946 to present), EMBASE (1974 to present), and Cochrane CENTRAL (1988 to present) databases were conducted using any combination of the keywords, Injection, Knee, Shoulder, Joint, Intraarticular, arthrocentesis, Warfarin, and Anticoagulation in November 2020 for articles published in any language with no publication year limit. The study inclusion criteria included reporting on the rate of bleeding complications following injection of the knee or shoulder in patients on warfarin treatment. Randomized control trials and prospective and retrospective study designs were included. An electronic standardized Performa for data extraction was made. The Preferred Reporting Items for Systematic Review and Meta-Analyses (PRISMA) the methodology was used. The articles were appraised using the methodological index for nonrandomized studies. The Cochrane Risk of Bias Tool used to assess the risk of bias in included RCTs and the MINORS tool for assessment of bias in observational studies. 
Results: The database search yielded a total of 852 articles. After shortlisting against the inclusion criteria, 7 articles were deemed suitable for inclusion. The pooled sample comprised 1,033 joints, of which 820 were specified as knee or shoulder joints. Only 6 joints had bleeding complications: 5 early bleeds at the time of injection or aspiration, and one late bleeding complication in a patient with an INR of 5. Additionally, 2 patients complained of bruising, 3 of pain, and 1 was managed for infection. Conclusions: The results of the meta-analysis show that it is relatively safe to perform intraarticular injections in patients on warfarin, regardless of the INR range.
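As a quick sanity check on the pooled figures, the crude bleeding rate can be recomputed from the reported counts. The sketch below does so and adds an approximate 95% confidence interval; the Wilson interval is our illustration and is not reported in the abstract.

```python
import math

def wilson_ci(events, n, z=1.96):
    """Wilson score confidence interval for a binomial proportion."""
    p = events / n
    denom = 1 + z**2 / n
    centre = (p + z**2 / (2 * n)) / denom
    half = (z / denom) * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2))
    return centre - half, centre + half

joints = 1033   # pooled joints across the 7 included studies
bleeds = 6      # bleeding complications reported
rate = bleeds / joints
lo, hi = wilson_ci(bleeds, joints)
print(f"pooled bleeding rate: {rate:.2%} (95% CI {lo:.2%}-{hi:.2%})")
```

The resulting rate is well under 1%, which is the arithmetic behind the abstract's "relatively safe" conclusion.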

Keywords: arthrocentesis, warfarin, bleeding, injection

Procedia PDF Downloads 77
492 Noninvasive Technique for Measurement of Heartbeat in Zebrafish Embryos Exposed to Electromagnetic Fields at 27 GHz

Authors: Sara Ignoto, Elena M. Scalisi, Carmen Sica, Martina Contino, Greta Ferruggia, Antonio Salvaggio, Santi C. Pavone, Gino Sorbello, Loreto Di Donato, Roberta Pecoraro, Maria V. Brundo

Abstract:

The new fifth-generation technology (5G), which should enable high data-rate connections (1 Gbps) and latency lower than current networks (<1 ms), works on several bands of the radio spectrum (700 MHz, 3.6-3.8 GHz, and 26.5-27.5 GHz), thus also exploiting higher frequencies than previous mobile radio generations (1G-4G). Higher-frequency waves, however, have a lower capacity to propagate in free space, so guaranteeing dense territorial coverage for high-reliability applications will require installing a large number of repeaters. Following the introduction of this new technology, concern has grown in recent years about possible harmful effects on human health, and several studies have been published using various animal models. This study aimed to observe possible short-term effects induced by 5G millimeter waves on the heartbeat of early life stages of Danio rerio using DanioScope software (Noldus). DanioScope is a complete toolbox for measurements on zebrafish embryos and larvae. The effect of substances on the developing zebrafish embryo can be measured through a range of parameters: earliest activity of the embryo’s tail, activity of the developing heart, speed of blood flowing through the vein, and lengths and diameters of body parts. Activity measurements, cardiovascular data, blood flow data, and morphometric parameters can be combined in one single tool, and the software provides the processed data in both numerical and graphical form. The experiments were performed at 27 GHz using a non-commercial high-gain pyramidal horn antenna. Following OECD guidelines, exposure to 5G millimeter waves was tested with the fish embryo toxicity test within 96 hours post fertilization. Observations were recorded every 24 h until the end of the short-term test (96 h).
The results showed an increased heartbeat rate in exposed embryos at 48 hpf compared with the control group, but this increase was no longer present at 72-96 hpf. Literature data on this topic are currently scarce, so these results could be useful for approaching new studies and for evaluating the potential cardiotoxic effects of mobile radiofrequency.
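Heartbeat measurement of this kind can be thought of as peak counting on a video-derived intensity signal from the heart region. The sketch below illustrates that idea on a synthetic signal; it is not DanioScope's actual algorithm, and the frame rate, signal shape, and threshold are assumptions of ours.

```python
import math

def count_peaks(signal, threshold):
    """Count local maxima above a threshold (one per heartbeat)."""
    peaks = 0
    for i in range(1, len(signal) - 1):
        if signal[i] > threshold and signal[i] > signal[i-1] and signal[i] >= signal[i+1]:
            peaks += 1
    return peaks

fps = 30          # assumed video frame rate
duration_s = 10   # assumed recording length
# synthetic cardiac signal at ~2.4 beats/s (a plausible embryonic rate)
signal = [math.sin(2 * math.pi * 2.4 * t / fps) for t in range(fps * duration_s)]
beats = count_peaks(signal, threshold=0.0)
bpm = beats * 60 / duration_s
print(f"estimated heart rate: {bpm:.0f} bpm")
```

An exposure effect like the 48 hpf increase reported above would show up as a higher bpm estimate for the exposed group's signals.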

Keywords: Danio rerio, DanioScope, cardiotoxicity, millimeter waves

Procedia PDF Downloads 163
491 Annexing the Strength of Information and Communication Technology (ICT) for Real-time TB Reporting Using TB Situation Room (TSR) in Nigeria: Kano State Experience

Authors: Ibrahim Umar, Ashiru Rajab, Sumayya Chindo, Emmanuel Olashore

Abstract:

INTRODUCTION: Kano is the most populous state in Nigeria and one of the two states with the highest TB burden in the country. The state notifies an average of more than 8,000 TB cases quarterly and had the highest yearly notification of all Nigerian states from 2020 to 2022; its contribution to national TB notification varied between 9% and 10% quarterly from the first quarter of 2022 to the second quarter of 2023. The Kano State TB Situation Room is an innovative platform for timely data collection, collation, and analysis for informed decision-making in the health system. During the 2023 second National TB Testing Week (NTBTW), the Kano TB program aimed at early TB detection, prevention, and treatment, and the state TB Situation Room provided an avenue for coordination and surveillance through real-time data reporting, review, analysis, and use. OBJECTIVES: To assess the role of an innovative information and communication technology platform for real-time TB reporting during the second National TB Testing Week in Nigeria, 2023, and to showcase the NTBTW data cascade analysis using the TSR as an innovative ICT platform. METHODOLOGY: The state TB program deployed a real-time virtual dashboard for NTBTW reporting, analysis, and feedback. A data room team was set up to receive real-time data via a Google link. The data received were analyzed using the Power BI analytic tool with a statistical significance level (alpha) of 0.05. RESULTS: At the end of the week-long activity, using the real-time dashboard with onsite mentorship of the field workers, the state TB program screened a total of 52,054 people for TB out of 72,112 eligible individuals (72% screening rate). A total of 9,910 presumptive TB clients were identified and evaluated, leading to the diagnosis of 445 TB patients (5% yield from presumptives) and the placement of 435 of them on treatment (98% enrolment).
CONCLUSION: The TB Situation Room (TSR) has been a great asset to the Kano State TB Control Program in meeting the growing demand for timely data reporting in TB and other global health responses. The use of real-time surveillance data during the 2023 NTBTW has in no small measure improved the TB response and feedback in Kano State. Scaling up this intervention to other disease areas, states, and nations is a positive step towards global TB eradication.
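The cascade indicators quoted above follow directly from the raw counts. A minimal sketch recomputing them (the function and field names are ours, for illustration; the abstract's 5% yield is a rounded figure):

```python
# Recompute the NTBTW screening cascade from the reported raw counts.
def cascade(eligible, screened, presumptive, diagnosed, enrolled):
    return {
        "screening_rate": screened / eligible,            # screened / eligible
        "yield_from_presumptives": diagnosed / presumptive,
        "enrolment_rate": enrolled / diagnosed,           # started treatment
    }

kano = cascade(eligible=72_112, screened=52_054,
               presumptive=9_910, diagnosed=445, enrolled=435)
for name, value in kano.items():
    print(f"{name}: {value:.1%}")
```

Worked into a dashboard, ratios like these are what a situation room recomputes as each day's field data arrives.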

Keywords: tuberculosis (tb), national tb testing week (ntbtw), tb situation room (tsr), information communication technology (ict)

Procedia PDF Downloads 71
490 Colored Image Classification Using Quantum Convolutional Neural Networks Approach

Authors: Farina Riaz, Shahab Abdulla, Srinjoy Ganguly, Hajime Suzuki, Ravinesh C. Deo, Susan Hopkins

Abstract:

Recently, quantum machine learning has received significant attention. Numerous quantum machine learning (QML) models have been created and are being tested on various types of data, including text and images. Images are exceedingly complex data components that demand more processing power, and classical machine learning, despite its maturity, still has difficulties with big data applications. Furthermore, quantum technology has revolutionized how machine learning is thought of, by employing quantum features to address optimization issues. Since current quantum hardware is extremely noisy, it is not practicable to run machine learning algorithms on it without risking inaccurate results. To discover the advantages of quantum versus classical approaches, this research concentrates on colored image data. Deep learning classification models are currently being created on quantum platforms, but they are still at a very early stage, and recent research has mostly used black-and-white benchmark image datasets like MNIST and Fashion-MNIST. When MNIST and CIFAR-10 were compared for binary classification, MNIST yielded higher accuracy than the colored CIFAR-10. This research evaluates the performance of a QML algorithm on the colored benchmark dataset CIFAR-10 to advance QML's real-time applicability. Deep learning classification models such as the Quantum Convolutional Neural Network (QCNN) have not yet been systematically compared on colored images to determine how much better they are than classical approaches; only a few models, such as quantum variational circuits, take colored images. The methodology adopted in this research is a hybrid approach using PennyLane as a simulator. To process the 10 classes of CIFAR-10, the image data were converted to greyscale, and 28 × 28-pixel images comprising 10,000 test and 50,000 training images were used.
The objective of this work is to determine how much the quantum approach can outperform a classical approach on a comprehensive dataset of color images. After pre-processing the 50,000 images on a classical computer, the hybrid QCNN model encoded the images into a quantum simulator for feature extraction using quantum gate rotations; the measurements were then carried out on the classical computer after the rotations were applied. According to the results, the QCNN approach is ~12% more effective than traditional classical CNN approaches, and applying data augmentation may increase the accuracy further. This study has demonstrated that quantum machine and deep learning models can be superior to classical machine learning approaches in terms of processing speed and accuracy when used for classification on colored classes.
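The rotation-encoding feature-extraction step described above can be sketched classically: with no entangling gates, the Pauli-Z expectation after an RY(θ) rotation on |0⟩ is exactly cos θ, so the encoding of a pixel patch can be computed directly. The toy "quanvolutional" filter below illustrates this; it is a simplified sketch under our own assumptions, not the authors' PennyLane circuit.

```python
# Classical simulation of angle encoding in a quanvolutional layer:
# each pixel in a 2x2 patch drives an RY rotation on its own qubit,
# and <Z> after RY(pi * pixel) on |0> is cos(pi * pixel).
import math

def quanv_patch(patch):
    """Map a 2x2 patch of pixels in [0, 1] to 4 expectation values."""
    return [math.cos(math.pi * p) for p in patch]

def quanv_layer(image, size):
    """Slide a non-overlapping 2x2 'quantum filter' over a square image."""
    features = []
    for r in range(0, size, 2):
        for c in range(0, size, 2):
            patch = [image[r][c], image[r][c+1],
                     image[r+1][c], image[r+1][c+1]]
            features.append(quanv_patch(patch))
    return features

# 4x4 toy greyscale image (values in [0, 1])
img = [[0.0, 1.0, 0.5, 0.5],
       [1.0, 0.0, 0.5, 0.5],
       [0.2, 0.2, 0.8, 0.8],
       [0.2, 0.2, 0.8, 0.8]]
feats = quanv_layer(img, 4)
print(feats[0])   # patch [0, 1, 1, 0] -> [1.0, -1.0, -1.0, 1.0]
```

A real QCNN would add entangling gates between the rotations, making the feature map classically hard to compute; that is where the simulator (or hardware) enters the hybrid pipeline.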

Keywords: CIFAR-10, quantum convolutional neural networks, quantum deep learning, quantum machine learning

Procedia PDF Downloads 129
489 In Support of Sustainable Water Resources Development in the Lower Mekong River Basin: Development of Guidelines for Transboundary Environmental Impact Assessment

Authors: Kongmeng Ly

Abstract:

The management of transboundary river basins across developing countries, such as the Lower Mekong River Basin (LMB), is frequently challenging given the development and conservation divergences of the basin countries. Driven by the need to sustain economic performance and reduce poverty, the LMB countries (Cambodia, Lao PDR, Thailand, Viet Nam) are embarking on significant land-use changes in the form of hydropower dams to fulfill their energy requirements. If not properly managed, this pathway could lead to irreversible changes to the ecosystem of the Mekong River. Given the uncertain trade-offs of hydropower development and operation, the LMB countries, with the technical support of the Mekong River Commission (MRC) Secretariat, embarked on the decade-long development of Technical Guidelines for Transboundary Environmental Impact Assessment. Through a series of workshops, seminars, national and regional consultations, and pilot studies, and following the recommendations generated through legal and institutional reviews undertaken over a two-decade period, the LMB countries jointly adopted the MRC Technical Guidelines for Transboundary Environmental Impact Assessment (TbEIA Guidelines). These guidelines were developed with particular regard to the experience gained from MRC-supported consultations and technical reviews of the Xayaburi Dam Project, the Don Sahong Hydropower Project, and the Pak Beng Hydropower Project, and from lessons learned from the Srepok River and Se San River case studies commissioned by the MRC with the generous support of development partners around the globe. As adopted, the TbEIA Guidelines are designed as a supporting mechanism to the national EIA legislation, processes, and systems of each Member Country.
In recognition of the already agreed mechanisms, the TbEIA Guidelines build on and supplement the agreements stipulated in the 1995 Agreement on the Cooperation for the Sustainable Development of the Mekong River Basin and its Procedural Rules, addressing potential transboundary environmental impacts of development projects and ensuring mutual benefits from the Mekong River and its resources. Since their adoption in 2022, the TbEIA Guidelines have already been voluntarily implemented by Lao PDR on its under-development Sekong A Downstream Hydropower Project, located on the Sekong River, a major tributary of the Mekong River. While this implementation is ongoing, with results expected in early 2024, it has thus far strengthened cooperation among the concerned Member Countries, with multiple successful open dialogues organized at national and regional levels. It is hoped that lessons learned from this application will lead to a wider application of the TbEIA Guidelines to future water resources development projects in the LMB.

Keywords: transboundary, EIA, lower mekong river basin, mekong river

Procedia PDF Downloads 37
488 A Conceptual Model of the Factors Affecting Saudi Citizens' Use of Social Media to Communicate with the Government

Authors: Reemiah Alotaibi, Muthu Ramachandran, Ah-Lian Kor, Amin Hosseinian-Far

Abstract:

In the past decade, developers of Web 2.0 technologies have shown increasing interest in the topic of e-government. Social media technology has grown rapidly because of its significant role in meeting essential social needs; its importance and power are derived from its capacity to support two-way communication. Governments are keen to engage with these platforms, hoping to benefit from the new forms of communication and interaction such technology offers, and greater public participation can be viewed as a chief indicator of effective government communication. Yet the level of public participation in government 2.0 is not satisfactory: in general, it is still at an early stage in most developing countries, including Saudi Arabia. Although Saudis are among the most active users of social media, the number of people who use social media to communicate with public institutions is not high. Furthermore, most governmental organisations do not use social media tools for two-way communication with the public; they use these platforms merely to disseminate information. Our study focuses on the factors affecting citizens’ adoption of social media in Saudi Arabia. Our research question is: what are the factors affecting Saudi citizens’ use of social media to communicate with the government? To answer this question, the research aims to validate the UTAUT model for examining social media tools from the citizen perspective. An amendment is proposed to fit the adoption of social media platforms as a communication channel with government, using a developed conceptual model that integrates constructs from the UTAUT model with external variables drawn from the literature review. The set of potential factors affecting citizens' decisions to adopt social media to communicate with their government has been identified as perceived encouragement, trust, and cultural influence.
The connections between the above-mentioned constructs form the basis for the research hypotheses and will be examined using a quantitative methodology. Data collection will be performed through a survey targeting Saudi citizens who are social media users, and the collected data will then be analysed using statistical methods. The outcomes of this research project are argued to have potential contributions to the fields of social media and e-government adoption, on both the theoretical and practical levels. It is believed that this project is the first of its type to attempt to identify the factors that affect citizens’ adoption of social media to communicate with the government. The importance of identifying these factors stems from their potential use in enhancing the government’s implementation of social media and in making more accurate decisions and strategies based on an understanding of the most important factors affecting citizens’ decisions.

Keywords: social media, adoption, citizen, UTAUT model

Procedia PDF Downloads 418
487 An Analytical Metric and Process for Critical Infrastructure Architecture System Availability Determination in Distributed Computing Environments under Infrastructure Attack

Authors: Vincent Andrew Cappellano

Abstract:

In the early phases of critical infrastructure system design, translating distributed computing requirements into an architecture carries risk, given the multitude of approaches (e.g., cloud, edge, fog). In many systems, a single requirement for system uptime/availability is used to encompass the system’s intended operations. However, architected systems may meet those availability requirements only during normal operations, and not during component failure or during outages caused by adversary attacks on critical infrastructure (e.g., physical, cyber). System designers lack a structured method to evaluate availability requirements against candidate system architectures through deep degradation scenarios (i.e., from normal operations all the way down to significant damage to communications or physical nodes). This increases the risk of poor selection of a candidate architecture, due to the absence of insight into true performance for systems that must operate as pieces of critical infrastructure. This research effort proposes a process for analyzing critical infrastructure system availability requirements against a candidate set of system architectures, producing a metric that assesses these architectures over a spectrum of degradations to aid in selecting appropriately resilient architectures. To accomplish this, a set of simulation and evaluation efforts is undertaken that processes, in an automated way, a set of sample requirements into a set of potential architectures in which system functions and capabilities are distributed across nodes. Nodes and links have specific characteristics and, based on the sampled requirements, contribute to the overall system functionality, such that as they are impacted or degraded, the resulting functional availability of the system can be determined.
A reinforcement learning agent structurally impacts the nodes, links, and characteristics (e.g., bandwidth, latency) of a given architecture to provide an assessment of system functional uptime/availability under these scenarios. By varying the intensity of the attack and related aspects, we can create a structured method of evaluating the performance of candidate architectures against each other, yielding a metric that rates each architecture's resilience to these attack types and strategies. Through multiple simulation iterations, sufficient data will exist to compare this availability metric, and the resulting architectural recommendation, against the baseline requirements and against existing multi-factor computing architecture selection processes. It is intended that these additional data will improve the matching of resilient critical infrastructure system requirements to the correct architectures and implementations, supporting improved operation during times of system degradation due to failures and infrastructure attacks.
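The functional-availability idea can be sketched as graph reachability: functions are hosted on nodes of the architecture graph, and availability is the fraction of functions still reachable from a control node as an attacker removes nodes. The topology, node names, and control-node convention below are illustrative assumptions of ours, not the authors' simulation model.

```python
from collections import deque

def reachable(links, removed, start):
    """BFS over the surviving nodes and links."""
    if start in removed:
        return set()
    seen, queue = {start}, deque([start])
    while queue:
        node = queue.popleft()
        for nxt in links.get(node, []):
            if nxt not in removed and nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return seen

def availability(links, functions, removed, control="control"):
    """Fraction of functions whose host node is still reachable."""
    up = reachable(links, removed, control)
    return sum(1 for host in functions.values() if host in up) / len(functions)

# toy architecture: a control node and two edge nodes behind a gateway
links = {"control": ["gw"], "gw": ["control", "edge1", "edge2"],
         "edge1": ["gw"], "edge2": ["gw"]}
functions = {"sense": "edge1", "actuate": "edge2", "aggregate": "gw"}

print(availability(links, functions, removed=set()))      # no attack
print(availability(links, functions, removed={"edge1"}))  # one edge node lost
print(availability(links, functions, removed={"gw"}))     # gateway lost
```

Sweeping `removed` over attacker-chosen node sets of increasing size traces out the degradation spectrum that the proposed metric would summarize for each candidate architecture.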

Keywords: architecture, resiliency, availability, cyber-attack

Procedia PDF Downloads 108
486 Transgenerational Impact of Intrauterine Hyperglycaemia to F2 Offspring without Pre-Diabetic Exposure on F1 Male Offspring

Authors: Jun Ren, Zhen-Hua Ming, He-Feng Huang, Jian-Zhong Sheng

Abstract:

Adverse intrauterine stimuli during critical or sensitive periods in early life may lead to health risks not only in later life but also in further generations. Intrauterine hyperglycaemia, a major feature of gestational diabetes mellitus (GDM), is a typical adverse environment for the development of both the F1 fetus and F1 gamete cells. However, there is scarce information on phenotypic differences in metabolic memory between somatic cells and germ cells exposed to intrauterine hyperglycaemia, and the direct transmission effect of intrauterine hyperglycaemia per se has not been assessed either. In this study, we built a GDM mouse model and selected male GDM offspring without a pre-diabetic phenotype as founders, to exclude postnatal diabetic influence on the gametes, thereby investigating the direct transmission effect of intrauterine hyperglycaemia exposure on F2 offspring; we further compared the metabolic differences between affected F1-GDM male offspring and F2 offspring. A GDM mouse model of intrauterine hyperglycaemia was established by intraperitoneal injection of streptozotocin after pregnancy. Pups of GDM mothers were fostered by normal control mothers, and all mice were fed standard food. Male GDM offspring without a metabolic dysfunction phenotype were crossed with normal female mice to obtain F2 offspring. Body weight, glucose tolerance test, insulin tolerance test, and homeostasis model assessment of insulin resistance (HOMA-IR) index were measured in both generations at 8 weeks of age. Some F1-GDM male mice showed impaired glucose tolerance (p < 0.001), but none showed impaired insulin sensitivity, and the body weight of F1-GDM mice did not differ significantly from controls. Some F2-GDM offspring exhibited impaired glucose tolerance (p < 0.001), and all F2-GDM offspring exhibited a higher HOMA-IR index (p < 0.01 for normal-glucose-tolerance individuals vs. control; p < 0.05 for glucose-intolerant individuals vs. control).
All F2-GDM offspring also exhibited a higher ITT curve than controls (p < 0.001 for normal-glucose-tolerance individuals; p < 0.05 for glucose-intolerant individuals) and higher body weight than control mice (p < 0.001 for both normal-glucose-tolerance and glucose-intolerant individuals). Thus, while glucose intolerance is the only phenotype that F1-GDM male mice may exhibit, the F2 male offspring of phenotypically healthy F1-GDM fathers showed insulin resistance, increased body weight, and/or impaired glucose tolerance. These findings imply that intrauterine hyperglycaemia exposure affects germ cells and somatic cells differently, so that F1 and F2 offspring demonstrate distinct metabolic dysfunction phenotypes, and that intrauterine hyperglycaemia exposure per se has a strong influence on the F2 generation, independent of postnatal metabolic dysfunction exposure.
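The HOMA-IR index used above is conventionally computed as fasting insulin (µU/mL) × fasting glucose (mmol/L) / 22.5. The abstract reports the index comparison but not the mice's raw values, so the readings in the sketch below are hypothetical, for illustration only.

```python
def homa_ir(fasting_glucose_mmol_l, fasting_insulin_uU_ml):
    """Standard HOMA-IR formula: glucose [mmol/L] x insulin [uU/mL] / 22.5."""
    return fasting_glucose_mmol_l * fasting_insulin_uU_ml / 22.5

# hypothetical fasting readings: control vs. an F2-GDM animal
control = homa_ir(5.0, 10.0)   # lower index
f2_gdm = homa_ir(7.0, 20.0)    # elevated index, consistent with insulin resistance
print(f"control HOMA-IR: {control:.1f}, F2-GDM HOMA-IR: {f2_gdm:.1f}")
```

A higher index, as reported for all F2-GDM offspring, indicates reduced insulin sensitivity.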

Keywords: inheritance, insulin resistance, intrauterine hyperglycaemia, offspring

Procedia PDF Downloads 238