Search results for: health system reconstruction
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 25299

1569 Cross-Validation of the Data Obtained for ω-6 Linoleic and ω-3 α-Linolenic Acids Concentration of Hemp Oil Using Jackknife and Bootstrap Resampling

Authors: Vibha Devi, Shabina Khanam

Abstract:

Hemp (Cannabis sativa) oil is rich in the essential fatty acids ω-6 linoleic and ω-3 α-linolenic acid in a ratio of 3:1, a rare and highly desirable ratio that enhances the quality of hemp oil. These components benefit cell development and body growth, strengthen the immune system, exert anti-inflammatory action, lower the risk of heart disease owing to their anti-clotting property, and serve as a remedy for arthritis and various disorders. The present study applies supercritical fluid extraction (SFE) to hemp seed over a range of parameter conditions: temperature (40 - 80 °C), pressure (200 - 350 bar), flow rate (5 - 15 g/min), particle size (0.430 - 1.015 mm), and amount of co-solvent (0 - 10 % of solvent flow rate), arranged through a central composite design (CCD). The CCD suggested 32 sets of experiments, which were carried out. Since the SFE process involves a large number of variables, the present study recommends resampling techniques for cross-validation of the obtained data. Cross-validation refits the model on each resample to obtain information on error, variability, deviation, etc. Bootstrap and jackknife are the most popular resampling techniques; they generate a large number of datasets by resampling from the original dataset and analyze them to check the validity of the obtained data. Jackknife resampling eliminates one observation from the original sample of size N without replacement; here, each jackknife sample has size 31 (one observation removed), and the procedure is repeated 32 times. Bootstrap is the most frequently used statistical approach for estimating the sampling distribution of an estimator by resampling with replacement from the original sample; here, each bootstrap sample has size 32, and the procedure was repeated 100 times. The estimands considered for both resampling techniques are the mean, standard deviation, variation coefficient, and standard error of the mean.
For ω-6 linoleic acid concentration, the mean value was approximately 58.5 for both resampling methods; this is the average (central value) of the sample means over all resamples. Similarly, for ω-3 α-linolenic acid concentration, the mean was 22.5 for both methods. Variance measures how far the data spread from the mean; a greater variance indicates a wider range of output values: 18 for ω-6 linoleic acid (ranging from 48.85 to 63.66 %) and 6 for ω-3 α-linolenic acid (ranging from 16.71 to 26.2 %). Further, the low standard deviation (approximately 1 %), low standard error of the mean (< 0.8), and low variation coefficient (< 0.2) reflect the accuracy of the sample for prediction. All estimates of the variation coefficient, standard deviation, and standard error of the mean fall within the 95 % confidence interval.
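The two resampling schemes described above can be sketched in a few lines. This is a minimal illustration, not the authors' code: the data are a synthetic stand-in (mean 58.5, SD 1) for the 32 CCD measurements, which are not reproduced in the abstract.

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic stand-in for the 32 CCD measurements of omega-6 linoleic
# acid concentration (%); the experimental values are not reproduced here.
data = rng.normal(58.5, 1.0, size=32)

# Jackknife: drop one observation (sample size 31), repeated 32 times.
jack_means = np.array([np.delete(data, i).mean() for i in range(data.size)])

# Bootstrap: resample 32 values with replacement, repeated 100 times.
boot_means = np.array([rng.choice(data, size=data.size, replace=True).mean()
                       for _ in range(100)])

# Estimands: mean and standard error of the mean from each scheme.
print("jackknife mean:", jack_means.mean())
print("bootstrap mean:", boot_means.mean())
print("bootstrap SEM :", boot_means.std(ddof=1))
```

The same loop applies unchanged to the other estimands (standard deviation, variation coefficient) by swapping the statistic computed on each resample.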

Keywords: resampling, supercritical fluid extraction, hemp oil, cross-validation

Procedia PDF Downloads 141
1568 A Report of a 5-Month-Old Baby with Balanced Chromosomal Rearrangements along with Phenotypic Abnormalities

Authors: Mohit Kumar, Beklashwar Salona, Shiv Murti, Mukesh Singh

Abstract:

We report here the case of a five-month-old male baby, born as the second child of non-consanguineous parents with no notable history of genetic abnormality, who was referred to our cytogenetic laboratory for chromosomal analysis. Dysmorphic facial features, including a mongoloid face, cleft palate, and simian crease, as well as developmental delay, were observed. We present this case with a unique balanced autosomal translocation, t(3;10)(p21;p13). The risk of phenotypic abnormalities arising from a de novo balanced translocation has been estimated at 7%. The association of a balanced chromosomal rearrangement with Down syndrome features such as multiple congenital anomalies, facial dysmorphism, and congenital heart anomalies is very rare in a 5-month-old male child. Trisomy 21 is a common chromosomal abnormality associated with birth defects, whereas balanced translocations are frequently observed in patients with secondary infertility or recurrent spontaneous abortion (RSA). For chromosomal analysis, two ml of heparinized peripheral blood was cultured for 72 hours in RPMI-1640 supplemented with 20% fetal bovine serum, phytohemagglutinin (PHA), and antibiotics. A total of 30 metaphase images were captured using an Olympus BX51 microscope and analyzed with Bio-view karyotyping software through GTG-banding (G bands by trypsin and Giemsa) according to the International System for Human Cytogenetic Nomenclature (2016). The results showed a balanced translocation between the short arm of chromosome 3 and the short arm of chromosome 10; the karyotype of the child was 46,XY,t(3;10)(p21;p13). Chromosomal abnormalities are among the major causes of birth defects in newborns, and balanced translocations are frequently observed in patients with secondary infertility or recurrent spontaneous abortion. The index case presented with dysmorphic facial features and carried the balanced translocation 46,XY,t(3;10)(p21;p13).
This translocation, with breakpoints at (p21;p13), has not previously been reported in a child with facial dysmorphism. To the best of our knowledge, this is the first report of the novel balanced translocation t(3;10) with these breakpoints in a child with dysmorphic features. We found a balanced chromosomal translocation, rather than a trisomy or unbalanced aberration, together with phenotypic abnormalities. We therefore suggest that such novel balanced translocations with abnormal phenotypes be reported, to give pathologists, pediatricians, and gynecologists better insight into the intricacies of chromosomal abnormalities and their associated phenotypic features. We hypothesize that the dysmorphic features seen in this case may result from a change in the pattern of genes located at the breakpoint areas of the balanced translocation, or from deletion or mutation of genes located on the p-arms of chromosomes 3 and 10.

Keywords: balanced translocation, karyotyping, phenotypic abnormalities, facial dysmorphism

Procedia PDF Downloads 209
1567 Informed Urban Design: Minimizing Urban Heat Island Intensity via Stochastic Optimization

Authors: Luis Guilherme Resende Santos, Ido Nevat, Leslie Norford

Abstract:

The Urban Heat Island (UHI) effect is characterized by increased air temperatures in urban areas compared to the undeveloped rural surroundings. With urbanization and densification, UHI intensity increases, with negative impacts on livability, health, and the economy. To reduce these effects, design factors must be taken into consideration when planning future developments. Given design constraints such as population size and the area available for development, non-trivial decisions about building dimensions and spatial distribution are required. We develop a framework for optimizing urban design that jointly minimizes UHI intensity and building energy consumption. First, the design constraints are defined according to spatial and population limits, establishing realistic boundaries applicable to real-life decisions. Second, the Urban Weather Generator (UWG) and EnergyPlus are used to compute UHI intensity and total building energy consumption, respectively. These outputs vary with a set of inputs describing urban morphology, such as building height, urban canyon width, and population density. Lastly, an optimization problem is cast in which a utility function quantifies the performance of each design candidate (e.g., a linear combination of UHI intensity and energy consumption to be minimized), subject to a set of constraints. Solving this optimization problem is difficult, since there is no simple analytic form representing the UWG and EnergyPlus models. We therefore cannot use direct optimization techniques and instead develop an indirect "black box" optimization algorithm. To this end, we develop a solution based on a stochastic optimization method known as the Cross-Entropy Method (CEM).
The CEM translates the deterministic optimization problem into an associated stochastic optimization problem that is simple to solve analytically. We illustrate our model on a typical residential area in Singapore. Owing to rapid growth in population and built area, and to land created by reclamation, urban planning decisions are of the utmost importance for the country; furthermore, its hot and humid climate heightens concern over UHI impacts. The problem presented is highly relevant to early urban design stages, and the objective of the framework is to guide decision makers and help them include and evaluate urban microclimate and energy aspects in the urban planning process.
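A minimal Cross-Entropy loop of the kind described above can be sketched as follows. This is an illustrative assumption, not the authors' implementation: the quadratic `black_box` function is a hypothetical stand-in for the UWG/EnergyPlus pipeline, and the two design variables, sample counts, and iteration budget are made-up values.

```python
import numpy as np

def black_box(x):
    # Hypothetical stand-in for the UWG + EnergyPlus pipeline: a cost
    # combining UHI intensity and energy use for two design variables
    # (say, normalized building height and canyon width).
    return (x[:, 0] - 3.0) ** 2 + 2.0 * (x[:, 1] - 1.5) ** 2

rng = np.random.default_rng(1)
mu, sigma = np.zeros(2), np.full(2, 5.0)   # initial Gaussian sampling law
n_samples, n_elite = 100, 10

for _ in range(50):
    x = rng.normal(mu, sigma, size=(n_samples, 2))   # sample candidate designs
    elite = x[np.argsort(black_box(x))[:n_elite]]    # keep the best performers
    mu = elite.mean(axis=0)                          # refit the sampling law
    sigma = elite.std(axis=0) + 1e-6                 # small floor avoids collapse

print("estimated optimum:", mu)
```

Each iteration only needs function evaluations of the black box, which is what makes the method usable when the simulators have no analytic form.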

Keywords: building energy consumption, stochastic optimization, urban design, urban heat island, urban weather generator

Procedia PDF Downloads 131
1566 Leptospira Lipl32-Specific Antibodies: Therapeutic Property, Epitopes Characterization and Molecular Mechanisms of Neutralization

Authors: Santi Maneewatchararangsri, Wanpen Chaicumpa, Patcharin Saengjaruk, Urai Chaisri

Abstract:

Leptospirosis is a globally neglected disease that continues to impose a significant public health and veterinary burden, with millions of cases reported each year. Early and accurate differential diagnosis of leptospirosis from other febrile illnesses, and the development of broad-spectrum leptospirosis vaccines, are needed. The LipL32 outer membrane lipoprotein is a member of the Leptospira adhesive matrix proteins and has been found to exert hemolytic activity against erythrocytes in vitro. LipL32 is therefore regarded as a potential target for diagnosis, for broad-spectrum leptospirosis vaccines, and for passive immunotherapy. In this study, we established the LipL32-specific mouse monoclonal antibodies mAbLPF1 and mAbLPF2, together with their respective mouse- and humanized-engineered single-chain variable fragments (ScFv). The antibodies' neutralizing activities against Leptospira-mediated hemolysis in vitro, and the therapeutic efficacy of the mAbs in hamsters infected with heterologous Leptospira, were demonstrated. The epitope peptide of mAbLPF1 was mapped to a non-contiguous carboxy-terminal β-turn and an amphipathic α-helix of the LipL32 structure that contribute to phospholipid/host-cell adhesion and membrane insertion. The mAbLPF2 epitope was located on the interacting loop of the peptide-binding groove of the LipL32 molecule, which is responsible for interactions with host constituents. The epitope sequences are highly conserved among Leptospira spp. and absent from the LipL32 superfamily of other microorganisms. Both epitopes are surface-exposed, readily accessible to mAbs, and immunogenic; however, they are less dominant when probed with LipL32-specific immunoglobulins from leptospirosis-patient sera or with rabbit hyperimmune serum raised against whole Leptospira. Our study also demonstrated mAb-mediated inhibition of LipL32 adhesion to host membrane components and cells, as well as the anti-hemolytic activity of the respective antibodies.
The therapeutic antibodies, particularly the humanized ScFv, have potential for further development as non-drug therapeutic agents for human leptospirosis, especially in subjects allergic to antibiotics. The epitope peptides recognized by the two therapeutic mAbs are potentially useful tools for structure-function studies. Finally, the protective peptides may serve as targets for epitope-based vaccines for the control of leptospirosis.

Keywords: leptospira lipl32-specific antibodies, therapeutic epitopes, epitopes characterization, immunotherapy

Procedia PDF Downloads 297
1565 A Crowdsourced Homeless Data Collection System and Its Econometric Analysis: Strengthening Inclusive Public Administration Policies

Authors: Praniil Nagaraj

Abstract:

This paper proposes a method to collect homeless data using crowdsourcing and presents an approach to analyze the data, demonstrating its potential to strengthen existing and future policies aimed at promoting socio-economic equilibrium. This paper's contributions fall into three main areas. Firstly, a unique method for collecting homeless data is introduced, utilizing a user-friendly smartphone app (currently available for Android). The app enables the general public to quickly record information about homeless individuals, including the number of people and details about their living conditions. The collected data, including date, time, and location, is anonymized and securely transmitted to the cloud. It is anticipated that an increasing number of users motivated to contribute to society will adopt the app, thus expanding the data collection efforts. Duplicate data is addressed through simple classification methods, and historical data is utilized to fill in missing information. The second contribution of this paper is the description of the data analysis techniques applied to the collected data. By combining this new data with existing information, statistical regression analysis is employed to gain insights into various aspects, such as distinguishing between unsheltered and sheltered homeless populations, as well as examining their correlation with factors like unemployment rates, housing affordability, and labor demand. Initial data is collected in San Francisco, while pre-existing information is drawn from three cities (San Francisco, New York City, and Washington, D.C.), enabling simulations to be conducted. The third contribution focuses on demonstrating the practical implications of the data processing results. The challenges faced by key stakeholders, including charitable organizations and local city governments, are taken into consideration. Two case studies are presented as examples.
The first case study explores improving the efficiency of food and necessities distribution, as well as medical assistance, driven by charitable organizations. The second case study examines the correlation between micro-geographic budget expenditure by local city governments and homeless information to justify budget allocation and expenditures. The ultimate objective of this endeavor is to enable the continuous enhancement of the quality of life for the underprivileged. It is hoped that through increased crowdsourcing of data from the public, the Generosity Curve and the Need Curve will intersect, leading to a better world for all.

Keywords: crowdsourcing, homelessness, socio-economic policies, statistical analysis

Procedia PDF Downloads 44
1564 Corpus Linguistics as a Tool for Translation Studies Analysis: A Bilingual Parallel Corpus of Students’ Translations

Authors: Juan-Pedro Rica-Peromingo

Abstract:

Nowadays, corpus linguistics has become a key research methodology for Translation Studies, broadening the scope of cross-linguistic studies. In the study presented here, the approach focuses on learners with little or no experience in order to study, at an early stage, general mistakes and errors and the correct or incorrect use of translation strategies, and to improve the students' translational competence. Led by Sylviane Granger and Marie-Aude Lefer of the Centre for English Corpus Linguistics of the University of Louvain, the MUST corpus (MUltilingual Student Translation Corpus) is an international project which brings together partners from European and worldwide universities and connects Learner Corpus Research (LCR) and Translation Studies (TS). It aims to build a corpus of translations carried out by students, including both direct (L2 > L1) and indirect (L1 > L2) translations, from a great variety of text types, genres, and registers in a wide variety of languages: audiovisual translations (including dubbing and subtitling for hearing and for deaf audiences), as well as scientific, humanistic, literary, economic, and legal texts. This paper focuses on the work carried out by the Spanish team from the Complutense University (UCMA), which is part of the MUST project, and describes the specific features of the corpus built by its members. All the texts used by UCMA are either direct or indirect translations between English and Spanish. The students' profiles comprise translation trainees, foreign language students with a major in English, engineers studying EFL, and MA students, all with different English levels (from B1 to C1); for some of the students, this is their first experience with translation. The MUST corpus is searchable via Hypal4MUST, a web-based interface developed by Adam Obrusnik from Masaryk University (Czech Republic), which includes a translation-oriented annotation system (TAS).
A distinctive feature of the interface is that it aligns source and target texts, so that we can observe and compare language structures in detail and study the translation strategies used by students. The initial data point to the kinds of difficulties encountered by the students and reveal the most frequent strategies implemented by the learners according to their level of English, their translation experience, and the text genres. We have also found errors common to the graduate and postgraduate university students' translations: transfer errors, lexical errors, grammatical errors, text-specific translation errors, and culture-related errors were identified. Analyzing all these parameters will provide more material for better solutions to improve the quality of teaching and of the translations produced by the students.

Keywords: corpus studies, students’ corpus, the MUST corpus, translation studies

Procedia PDF Downloads 147
1563 The Effects of Chamomile on Serum Levels of Inflammatory Indexes to a Bout of Eccentric Exercise in Young Women

Authors: K. Azadeh, M. Ghasemi, S. Fazelifar

Abstract:

Aim: Changes in stress hormones can modify the response of the immune system. Cortisol, the body's most important corticosteroid, is an anti-inflammatory and immunosuppressive hormone. Normal cortisol levels in humans fluctuate during the day; in other words, cortisol is released periodically, regulated through ACTH release on a daily circadian rhythm. The aim of this study was therefore to determine the effects of chamomile on serum levels of inflammatory indexes after a bout of eccentric exercise in young women. Methodology: 32 women were randomly divided into 4 groups: high-dose chamomile, low-dose chamomile, ibuprofen, and placebo. The eccentric exercise comprised 5 sets with 1-minute rest periods between sets. Subjects warmed up for 10 minutes and then performed the eccentric exercise. Each participant completed 15 repetitions with a self-selected 20 kg weight, or continued until she could no longer perform the movement; at that point, 5 kg was immediately removed from the weight and the protocol continued until exhaustion or until 15 repetitions were completed. Subjects in the target groups received specified amounts of ibuprofen or chamomile capsules. Blood samples were obtained at 6 stages (before starting the supplement, before the exercise protocol, and 4, 24, 48, and 72 hours after eccentric exercise). Cortisol and adrenocorticotropic hormone (ACTH) levels were measured by ELISA. The Kolmogorov-Smirnov test was used to check the normality of the data, and repeated-measures analysis of variance was used to analyze the data; significance was accepted at p < 0.05. Results: Individual characteristics, including height, weight, age, and body mass index, did not differ significantly among the four groups. Baseline cortisol and ACTH levels decreased significantly after supplementation but then increased significantly at all post-exercise stages.
In the high-dose chamomile group, the post-exercise increase tended to be somewhat smaller than in the other groups, though not significantly so. The inter-group analysis indicated a significant effect of time across the stages. Conclusion: In this study, one session of eccentric exercise increased cortisol and ACTH levels, and the results indicate that a high dose of chamomile can prevent or reduce the rise in stress hormone levels. As the use of medicinal plants and of ibuprofen for pain and inflammation has spread among athletes and non-athletes alike, these results provide information about the advantages and disadvantages of using medicinal plants and ibuprofen.

Keywords: chamomile, inflammatory indexes, eccentric exercise, young women

Procedia PDF Downloads 417
1562 When the Rubber Hits the Road: The Enactment of Well-Intentioned Language Policy in Digital vs. In Situ Spaces on Washington, DC Public Transportation

Authors: Austin Vander Wel, Katherin Vargas Henao

Abstract:

Washington, DC, is a city in which Spanish, along with several other minority languages, is prevalent not only among tourists but also among those living within city limits. In response to this linguistic diversity and DC's adoption of the Language Access Act in 2004, the Washington Metropolitan Area Transit Authority (WMATA) committed to addressing the need for equal linguistic representation and established a five-step plan to provide the best multilingual information possible for public transportation users. The current study, however, strongly suggests that this de jure policy does not align with the reality of Spanish's representation on DC public transportation, although perhaps in an unexpected way. To investigate Spanish's de facto representation and how it contrasts with de jure policy, this study implements a linguistic landscapes methodology that takes critical language-policy as its theoretical framework (Tollefson, 2005). Concerning de facto representation specifically, it focuses on the discrepancies between digital spaces and the actual physical spaces through which users travel. These digital vs. in situ conditions are further analyzed by separately addressing aural and visual modalities. In digital spaces, data was collected from WMATA's website (visual) and their bilingual hotline (aural). For in situ spaces, both bus and metro areas of DC public transportation were explored, with signs comprising the visual modality and recordings, driver announcements, and interactions with metro kiosk workers comprising the aural modality. While digital spaces were considered to successfully fulfill WMATA's commitment to representing Spanish as outlined in the de jure policy, physical spaces show a large discrepancy between what is said and what is done, particularly regarding the bus system and the aural modality overall.
These discrepancies in in situ spaces place Spanish speakers at a clear disadvantage, demanding additional resources and knowledge from residents with limited or no English proficiency in order to have equal access to this public good. Based on our critical language-policy analysis, while Spanish is represented as a right in the de jure policy, its implementation in situ clearly treats Spanish as a problem, since those seeking bilingual information cannot expect it to be present when and where they need it most (Ruíz, 1984; Tollefson, 2005). This study concludes with practical, data-based steps to improve the current situation in DC's public transportation context and serves as a model for responding to inadequate enactment of de jure policy in other language policy settings.

Keywords: urban landscape, language access, critical language-policy, Spanish, public transportation

Procedia PDF Downloads 72
1561 Controlled Drug Delivery System for Delivery of Poorly Water-Soluble Drugs

Authors: Raj Kumar, Prem Felix Siril

Abstract:

The poor aqueous solubility of many pharmaceutical drugs and potential drug candidates is a major challenge in drug development, and nanoformulation of such candidates is one of the main solutions for their delivery. We initially developed the evaporation-assisted solvent-antisolvent interaction (EASAI) method, which is useful for preparing nanoparticles of poorly water-soluble drugs with spherical morphology and particle sizes below 100 nm. However, to further improve the formulation, reducing the number of doses and the side effects, it is important to control drug delivery. Among the many nano-drug carrier systems available, solid lipid nanoparticles (SLNs) have advantages over the others, such as high biocompatibility, stability, non-toxicity, and the ability to achieve controlled drug release and drug targeting. SLNs can be administered through all existing routes owing to the high biocompatibility of lipids. SLNs are usually composed of a lipid, a surfactant, and a drug encapsulated in the lipid matrix. A number of non-steroidal anti-inflammatory drugs (NSAIDs) have poor bioavailability resulting from their poor aqueous solubility. In the present work, SLNs loaded with the NSAIDs nabumetone (NBT), ketoprofen (KP), and ibuprofen (IBP) were successfully prepared using different lipids and surfactants. We studied and optimized the experimental parameters across a number of lipids, surfactants, and NSAIDs. The effects of parameters such as lipid-to-surfactant ratio, volume of water, temperature, drug concentration, and sonication time on the particle size of SLNs prepared by hot-melt sonication were studied. Particle size was found to be directly proportional to drug concentration and inversely proportional to surfactant concentration, volume of water added, and water temperature.
SLNs prepared under optimized conditions were characterized thoroughly using dynamic light scattering (DLS), field emission scanning electron microscopy (FESEM), transmission electron microscopy (TEM), atomic force microscopy (AFM), X-ray diffraction (XRD), differential scanning calorimetry (DSC), and Fourier transform infrared spectroscopy (FTIR). We successfully prepared SLNs below 220 nm using different lipid and surfactant combinations. KP, NBT, and IBP showed entrapment efficiencies of 74%, 69%, and 53%, with drug loadings of 2%, 7%, and 6%, respectively, in SLNs of Campul GMS 50K and Gelucire 50/13. The in-vitro release profiles of the drug-loaded SLNs showed that nearly 100% of the drug was released within 6 h.

Keywords: nanoparticles, delivery, solid lipid nanoparticles, hot-melt sonication, poorly water-soluble drugs, solubility, bioavailability

Procedia PDF Downloads 312
1560 Developing and Shake Table Testing of Semi-Active Hydraulic Damper as Active Interaction Control Device

Authors: Ming-Hsiang Shih, Wen-Pei Sung, Shih-Heng Tung

Abstract:

Semi-active control systems for structures under earthquake excitation are adaptable and require little energy. Our research team previously developed the DSHD (Displacement Semi-Active Hydraulic Damper); shake table tests of a DSHD installed in a full-scale test structure demonstrated that the device brings its energy-dissipating performance into full play under earthquake excitation. The objective of this research is to develop a new AIC (Active Interaction Control device) and to evaluate its energy-dissipation capability through shake table tests. The proposed AIC converts an improved DSHD into an AIC by adding an accumulator. The main concept of this energy-dissipating AIC is to use the interaction between an affiliated structure (sub-structure) and the protected structure (main structure) to transfer the input seismic force into the sub-structure, thereby reducing the structural deformation of the main structure. This concept is tested on a full-scale multi-degree-of-freedom test structure fitted with the proposed AIC and subjected to external forces of various magnitudes, examining the shock-absorption influence of predictive control, sub-structure stiffness, synchronous control, non-synchronous control, and insufficient control position. The test results confirm that: (1) the developed device effectively diminishes the structural displacement and acceleration responses; (2) even a low-precision semi-active control method achieved twice the seismic-proofing efficacy of the passive control method; (3) the active control method did not negatively amplify the acceleration response of the structure; (4) the AIC exhibits a time-delay problem, as does the ordinary active control method.
The proposed predictive control method can overcome this defect; and (5) condition switching is an important characteristic of the control type, and the test results show that synchronous control is easy to implement and avoids exciting high-frequency response. These laboratory results confirm that the developed device can apply the mutual interaction between the subordinate structure and the protected main structure to transfer the earthquake energy applied to the main structure into the subordinate structure, so that the objective of minimizing the deformation of the main structure is achieved.

Keywords: DSHD (Displacement Semi-Active Hydraulic Damper), AIC (Active Interaction Control Device), shake table test, full scale structure test, sub-structure, main-structure

Procedia PDF Downloads 519
1559 The Relationship between Violence against Women and Levels of Self-Esteem in Urban Informal Settlements of Mumbai, India: A Cross-Sectional Study

Authors: A. Bentley, A. Prost, N. Daruwalla, D. Osrin

Abstract:

Background: This study aims to investigate the relationship between experiences of violence against women in the family, and levels of self-esteem in women residing in informal settlement (slum) areas of Mumbai, India. The authors hypothesise that violence against women in Indian households extends beyond that of intimate partner violence (IPV), to include other members of the family and that experiences of violence are associated with lower levels of self-esteem. Methods: Experiences of violence were assessed through a cross-sectional survey of 598 women, including questions about specific acts of emotional, economic, physical and sexual violence across different time points, and the main perpetrator of each. Self-esteem was assessed using the Rosenberg self-esteem questionnaire. A global score for self-esteem was calculated and the relationship between violence in the past year and Rosenberg self-esteem score was assessed using multivariable linear regression models, adjusted for years of education completed, and clustering using robust standard errors. Results: 482 (81%) women consented to interview. On average, they were 28.5 years old, had completed 6 years of education and had been married 9.5 years. 88% were Muslim and 46% lived in joint families. 44% of women had experienced at least one act of violence in their lifetime (33% emotional, 22% economic, 24% physical, 12% sexual). Of the women who experienced violence after marriage, 70% cited a perpetrator other than the husband for at least one of the acts. 5% had low self-esteem (Rosenberg score < 15). For women who experienced emotional violence in the past year, the Rosenberg score was 2.6 points lower (p < 0.001). It was 1.2 points lower (p = 0.03) for women who experienced economic violence. For physical or sexual violence in the past year, no statistically significant relationship with Rosenberg score was seen. 
However, for a one-unit increase in the number of different acts of each type of violence experienced in the past year, a decrease in Rosenberg score was seen (-0.62 for emotional, -0.76 for economic, -0.53 for physical and -0.47 for sexual; p < 0.05 for all). Discussion: The high lifetime prevalence of violence experiences was likely due to the detailed assessment of violence and the inclusion of perpetrators within the family other than the husband. Experiences of emotional or economic violence in the past year were associated with lower Rosenberg scores and therefore lower self-esteem, but no overall relationship was seen between experiences of physical or sexual violence and Rosenberg score. For all types of violence in the past year, a greater number of different acts was associated with a decrease in Rosenberg score. Emotional violence showed the strongest relationship with self-esteem, but for all types of violence, the more complex the pattern of perpetration, with different methods used, the lower the level of self-esteem. Due to the cross-sectional nature of the study, causal directionality cannot be attributed. Further work to investigate the relationship between severity of violence and self-esteem, and whether self-esteem mediates relationships between violence and poorer mental health, would be beneficial.
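The adjusted regression described above (self-esteem score on past-year violence, controlling for education, with cluster-robust standard errors) can be sketched as follows. This is a minimal illustration on synthetic data, not the study's dataset or exact model; the variable names, cluster count, and effect size are assumptions chosen to mirror the abstract's reported -2.6-point association.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic illustration (not the study data): Rosenberg score regressed on
# past-year emotional violence, adjusted for years of education, with
# cluster-robust (CR0) standard errors by settlement cluster.
n = 482
cluster = rng.integers(0, 12, n)       # hypothetical settlement cluster id
education = rng.integers(0, 13, n)     # years of education completed
violence = rng.integers(0, 2, n)       # past-year emotional violence (0/1)
score = 20 + 0.2 * education - 2.6 * violence + rng.normal(0, 3, n)

# Ordinary least squares fit: intercept, violence, education
X = np.column_stack([np.ones(n), violence, education])
beta, *_ = np.linalg.lstsq(X, score, rcond=None)
resid = score - X @ beta

# Cluster-robust sandwich variance: sum score contributions per cluster
XtX_inv = np.linalg.inv(X.T @ X)
meat = np.zeros((X.shape[1], X.shape[1]))
for g in np.unique(cluster):
    Xg, ug = X[cluster == g], resid[cluster == g]
    s = Xg.T @ ug
    meat += np.outer(s, s)
se = np.sqrt(np.diag(XtX_inv @ meat @ XtX_inv))
print(f"violence coefficient: {beta[1]:.2f} (robust SE {se[1]:.2f})")
```

With clustered sampling, the robust standard error is typically wider than the naive OLS one, which is why the study adjusts for clustering before judging significance.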

Keywords: family violence, India, informal settlements, Rosenberg self-esteem scale, self-esteem, violence against women

Procedia PDF Downloads 126
1558 Dose Saving and Image Quality Evaluation for Computed Tomography Head Scanning with Eye Protection

Authors: Yuan-Hao Lee, Chia-Wei Lee, Ming-Fang Lin, Tzu-Huei Wu, Chih-Hsiang Ko, Wing P. Chan

Abstract:

Computed tomography (CT) of the head is a good method for investigating cranial lesions. However, radiation-induced oxidative stress can accumulate in the eyes and promote carcinogenesis and cataract formation. We therefore aimed to protect the eyes with barium sulfate shields during CT scans and to investigate the resultant image quality and radiation dose to the eye. Patients who underwent health examinations were selectively enrolled in this study in compliance with the protocol approved by the Ethics Committee of the Joint Institutional Review Board at Taipei Medical University. Participants' brains were scanned together with a water-based marker by a multislice CT scanner (SOMATOM Definition Flash) under either a fixed tube current-time setting or automatic tube current modulation (TCM). The lens dose was measured by Gafchromic films, whose dose-response curve had previously been fitted using thermoluminescent dosimeters, with or without a barium sulfate or bismuth-antimony shield laid above. For the assessment of image quality, CT images at slice planes covering the regions of interest on the zygomatic, orbital and nasal bones of the head phantom, as well as the water-based marker, were used to calculate the signal-to-noise and contrast-to-noise ratios. The application of barium sulfate and bismuth-antimony shields reduced the lens dose by 24% and 47% on average, respectively. Under topogram-based TCM, the dose-saving power of the bismuth-antimony shield was mitigated, whereas that of the barium sulfate shield was enhanced. On the other hand, the signal-to-noise and contrast-to-noise ratios of the DSCT images were decreased by the barium sulfate and bismuth-antimony shields, respectively, resulting in an overall reduction of the CNR. In contrast, the integration of topogram-based TCM elevated the signal difference between the ROIs on the zygomatic bones and eyeballs while preferentially decreasing the signal-to-noise ratio when the barium sulfate shield was used.
The results of this study indicate that the balance between eye exposure and image quality can be optimized by combining eye shields with topogram-based TCM on the multislice scanner. Eye shielding can change the photon attenuation characteristics of tissues close to the shield. The application of either shield for eye protection is therefore not recommended when searching for intraorbital lesions.
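The signal-to-noise and contrast-to-noise ratios used above are simple ROI statistics. A minimal sketch, on made-up pixel values rather than the study's images, of how they are typically computed:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical ROI pixel samples in Hounsfield units: a bone ROI and a
# nearby soft-tissue background ROI. The means and noise level are
# illustrative only, not taken from the study.
bone_roi = rng.normal(300.0, 20.0, size=500)
background = rng.normal(40.0, 20.0, size=500)

def snr(roi):
    """Signal-to-noise ratio: ROI mean over ROI standard deviation."""
    return roi.mean() / roi.std(ddof=1)

def cnr(roi, bg):
    """Contrast-to-noise ratio: absolute difference of ROI means over
    the background noise (standard deviation)."""
    return abs(roi.mean() - bg.mean()) / bg.std(ddof=1)

print(f"SNR = {snr(bone_roi):.1f}, CNR = {cnr(bone_roi, background):.1f}")
```

A shield that raises image noise lowers both metrics even when the mean signal is unchanged, which is the degradation the abstract reports.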

Keywords: computed tomography, barium sulfate shield, dose saving, image quality

Procedia PDF Downloads 268
1557 Production of Ferroboron by SHS-Metallurgy from Iron-Containing Rolled Production Wastes for Alloying of Cast Iron

Authors: G. Zakharov, Z. Aslamazashvili, M. Chikhradze, D. Kvaskhvadze, N. Khidasheli, S. Gvazava

Abstract:

Traditional technologies for processing iron-containing industrial waste, including steel-rolling waste, are associated with significant energy costs, long process durations, and the need for complex and expensive equipment. Waste generated during industrial processes negatively affects the environment, but at the same time, it is a valuable raw material that can be used to produce new marketable products. The study of the effectiveness of self-propagating high-temperature synthesis (SHS), which is characterized by the simplicity of the necessary equipment, the purity of the final product, and the high processing speed, is therefore of wide scientific and practical interest. The work presents technological aspects of the production of ferroboron by SHS metallurgy from iron-containing rolled-production wastes for the alloying of cast iron, together with results on the effect of the alloying element on the degree of boron assimilation by liquid cast iron. The combustion features of the Fe-B system have been investigated, and the main parameters controlling the phase composition of the synthesis products have been established experimentally. The effect of overloads on the formation patterns of cast ligatures and on the structure-formation mechanisms of the SHS products was studied. It has been shown that an increase in the content of hematite (Fe₂O₃) in the iron-containing waste leads to an increase in the content of the FeB phase and, accordingly, in the amount of boron in the ligature. The boron content in the ligature is within 3-14%, and the phase composition of the obtained ligatures consists of the Fe₂B and FeB phases. Depending on the initial composition of the wastes, the yield of the end product reaches 91-94%, and the extraction of boron is 70-88%. Combustion of highly exothermic mixtures allows a wide range of boron-containing ligatures to be obtained from industrial wastes.
In view of the relatively low melting point of the obtained SHS ligature, positive dynamics of boron absorption by liquid iron were established. According to the obtained data, the degree of absorption of the ligature when alloying gray cast iron at 1450°C is 80-85%. When the treatment of liquid cast iron with magnesium is combined with subsequent alloying with the developed ligature, boron losses are reduced by 5-7%, and a uniform distribution of boron micro-additives in the volume of the treated liquid metal is ensured. Acknowledgment: This work was supported by the Shota Rustaveli Georgian National Science Foundation of Georgia (SRGNSFG) under the GENIE project (grant number CARYS-19-802).

Keywords: self-propagating high-temperature synthesis, cast iron, industrial waste, ductile iron, structure formation

Procedia PDF Downloads 123
1556 Evaluation on the Compliance of Essential Intrapartum Newborn Care among Nurses in Selected Government Hospital in Manila

Authors: Eliza Torrigue, Efrelyn Iellamo

Abstract:

Maternal death is one of the rising health issues in the Philippines. It is alarming that in every hour of each day, a mother gives birth to a child who may not live to see the next day. Statistics show that the intrapartum period and the third stage of labor are very crucial periods for the expectant mother, as are the first six hours of life for the newborn. To address the issue, the Essential Intrapartum Newborn Care (EINC) protocols were developed. Through these, Obstetric Delivery Room (OB-DR) nurses are kept updated on evidence-based maternal and newborn care to ensure patient safety and thus reduce maternal and child mortality. This study aims to describe the compliance of hospitals, especially of OB-DR nurses, with the EINC protocols. The researcher aims to link the respondents' profile variables, in terms of age, length of service and formal training, to their compliance with the EINC protocols. The outcome of the study is geared towards the development of an appropriate training program for OB-DR nurses assigned to hospital delivery rooms, based on the study's results, to sustain the EINC standards. A descriptive correlational method was used. The sample consists of 75 OB-DR nurses from three government hospitals in the City of Manila, namely Ospital ng Maynila Medical Center, Tondo Medical Center, and Gat Andres Bonifacio Memorial Medical Center. Data were collected using an evaluative checklist. Ranking, weighted mean, chi-square and Pearson's r were used to analyze the data. The respondents' level of compliance with the EINC protocols was evaluated with an overall mean score of 4.768, implying that OB-DR nurses have a high regard for complying with the step-by-step procedures of the EINC. Furthermore, the data show that formal training on EINC has a significant relationship with OB-DR nurses' level of compliance during cord care, active management of the third stage of labor (AMTSL), and immediate newborn care from the first ninety minutes to six hours of life.
However, the respondents' age and length of service do not have a significant relationship with OB-DR nurses' compliance with the EINC protocols. In the pursuit of decreasing maternal mortality in the Philippines, the EINC protocols have been widely implemented in the country, especially in the government hospitals where most deliveries happen. In this study, it was found that OB-DR nurses adhere and are highly compliant to the standards, assuring that an optimum level of care is delivered to the mother and newborn. Formal training on EINC, on the other hand, creates the most impact on nurses' compliance. It is therefore recommended that a structured enhancement training program be established to plan, implement and evaluate the EINC protocols in these government hospitals.
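The chi-square analysis the study uses to relate formal training to compliance can be sketched on a contingency table. The counts below are purely hypothetical, not the study's data; the point is the mechanics of the test.

```python
import numpy as np

# Hypothetical 2x2 contingency table: rows = formal EINC training (yes/no),
# columns = high/low compliance. Counts are illustrative only.
observed = np.array([[30.0, 5.0],
                     [20.0, 20.0]])

# Expected counts under independence: row total * column total / grand total
row = observed.sum(axis=1, keepdims=True)
col = observed.sum(axis=0, keepdims=True)
expected = row @ col / observed.sum()

# Pearson chi-square statistic; with df = (2-1)*(2-1) = 1, any value above
# the 3.84 critical value is significant at the 0.05 level.
chi2 = ((observed - expected) ** 2 / expected).sum()
print(f"chi-square = {chi2:.2f}")
```

Here the statistic exceeds the critical value, so in this toy table training and compliance would be judged associated, mirroring the study's finding for formal training.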

Keywords: compliance, intrapartum, newborn care, nurses

Procedia PDF Downloads 394
1555 A Study of the Effect of Early and Late Meal Time on Anthropometric and Biochemical Parameters in Patients of Type 2 Diabetes

Authors: Smriti Rastogi, Narsingh Verma

Abstract:

Background: A vast body of research exists on the use of oral hypoglycaemic drugs, insulin injections and the like in managing diabetes, but no research has taken into consideration the parameter of time-restricted meal intake and its positive effects in managing diabetes. The utility of this project is immense, as it offers a solution to the woes of diabetics based on circadian rhythm and the normal physiology of the human body. Method: 80 diabetics, enrolled from the Outpatient Department of Endocrinology, KGMU (King George's Medical University), were randomly divided, based on consent, into an early-dinner TRM (time-restricted meal) group or a control group. Follow-up was done at six months and 12 months for anthropometric measurements (height, weight, waist-hip ratio, neck size), fasting and postprandial blood sugar, HbA1c, serum urea, serum creatinine, and lipid profile. The patients were given a clear understanding of chronomedicine and how it affects their health. A single intervention was made: the timing of dinner was set at or around 7 pm for the TRM group. Result: 65% of the TRM group and 40% of the control group had normal HbA1c after 12 months. HbA1c in the TRM group (first visit to second follow-up) showed a significant p-value of 0.017. A p-value of <0.0001 was observed on comparing fasting blood sugar in the TRM group between the first visit and the second follow-up. Postprandial blood sugar in the TRM group (first visit to second follow-up) also showed a p-value of <0.0001 (highly significant). Values of the three parameters were non-significant in the control group. Hip size (first visit to second follow-up) in the TRM group showed a p-value of 0.0344 (significant; difference between means = 2.762 ± 1.261). Detailed results of the above parameters and a few newer ones will be presented at the conference. Conclusion: Time-restricted meal intake in diabetics shows promise and is worth exploring further.
Time-restricted meal intake in type 2 diabetics has a significant effect in controlling and maintaining HbA1c, as the reduction in HbA1c was very significant in the TRM group versus the control group. Similar highly significant results were obtained for fasting and postprandial blood sugar in the TRM group compared to the control group. This is one of the first such studies undertaken in Indian diabetics; although the initial data are encouraging, further research is required to corroborate the results.
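The within-group first-visit-to-follow-up comparisons reported above are paired comparisons. A minimal sketch of the paired t-statistic on synthetic HbA1c values (the sample size and the assumed average 0.6-point drop are illustrative, not the trial's numbers):

```python
import numpy as np

rng = np.random.default_rng(2)

# Illustrative paired data: HbA1c (%) at first visit vs. 12-month
# follow-up for a TRM-style group (synthetic, not the trial data).
n = 40
baseline = rng.normal(8.0, 1.0, n)
followup = baseline - rng.normal(0.6, 0.4, n)  # assumed mean drop of 0.6

# Paired t-statistic: mean within-subject change over its standard error
diff = baseline - followup
t = diff.mean() / (diff.std(ddof=1) / np.sqrt(n))
print(f"mean HbA1c reduction = {diff.mean():.2f}, paired t = {t:.1f}")
```

Pairing each patient with their own baseline removes between-patient variability, which is why the follow-up comparisons can reach very small p-values with modest samples.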

Keywords: chronomedicine, diabetes, endocrinology, time restricted meal intake

Procedia PDF Downloads 126
1554 'It Is a Sin to Be in Love with a Disabled Woman': Stigma, Rejection and Intersections of Womanhood and Violence among Physically Disabled Women Living in South Africa

Authors: Ingrid Van Der Heijden, Naeemah Abrahams, Jane Harries

Abstract:

Background: Commonly, womanhood is defined as the qualities considered to be natural to or characteristic of a woman. However, womanhood is not a static concept; it is contextual and negotiable. For women with disabilities, gender roles or 'qualities' of womanhood are often overstated or contradicted because of assumptions of weakness, passivity, asexuality and infertility. Currently, little is known about how disability stigma intersects with notions of womanhood to make women with disabilities vulnerable to violence, or how women navigate this intersection to prevent or protect themselves from violence. Objective: To describe how stigmatized constructions of womanhood and disability shape the exposure of women with physical disabilities to violence, or their protection from it. Methods: Qualitative data for this paper come from a doctoral study involving women with disabilities living in Cape Town, South Africa. It presents data from repeat in-depth interviews with 30 women with a range of physical impairments. Women attending protective workshops, rehabilitative centers and residential care facilities for people living with disabilities were invited to participate. Consent procedures and interviews were conducted by the first author (who is herself a woman living with a physical disability) and a female research assistant/translator who is a qualified occupational therapist. Reasonable accommodation is central to the methodology and the study as a whole. Findings: Descriptive and thematic analyses reveal how stigma and local constructions around womanhood, as well as women's self-image and physical limitations, promote women's exposure to psychological, physical and sexual violence. They reveal how disabled women feel they are presumed incapable of living up to expectations of a 'proper' woman. This plays out as psychological violence, with women reporting that they feel 'devalued,' 'rejected' and deprived of lasting intimate relationships.
Furthermore, forms of psychological violence perpetuate physical and sexual violence. Women also discuss using strategies to prevent violence; by refusing to date, avoiding certain places or avoiding isolation, creating awareness, hiding their physical impairments, and exaggerating their ‘femininity.' Implications: Service providers need to be made aware of women’s violence experiences, and provide a range of accessible psychological and mental health services to women living with disabilities, as well as raising awareness around disability, and violence prevention, among caregivers, men, and women. Violence awareness and prevention interventions need to involve disability experts, researchers and people with disabilities.

Keywords: disability, gender, stigma, violence awareness and prevention interventions

Procedia PDF Downloads 352
1553 Generation of Roof Design Spectra Directly from Uniform Hazard Spectra

Authors: Amin Asgarian, Ghyslaine McClure

Abstract:

Proper seismic evaluation of non-structural components (NSCs) mandates an accurate estimation of floor seismic demands (i.e., acceleration and displacement demands). Most current international codes incorporate empirical equations to calculate the equivalent static seismic force for which NSCs and their anchorage systems must be designed. These equations, in general, are functions of component mass and of the peak seismic acceleration to which NSCs are subjected during the earthquake. However, recent studies have shown that these recommendations suffer from several shortcomings, such as neglecting the higher-mode effect, the tuning effect, and the NSC damping effect, which cause underestimation of the component seismic acceleration demand. This work aims to circumvent these shortcomings of code provisions, and to improve on them, by proposing a simplified, practical, and yet accurate approach to generate acceleration Floor Design Spectra (FDS) directly from the corresponding Uniform Hazard Spectra (UHS) (i.e., design spectra for structural components). A database of 27 reinforced concrete (RC) buildings in which ambient vibration measurements (AVM) were conducted is used. The database comprises 12 low-rise, 10 medium-rise, and 5 high-rise buildings, all located in Montréal, Canada, and designated as post-disaster buildings or emergency shelters. The buildings are subjected to a set of 20 compatible seismic records, and Floor Response Spectra (FRS) in terms of pseudo-acceleration are derived using the proposed approach for every floor of each building in both horizontal directions, considering 4 different NSC damping ratios (2, 5, 10, and 20% viscous damping). Several parameters affecting NSC response are evaluated statistically: the NSC damping ratio, the tuning of the NSC natural period with one of the natural periods of the supporting structure, the higher modes of the supporting structure, and the location of the NSC.
The entire spectral region is divided into three distinct segments, namely the short-period, fundamental-period, and long-period regions. The derived roof floor response spectra for NSCs with 5% damping are compared with the 5%-damped UHS, and procedures are proposed to generate roof FDS for NSCs with 5% damping directly from the 5%-damped UHS in each spectral region. The generated FDS is a powerful, practical, and accurate tool for the seismic design and assessment of acceleration-sensitive NSCs, particularly in existing post-disaster buildings, which have to remain functional even after the earthquake and cannot tolerate any damage to NSCs.
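A floor (or ground) response spectrum of the kind discussed above is obtained by sweeping a damped single-degree-of-freedom oscillator over a range of periods and recording its peak response to an acceleration history. The sketch below uses the standard Newmark average-acceleration integrator; it is a generic illustration, not the paper's specific FDS procedure, and the harmonic input signal is an assumption for the demo.

```python
import numpy as np

def pseudo_accel_spectrum(ag, dt, periods, zeta=0.05):
    """Pseudo-acceleration response spectrum of a linear SDOF oscillator,
    integrated with the Newmark average-acceleration method
    (gamma = 1/2, beta = 1/4; unconditionally stable).
    ag: input (floor or ground) acceleration history sampled at dt."""
    Sa = []
    for T in periods:
        wn = 2.0 * np.pi / T
        m, c, k = 1.0, 2.0 * zeta * wn, wn ** 2
        u = v = 0.0
        a = -ag[0] - c * v - k * u                 # initial acceleration
        umax = 0.0
        kh = k + 2.0 * c / dt + 4.0 * m / dt ** 2  # effective stiffness
        for p in -ag[1:]:                          # effective force per step
            ph = p + m * (4*u/dt**2 + 4*v/dt + a) + c * (2*u/dt + v)
            un = ph / kh
            vn = 2.0 * (un - u) / dt - v
            a = 4.0 * (un - u) / dt ** 2 - 4.0 * v / dt - a
            u, v = un, vn
            umax = max(umax, abs(u))
        Sa.append(wn ** 2 * umax)                  # pseudo-acceleration
    return np.array(Sa)

# Demo: 2 Hz harmonic excitation of 0.5 amplitude. For a very stiff
# (short-period) component the spectral ordinate approaches the peak input
# value; near the 0.5 s period (2 Hz) the tuning effect amplifies it.
dt = 0.005
t = np.arange(0.0, 2.0, dt)
ag = 0.5 * np.sin(2.0 * np.pi * 2.0 * t)
periods = np.array([0.02, 0.1, 0.5, 1.0])
Sa = pseudo_accel_spectrum(ag, dt, periods)
print(np.round(Sa, 3))
```

The resonant ordinate dominating the short-period one is exactly the tuning effect the abstract says code equations tend to neglect.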

Keywords: earthquake engineering, operational and functional components (OFCs), operational modal analysis (OMA), seismic assessment and design

Procedia PDF Downloads 236
1552 A World Map of Seabed Sediment Based on 50 Years of Knowledge

Authors: T. Garlan, I. Gabelotaud, S. Lucas, E. Marchès

Abstract:

Production of a global sedimentological seabed map was initiated in 1995 to provide the necessary tool for searches of aircraft and boats lost at sea, to give sedimentary information for nautical charts, and to provide input data for acoustic propagation modelling. This original approach had already been initiated a century ago, when the French hydrographic service and the University of Nancy produced maps of the distribution of marine sediments along the French coasts and then sediment maps of the continental shelves of Europe and North America. The current ocean sediment map presented here was initiated from UNESCO's general map of the deep ocean floor. This map was adapted using a single sediment classification to present all types of sediments: from beaches to the deep seabed, and from glacial deposits to tropical sediments. To allow good visualization and to suit the different applications, only the granularity of the sediments is represented. Published seabed maps are studied; if they are of interest, the nature of the seabed is extracted from them, the sediment classification is transcribed, and the resulting map is integrated into the world map. Data also come from interpretations of multibeam echo sounder (MES) imagery from large hydrographic surveys of the deep ocean. These allow very high-quality mapping of areas that until then had been represented as homogeneous. The third and principal source of data comes from the integration of regional maps produced specifically for this project. These regional maps are produced using all the bathymetric and sedimentary data of a region. This step makes it possible to produce a regional synthesis map, with generalizations where the data are over-precise. 86 regional maps of the Atlantic Ocean, the Mediterranean Sea, and the Indian Ocean have been produced and integrated into the world sedimentary map.
This work is ongoing, with a new digital version released every two years as new maps are integrated. This article describes the choices made in terms of sediment classification, the scale of the source data, and the zoning of quality variability. This map is the final step in a system comprising the Shom Sedimentary Database, enriched by more than one million point and polygon data items, and four series of coastal seabed maps at 1:10,000, 1:50,000, 1:200,000 and 1:1,000,000. This step-by-step approach makes it possible to take into account the progress made in seabed characterization during the last decades. Thus, the arrival of new seafloor classification systems has improved the recent seabed maps, and the compilation of these new maps with those previously published allows a gradual enrichment of the world sedimentary map. However, there is still much work to do to enhance some regions, which are still based on data acquired more than half a century ago.

Keywords: marine sedimentology, seabed map, sediment classification, world ocean

Procedia PDF Downloads 232
1551 Artificial Intelligence and Robotics in the Eye of Private Law with Special Regards to Intellectual Property and Liability Issues

Authors: Barna Arnold Keserű

Abstract:

In the last few years (what many scholars call the big data era), artificial intelligence (hereinafter AI) has received more and more attention from the public and from the different branches of science. What was previously mere science fiction is now starting to become reality. AI and robotics often walk hand in hand, which changes not only business and industrial life but also has a serious impact on the legal system. The author's main research focuses on these impacts in the field of private law, with special regard to liability and intellectual property issues. Many questions arise in these areas in connection with AI and robotics, where the boundaries are not sufficiently clear and different needs are articulated by the different stakeholders. Recognizing the urgent need for action, the Committee on Legal Affairs of the European Parliament adopted a Motion for a European Parliament Resolution, A8-0005/2017 (of January 27th, 2017), in order to make recommendations to the Commission on civil law rules on robotics and AI. This document addresses some crucial uses of AI and/or robotics, e.g., autonomous vehicles, the replacement of human jobs in industry, and smart applications and machines. It aims to give recommendations for the safe and beneficial use of AI and robotics. However, as the document says, there are no legal provisions that specifically apply to robotics or AI in IP law; existing legal regimes and doctrines can be readily applied to robotics, although some aspects appear to call for specific consideration, and the document calls on the Commission to support a horizontal and technologically neutral approach to intellectual property applicable to the various sectors in which robotics could be employed. AI can generate content that may be worth copyright protection, but the question arises: who is the author, and who owns the copyright?
The AI itself cannot be deemed the author, because that would mean it is legally equal to human persons. But there is the programmer who created the basic code of the AI, the undertaking that sells the AI as a product, and the user who gives the inputs to the AI in order to create something new. Or AI-generated content may be so far removed from humans that there is no human author at all, in which case it belongs to the public domain. The same questions can be asked in connection with patents. The research aims to answer these questions within the current legal framework and tries to illuminate future possibilities for adapting these frameworks to socio-economic needs. Here, proper license agreements in the multilevel chain from the programmer to the end user become very important, because AI is intellectual property in itself that creates further intellectual property. This can collide with data-protection and property rules as well. The problems are similar in the field of liability. Different existing forms of liability can be applied when AI or AI-led robotics cause damage, but it is uncertain whether the result complies with economic and developmental interests.

Keywords: artificial intelligence, intellectual property, liability, robotics

Procedia PDF Downloads 203
1550 Effectiveness of N-Acetylcysteine in the Treatment of Adults with Trichotillomania: An Evidenced Based Review

Authors: Teresa Sarmento de Beires, Sofia Padilha, Pedro Arantes, Joana Ribeiro, Andreia Eiras

Abstract:

Background: Trichotillomania is a psychiatric condition that is very challenging to treat, with no first-line medications approved by any medical agency. It is defined as a recurrent compulsive habit of pulling out one's own hair, usually from the scalp and eyebrow area, but it can also affect the eyelashes or any other hair-bearing area. N-acetylcysteine, a glutamate modulator, has been studied as a possible treatment for several psychiatric and neurological disorders, considering its role in attenuating the pathophysiological processes responsible for compulsive behaviors and, therefore, trichotillomania. Objective: This study aims to determine the efficacy of N-acetylcysteine in the treatment of adults with trichotillomania. Methodology: The authors researched guidelines, standards of clinical guidance, systematic reviews, meta-analyses, and randomized clinical trials published in the last 20 years using the MeSH terms "Trichotillomania" and "N-acetylcysteine" in the following databases: PubMed, Cochrane Library, National Guideline Clearinghouse, National Institute for Health and Care Excellence (NICE), Canadian Medical Association Practice Guidelines, and Database of Abstracts of Reviews of Effectiveness (DARE). The Strength of Recommendation Taxonomy (SORT) scale, from American Family Physician, was used to evaluate the level of evidence and assign the strength of recommendation. Results: The research found fifteen articles, among which only three were eligible according to the inclusion criteria: one systematic review and two meta-analyses. There was evidence of a probable beneficial effect of N-acetylcysteine on treatment response and on the reduction of trichotillomania symptom severity in adults, with moderate certainty in the effect estimate. There was no evidence of effectiveness for inositol, antioxidants, naltrexone, or selective serotonin reuptake inhibitors (SSRIs) in the treatment of adults with trichotillomania.
Clomipramine and olanzapine showed potential treatment benefits, with low certainty. N-acetylcysteine had the least severe side-effect profile in adults compared with the other potentially beneficial pharmacological treatments. Conclusion: Evidence points towards the effectiveness of N-acetylcysteine in the treatment of adults with trichotillomania, and it exhibits a good tolerability profile with minimal adverse effects. The authors therefore assign a level of evidence of 2 and a strength of recommendation of B to the prescription of N-acetylcysteine in the treatment of adults suffering from trichotillomania (SORT analysis). Further investigation is needed in order to draw high-quality conclusions from the meta-analyses.

Keywords: trichotillomania, hair pulling, treatment, n-acetylcysteine

Procedia PDF Downloads 102
1549 Bioinformatic Strategies for the Production of Glycoproteins in Algae

Authors: Fadi Saleh, Çığdem Sezer Zhmurov

Abstract:

Biopharmaceuticals represent one of the most rapidly developing fields within biotechnology, and the biological macromolecules produced inside cells have a variety of therapeutic applications. In the past, mammalian cells, especially CHO cells, have been employed in the production of biopharmaceuticals because these cells can achieve human-like post-translational modifications (PTMs). These systems, however, carry apparent disadvantages such as high production costs, vulnerability to contamination, and limitations in scalability. This research focuses on the utilization of microalgae as a bioreactor system for the synthesis of biopharmaceutical glycoproteins in relation to PTMs, particularly N-glycosylation. The research points to a growing interest in microalgae as a potential substitute for more conventional expression systems. The use of microalgae offers a number of advantages, including rapid growth rates, the absence of common human pathogens, controlled scalability in bioreactors, and the capacity to perform some PTMs. Thus, the potential of microalgae to produce recombinant proteins with favorable characteristics makes them a promising platform for the production of biopharmaceuticals. The study focuses on the examination of N-glycosylation pathways across different species of microalgae. This investigation is important because N-glycosylation, the process by which carbohydrate groups are linked to proteins, profoundly influences the stability, activity, and general performance of glycoproteins. Additionally, bioinformatics methodologies are employed to elucidate the genetic pathways implicated in N-glycosylation within microalgae, with the intention of modifying these organisms to produce glycoproteins suitable for human use.
In this way, the present comparative analysis of the N-glycosylation pathways in humans and microalgae can be used to bridge both systems in order to produce biopharmaceuticals with humanized glycosylation profiles in microalgal organisms. The results of the research underline the potential of microalgae to overcome some of the limitations associated with traditional biopharmaceutical production systems. The study may help in the creation of a cost-effective and scalable means of producing high-quality biopharmaceuticals by genetically modifying microalgae to produce glycoproteins with human-compatible N-glycosylation. The biopharmaceutical sector stands to benefit from such a novel, green, and efficient expression platform. This thesis is therefore a thorough investigation into the viability of microalgae as an efficient platform for producing biopharmaceutical glycoproteins. Based on an in-depth bioinformatic analysis of microalgal N-glycosylation pathways, this work sets out a platform for engineering them to produce human-compatible glycoproteins. The findings obtained in this research have significant implications for the biopharmaceutical industry, opening up a new way of developing safer, more efficient, and economically more feasible biopharmaceutical manufacturing platforms.
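A basic building block of the kind of bioinformatic N-glycosylation analysis described above is locating candidate glycosylation sites in a protein sequence. N-glycosylation occurs at the well-known sequon N-X-S/T (where X is any residue except proline); a minimal scanner, on a toy peptide rather than any sequence from the study, could look like this:

```python
import re

def find_sequons(protein):
    """Return 1-based positions of N-glycosylation sequons (N-X-S/T,
    X != P). The lookahead keeps overlapping sequons."""
    return [m.start() + 1 for m in re.finditer(r'N(?=[^P][ST])', protein)]

# Toy peptide: the first N starts a valid sequon (N-G-S), the second is
# blocked by proline (N-P-S), and the third is valid (N-V-T).
print(find_sequons("NGSANPSQNVT"))  # [1, 9]
```

Comparing the sequons actually glycosylated by a microalgal host against those glycosylated in human cells is one simple way to quantify how "humanized" an expression platform's glycosylation profile is.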

Keywords: microalgae, glycoproteins, post-translational modification, genome

Procedia PDF Downloads 24
1548 Trauma Inside and Out: A Descriptive Cross-Sectional Study of Family, Community and Psychological Wellbeing amongst Pediatric Victims of Interpersonal Violence

Authors: Mary Bernardin, Margie Batek, Joseph Moen, David Schnadower

Abstract:

Background: Exposure to violence not only has a negative psychological impact on children but is also a risk factor for children becoming recurrent victims of violence. However, little is known regarding the degree to which child victims of violence are exposed to trauma at home and in their community, or its association with specific psychological diagnoses. Objective: The aim of this study was to characterize in depth the family, community and psychological wellness of pediatric victims of interpersonal violence. Methods: As standard of care at the Saint Louis Children’s Hospital pediatric emergency department (ED), social workers perform in-depth interviews with all children presenting after violent interpersonal encounters. In this retrospective cross-sectional study, we collected data from social work interviews on family structure, exposure to violence in the community and the home, as well as history of psychological diagnoses among children aged 8-19 years who presented to the ED for injuries related to interpersonal violence from 2014-2017. Results: A total of 407 patients presenting to the ED for a violent interpersonal encounter were analyzed. The average age of the studied youths was 14.7 years (SD 2.5); 97.5% were African American and 66.6% were male. 67.8% described their home as having a nonnuclear family structure, 50% of whom reported living with a single mother. Of the 21% who reported having incarcerated family members, 56.3% reported their father being incarcerated, 15% their mother, and 12.5% multiple family members. 11.3% reported witnessing domestic violence in their home. 12.8% of youths reported some form of child abuse. The type of child abuse was not specified in 29.3% of cases, but physical abuse (32.8%) followed by sexual abuse (22.4%) were the most commonly reported. 14.5% had a history of placement in foster care and/or adoption. 
64% reported having witnessed violence in their community. 30.2% reported having lost friends or family to violence; of those, 26.4% reported the loss of a cousin, 18.9% a friend, 16% their father, and 12.3% their brother. Of the 22.4% of youths with at least one psychiatric diagnosis, 48.4% had multiple diagnoses, the most common of which were ADD/ADHD (62.6%), followed by depression (31.9%), bipolar disorder (27.5%) and anxiety (15.4%). Conclusions: A remarkable proportion of children presenting to EDs due to interpersonal violence have a history of exposure to instability and violence in their homes and communities. Additionally, psychological diagnoses are frequent among pediatric victims of violence. More research is needed to better understand the association between trauma exposure, psychological health and violent victimization among children.

Keywords: community violence, emergency department, pediatric interpersonal violence, pediatric trauma, psychological effects of trauma

Procedia PDF Downloads 236
1547 Autophagy in the Midgut Epithelium of Spodoptera exigua Hübner (Lepidoptera: Noctuidae) Larvae Exposed to Various Cadmium Concentrations - 6-Generational Exposure

Authors: Magdalena Maria Rost-Roszkowska, Alina Chachulska-Żymełka, Monika Tarnawska, Maria Augustyniak, Alina Kafel, Agnieszka Babczyńska

Abstract:

Autophagy is a form of cell remodeling in which organelles are internalized into vacuoles called autophagosomes. Autophagosomes are targeted by lysosomes, causing digestion of cytoplasmic components; eventually, this can lead to the death of the entire cell. However, in response to several stress factors, e.g., starvation or heavy metals such as cadmium, autophagy can also act as a pro-survival mechanism, protecting the cell against death. The main aim of our studies was to check whether the autophagy that can appear in the midgut epithelium after Cd treatment becomes fixed during the following generations of insects. As a model animal, we chose the beet armyworm Spodoptera exigua Hübner (Lepidoptera: Noctuidae), a well-known polyphagous pest of many vegetable crops. We analyzed specimens at the final (5th) larval stage because of its hyperphagia, which results in the assimilation of a large amount of cadmium. The culture consisted of two strains, both maintained for 146 generations: a control strain (K) fed a standard diet, and a cadmium strain (Cd) fed the standard diet supplemented with cadmium (44 mg Cd per kg of dry weight of food). In addition, control insects were transferred to Cd-supplemented diets (5, 10, 20 or 44 mg Cd per kg of dry weight of food), yielding the Cd1, Cd2, Cd3 and KCd experimental groups. Autophagy was examined using transmission electron microscopy. During this process, degenerated organelles were surrounded by a membranous phagophore and enclosed in an autophagosome; after the autophagosome fused with a lysosome, an autolysosome was formed and digestion of the organelles began. During the 1st year of the experiment, we analyzed specimens of 6 generations in all the lines. 
The intensity of autophagy depended significantly on the generation, tissue and cadmium concentration in the insect rearing medium. In generations I through VI, the intensity of autophagy in the midguts of the cadmium-exposed strains decreased gradually in the following order: Cd1, Cd2, Cd3 and KCd. The highest proportion of cells with autophagy was observed in Cd1 and Cd2; in the exposed groups, it remained higher than the percentage of cells with autophagy in the same tissues of insects from the control and multigenerational cadmium strains. This may indicate that during 6-generational exposure to various Cd concentrations, a stable tolerance to cadmium was not established. The study was financed by the National Science Centre Poland, grant no 2016/21/B/NZ8/00831.

Keywords: autophagy, cell death, digestive system, ultrastructure

Procedia PDF Downloads 233
1546 Architectural Design as Knowledge Production: A Comparative Science and Technology Study of Design Teaching and Research at Different Architecture Schools

Authors: Kim Norgaard Helmersen, Jan Silberberger

Abstract:

Questions of style and reproducibility in relation to architectural design are not only continuously debated; the very concepts can seem quite provocative to architects, who like to think of architectural design as depending on intuition, ideas, and individual personalities. This standpoint, dominant in architectural discourse, is challenged in the present paper, which presents early findings from a comparative STS-inspired research study of architectural design teaching and research at different architecture schools in varying national contexts. Within a philosophy of science framework, the paper reflects on empirical observations of design teaching at the Royal Academy of Fine Arts in Copenhagen and presents a tentative theoretical framework for the ongoing research project. The framework suggests that architecture, as a field of knowledge production, is dominated mainly by three epistemological positions, which will be presented and discussed. Besides serving as a loosely structured framework for future data analysis, the proposed framework brings forth the argument that architecture can be roughly divided into different schools of thought, like the traditional science disciplines. Without reducing the complexity of the discipline, describing its main intellectual positions should prove fruitful for the future development of architecture as a theoretical discipline, moving architectural critique beyond discussions of taste preferences. Unlike traditional science disciplines, architecture lacks a community-wide, shared pool of codified references; architects instead reference art projects, buildings, and famous architects when positioning their standpoints. While these inscriptions work as an architectural reference system, comparable to codified theories in the academic writing of traditional research, they are not used systematically in the same way. 
As a result, architectural critique is often reduced to discussions of taste and subjectivity rather than epistemological positioning. Architects are often criticized as judges of taste and accused of a rationality rooted in culturally relative aesthetic concepts of taste closely linked to questions of style, but arguably their supposedly subjective reasoning in fact forms part of larger systems of thought. Putting architectural ‘styles’ under the loupe and tracing their philosophical roots can potentially open up a black box in architectural theory. Besides ascertaining and recognizing the existence of specific ‘styles’, and thereby schools of thought, in current architectural discourse, the study could potentially also point to some mutations of the conventional, something actually ‘new’, of potentially high value for architectural design education.

Keywords: architectural theory, design research, science and technology studies (STS), sociology of architecture

Procedia PDF Downloads 130
1545 Pulsed-Wave Doppler Ultrasonographic Assessment of the Maximum Blood Velocity in Common Carotid Artery in Horses after Administration of Ketamine and Acepromazine

Authors: Saman Ahani, Aboozar Dehghan, Roham Vali, Hamid Salehian, Amin Ebrahimi

Abstract:

Pulsed-wave (PW) Doppler ultrasonography is a non-invasive, relatively accurate imaging technique that can measure blood velocity. Imaging can be performed on the common carotid artery, one of the main vessels supplying blood to vital organs. In horses, factors such as susceptibility to depression of the cardiovascular system and their large muscle mass render them vulnerable to changes in blood velocity. One of the most important factors causing such changes is the administration of anesthetic drugs, including ketamine and acepromazine. Thus, in this study, the pulsed-wave Doppler technique was used to assess the highest blood velocity in the common carotid artery following administration of ketamine and acepromazine. Six male and six female healthy Kurdish horses weighing 351 ± 46 kg (mean ± SD) and aged 9.2 ± 1.7 years (mean ± SD) were housed under animal welfare guidelines. After fasting for six hours, the normal blood flow velocity in the common carotid artery was measured using a pulsed-wave Doppler ultrasonography machine (BK Medical, Denmark) and a high-frequency linear transducer (12 MHz), without applying any sedative drugs, as a control group. The same procedure was repeated after each individual received the following medications: 1.1 and 2.2 mg/kg ketamine (Pfizer, USA), and 0.5 and 1 mg/kg acepromazine (RACEHORSE MEDS, Ukraine), with an interval of 21 days between the administration of each dose and/or drug. The ultrasonographic study was performed five (T5) and fifteen (T15) minutes after each dose was injected intravenously. Statistical analysis was performed using SPSS software version 22 for Windows, and a P value less than 0.05 was considered statistically significant. 
Five minutes after administration of ketamine (1.1 and 2.2 mg/kg) in both male and female horses, the blood velocity decreased to 38.44 and 34.53 cm/s in males and 39.06 and 34.10 cm/s in females, compared with the control values (39.59 and 40.39 cm/s in males and females, respectively), while administration of 0.5 mg/kg acepromazine led to a significant rise (73.15 and 55.80 cm/s in males and females, respectively) (p<0.05). The most drastic change in blood velocity, regardless of sex, was thus produced by the latter dose and drug. For both drugs and both sexes, the higher dose produced a lower blood velocity than the lower dose of the same drug. In all experiments in this study, the blood velocity had approached its normal value by T15. In another study comparing blood velocity changes caused by ketamine and acepromazine in the femoral arteries, the most drastic changes were attributed to ketamine; in the present experiment, however, the maximum blood velocity was observed following administration of acepromazine via the common carotid artery. Further experiments with the same medications are therefore suggested, using pulsed-wave Doppler to measure blood velocity changes in both the femoral and common carotid arteries simultaneously.
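The reported velocity shifts can be summarized as signed percent changes relative to the control values. This is a minimal sketch using only the T5 figures quoted in the abstract; it is a summary calculation, not the SPSS analysis performed in the study.

```python
# Percent change in peak carotid blood velocity at T5 relative to control,
# using the velocities (cm/s) quoted in the abstract.
control = {"male": 39.59, "female": 40.39}

# (drug, dose in mg/kg) -> measured velocity at T5, by sex
t5_velocity = {
    ("ketamine", 1.1): {"male": 38.44, "female": 39.06},
    ("ketamine", 2.2): {"male": 34.53, "female": 34.10},
    ("acepromazine", 0.5): {"male": 73.15, "female": 55.80},
}

def pct_change(v, baseline):
    """Signed percent change of a velocity relative to the control value."""
    return 100.0 * (v - baseline) / baseline

changes = {
    key: {sex: round(pct_change(v, control[sex]), 1) for sex, v in by_sex.items()}
    for key, by_sex in t5_velocity.items()
}
```

The computed table makes the pattern in the text explicit: both ketamine doses shift the velocity downward, while 0.5 mg/kg acepromazine produces by far the largest (upward) change.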

Keywords: Acepromazine, common carotid artery, horse, ketamine, pulsed-wave doppler ultrasonography

Procedia PDF Downloads 128
1544 Nonlinear Response of Tall Reinforced Concrete Shear Wall Buildings under Wind Loads

Authors: Mahtab Abdollahi Sarvi, Siamak Epackachi, Ali Imanpour

Abstract:

Reinforced concrete shear walls are commonly used as the lateral load-resisting system of mid- to high-rise office or residential buildings around the world. The design of such systems is often governed by wind rather than seismic effects, in particular in low-to-moderate seismic regions. The current design philosophy in the majority of building codes requires the elastic response of lateral load-resisting systems, including reinforced concrete shear walls, when subjected to the rare design wind load, resulting in significantly large wall sections needed to meet strength requirements and drift limits. The latter can strongly influence the design of the upper stories due to the stringent drift limits specified by building codes, adding substantial cost to the construction of the wall. However, such walls may offer limited to moderate over-strength and ductility owing to their large reserve capacity, provided that they are designed and detailed to develop such over-strength and ductility under extreme wind loads. This would significantly reduce construction time and costs while maintaining structural integrity under gravity loads and both frequent and less frequent wind events. This paper investigates the over-strength and ductility capacity of several hypothetical office buildings located in Edmonton, Canada, drawing on earthquake design philosophy. The selected models are 10- to 25-story buildings with three types of reinforced concrete shear wall configurations: rectangular, barbell, and flanged. The buildings are designed according to the National Building Code of Canada. Fiber-based numerical models of the walls are then developed in Perform 3D, and their nonlinear lateral behavior is evaluated by nonlinear static (pushover) analysis. Ductility and over-strength of the structures are obtained from the results of the pushover analyses. 
The results confirmed the moderate nonlinear capacity of reinforced concrete shear walls under extreme wind loads, even as the lateral displacements of the walls exceed the serviceability limit states defined in the ASCE Prestandard for Performance-Based Wind Design. The results indicate that the limited nonlinear response observed in reinforced concrete shear walls can be exploited to economize the design of such systems under wind loads.
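From a pushover curve, over-strength and ductility can be extracted as simple ratios of peak strength to design strength and of ultimate to yield displacement. The sketch below uses an illustrative (displacement, base shear) curve and a deliberately simplified definition of ultimate displacement; it is not the study's Perform 3D workflow or a code-specified bilinear idealization.

```python
# Over-strength (omega) and displacement ductility (mu) from a pushover
# curve. The curve points, design shear, and yield displacement below are
# illustrative placeholders, and "ultimate" is simplified to the last
# point on the curve (codes typically use e.g. an 80%-of-peak criterion).
def overstrength_and_ductility(curve, v_design, d_yield):
    """curve: list of (displacement, base_shear) pairs, monotonic in displacement."""
    v_max = max(v for _, v in curve)   # peak base shear reached
    d_ult = curve[-1][0]               # simplified ultimate displacement
    omega = v_max / v_design           # over-strength factor
    mu = d_ult / d_yield               # displacement ductility
    return omega, mu

pushover = [(0.00, 0.0), (0.05, 800.0), (0.10, 1150.0),
            (0.20, 1200.0), (0.30, 1180.0)]   # (m, kN), illustrative
omega, mu = overstrength_and_ductility(pushover, v_design=900.0, d_yield=0.06)
```

These two ratios are the quantities the pushover analyses in the study are used to estimate for each wall configuration.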

Keywords: concrete shear wall, high-rise buildings, nonlinear static analysis, response modification factor, wind load

Procedia PDF Downloads 107
1543 Comprehensive Analysis of Electrohysterography Signal Features in Term and Preterm Labor

Authors: Zhihui Liu, Dongmei Hao, Qian Qiu, Yang An, Lin Yang, Song Zhang, Yimin Yang, Xuwen Li, Dingchang Zheng

Abstract:

Premature birth, defined as birth before 37 completed weeks of gestation, is a leading cause of neonatal morbidity and mortality and has long-term adverse consequences for health. It has recently been reported that the worldwide preterm birth rate is around 10%. Existing measurement techniques for diagnosing preterm delivery include the tocodynamometer, ultrasound and fetal fibronectin; however, they are subjective or suffer from high measurement variability and inaccurate diagnosis and prediction of preterm labor. Electrohysterography (EHG), based on recording uterine electrical activity with electrodes attached to the maternal abdomen, is a promising method to assess uterine activity and diagnose preterm labor. The purpose of this study is to analyze the differences in EHG signal features between term and preterm labor. A freely accessible database was used, containing 300 signals acquired from two groups of pregnant women who delivered at term (262 cases) and preterm (38 cases). From these, EHG signals from 38 term and 38 preterm labors were preprocessed with band-pass Butterworth filters of 0.08-4 Hz. EHG signal features were then extracted, comprising classical time-domain descriptors including root mean square and zero-crossing number, spectral parameters including peak frequency, mean frequency and median frequency, wavelet packet coefficients, autoregression (AR) model coefficients, and nonlinear measures including the maximal Lyapunov exponent, sample entropy and correlation dimension. Their statistical significance for distinguishing the two groups of recordings was assessed. The results showed that the mean frequency of preterm labor was significantly smaller than that of term labor (p < 0.05). Five AR model coefficients showed significant differences between term and preterm labor. The maximal Lyapunov exponent of early preterm recordings (time of recording < the 26th week of gestation) was significantly smaller than that of early term recordings. 
The sample entropy of late preterm recordings (time of recording > the 26th week of gestation) was significantly smaller than that of late term recordings. There was no significant difference in the other features between the term and preterm labor groups. Any future work on classification should therefore focus on combining multiple techniques, with the mean frequency, AR coefficients, maximal Lyapunov exponent and sample entropy among the prime candidates. Even if these methods are not yet ready for clinical practice, they provide the most promising indicators of preterm labor.
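Three of the features listed above (root mean square, zero-crossing count, and power-weighted mean frequency) can be sketched with the standard library alone. The sampling rate and test signal below are illustrative, not taken from the database used in the study, and the naive DFT is for exposition only.

```python
import math

def rms(x):
    """Root mean square of a sampled signal."""
    return math.sqrt(sum(v * v for v in x) / len(x))

def zero_crossings(x):
    """Number of sign changes between consecutive samples."""
    return sum(1 for a, b in zip(x, x[1:]) if a * b < 0)

def mean_frequency(x, fs):
    """Power-weighted mean frequency via a naive O(n^2) DFT (illustration only)."""
    n = len(x)
    freqs, power = [], []
    for k in range(1, n // 2):
        re = sum(x[t] * math.cos(2 * math.pi * k * t / n) for t in range(n))
        im = sum(x[t] * math.sin(2 * math.pi * k * t / n) for t in range(n))
        freqs.append(k * fs / n)
        power.append(re * re + im * im)
    return sum(f * p for f, p in zip(freqs, power)) / sum(power)

fs = 20.0  # Hz; an assumed sampling rate, not the database's
signal = [math.sin(2 * math.pi * 0.5 * i / fs) for i in range(200)]  # 0.5 Hz tone
```

On the 0.5 Hz test tone, the mean frequency lands on 0.5 Hz and the RMS on about 0.707, which is a quick sanity check before applying such features to real EHG recordings.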

Keywords: electrohysterogram, feature, preterm labor, term labor

Procedia PDF Downloads 571
1542 Nurse-Reported Perceptions of Medication Safety in Private Hospitals in Gauteng Province

Authors: Madre Paarlber, Alwiena Blignaut

Abstract:

Background: Medication administration errors remain a global patient safety problem targeted by the WHO (World Health Organization), yet research on this matter is sparse within the South African context. Objective: The aim was to explore and describe nurses' (medication administrators) perceptions regarding medication administration safety-related culture, incidence, causes, and reporting in the Gauteng Province of South Africa, and to determine any relationships between the perceived variables concerned with medication safety (safety culture, incidences, causes, reporting of incidences, and reasons for non-reporting). Method: A quantitative research design was used in which self-administered online surveys were sent to 768 nurses (medication administrators), of whom 217 responded (response rate 28.26%). The survey instrument was synthesised from the Agency for Healthcare Research and Quality (AHRQ) Hospital Survey on Patient Safety Culture, the Registered Nurse Forecasting (RN4CAST) survey, a list prepared from a systematic review aimed at generating a comprehensive inventory of medication administration error causes, and Wakefield's Medication Administration Error Reporting Survey. Exploratory and confirmatory factor analyses were used to determine the validity and reliability of the survey. Descriptive and inferential statistics were used to analyse the quantitative data; relationships and correlations between items, subscales and biographic data were identified using Spearman's rank correlations, t-tests and ANOVAs (analysis of variance). Results: Teamwork within units was deemed satisfactory, but punitive responses to errors were accentuated. Working in "crisis mode", concerns regarding the recording of mistakes, and long working hours were disclosed as impacting patient safety. Overall medication safety was graded mostly positively. 
Work overload, high patient-nurse ratios, and inadequate staffing were implicated as error-inducing. Medication administration errors were reported regularly. Fear of, and administrative responses to, errors contributed to non-reporting, and the reasons given for non-reporting were related to the absence of a non-punitive safety culture. Conclusions: Improving medication administration safety is contingent on fostering a non-punitive safety culture within units. Anonymous medication error reporting systems and auditing of nurses' workload are recommended to improve medication safety within Gauteng Province private hospitals.
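A Spearman's rank correlation of the kind used to relate survey subscales can be sketched in a few lines. The implementation below (average ranks for ties, then Pearson correlation of the ranks) is a generic illustration, not the SPSS procedure used in the study, and the data passed to it would be ordinal subscale scores.

```python
def rank(xs):
    """Assign 1-based ranks, averaging ranks across tied values."""
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    ranks = [0.0] * len(xs)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and xs[order[j + 1]] == xs[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1          # average rank of the tied run
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    return ranks

def spearman(x, y):
    """Spearman's rho: Pearson correlation computed on the ranks."""
    rx, ry = rank(x), rank(y)
    mx, my = sum(rx) / len(rx), sum(ry) / len(ry)
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = sum((a - mx) ** 2 for a in rx) ** 0.5
    sy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (sx * sy)
```

Because it works on ranks, rho is appropriate for the Likert-type subscale scores such surveys produce, where Pearson's r would assume interval-level measurement.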

Keywords: incidence, medication administration errors, medication safety, reporting, safety culture

Procedia PDF Downloads 54
1541 Material Use and Life Cycle GHG Emissions of Different Electrification Options for Long-Haul Trucks

Authors: Nafisa Mahbub, Hajo Ribberink

Abstract:

Electrification of long-haul trucks has been discussed as a potential strategy for decarbonization. These trucks will require large batteries because of their weight and long daily driving distances. Around 245 million battery electric vehicles are predicted to be on the road by the year 2035. This huge increase in the number of electric vehicles (EVs) will require intensive mining operations for metals and other materials to manufacture millions of batteries. These operations will add significant environmental burdens, and there is a significant risk that the mining sector will not be able to meet the demand for battery materials, leading to higher prices. Since the battery is the most expensive component of an EV, technologies that enable electrification with smaller battery sizes have substantial potential to reduce material usage and the associated environmental and cost burdens. One of these technologies is the 'electrified road' (eroad), where vehicles receive power while driving, for instance through an overhead catenary (OC) wire (like trolleybuses and electric trains), through wireless (inductive) chargers embedded in the road, or by connecting to an electrified rail in or on the road surface. This study assessed the total material use and associated life cycle GHG emissions of two types of eroads (overhead catenary and in-road wireless charging) for long-haul trucks in Canada and compared them to electrification using stationary plug-in fast charging. As different electrification technologies require different amounts of materials for charging infrastructure and for the truck batteries, the study included the contributions of both to the total material use. The study developed a bottom-up model comparing the three charging scenarios: plug-in fast chargers, overhead catenary, and in-road wireless charging. 
The investigated materials for the charging technology and batteries were copper (Cu), steel (Fe), aluminium (Al), and lithium (Li). For the plug-in fast charging technology, scenarios ranging from overnight charging (350 kW) to megawatt (MW) charging (2 MW) were investigated. A 500 km stretch of highway (one lane of in-road charging per direction) was considered to estimate the material use for the overhead catenary and inductive charging technologies. The study considered trucks needing an 800 kWh battery under the plug-in charger scenario but only a 200 kWh battery under the OC and inductive charging scenarios. Results showed that, overall, the inductive charging scenario has the lowest material use, followed by the OC and plug-in charger scenarios, respectively. Material use for the OC and plug-in charger scenarios was 50-70% higher than for the inductive charging scenario for the overall system, including the charging infrastructure and battery. The life cycle GHG emissions from the construction and installation of the charging technology materials were also investigated.
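A bottom-up tally of this kind reduces to multiplying per-unit material intensities by fleet and infrastructure quantities and summing. The sketch below is purely illustrative: the material intensities, infrastructure masses, and fleet size are placeholder assumptions, not the study's data.

```python
# Illustrative bottom-up tally of material demand for two charging
# scenarios. All per-unit masses below are placeholder assumptions,
# not figures from the study.
MATERIALS = ("Cu", "Fe", "Al", "Li")

def scenario_total(battery_kwh, kg_per_kwh, infra_kg, n_trucks):
    """Total mass per material (kg): fleet batteries + charging infrastructure."""
    return {
        m: n_trucks * battery_kwh * kg_per_kwh[m] + infra_kg.get(m, 0.0)
        for m in MATERIALS
    }

# Assumed battery material intensities (kg per kWh of capacity):
kg_per_kwh = {"Cu": 0.7, "Fe": 0.4, "Al": 0.5, "Li": 0.1}

# Plug-in: big 800 kWh batteries, modest infrastructure (assumed masses).
plug_in = scenario_total(800, kg_per_kwh, {"Cu": 2.0e5, "Fe": 4.0e5}, n_trucks=1000)
# Catenary: small 200 kWh batteries, heavy roadside infrastructure (assumed).
catenary = scenario_total(200, kg_per_kwh, {"Cu": 1.5e6, "Fe": 3.0e6}, n_trucks=1000)
```

The structure makes the study's trade-off explicit: eroad scenarios shift material demand from per-truck batteries (notably lithium) to shared roadside infrastructure (notably copper and steel), so the comparison hinges on fleet size per kilometre of electrified road.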

Keywords: charging technology, eroad, GHG emissions, material use, overhead catenary, plug in charger

Procedia PDF Downloads 51
1540 Hydrological Challenges and Solutions in the Nashik Region: A Multi Tracer and Geochemistry Approach to Groundwater Management

Authors: Gokul Prasad, Pennan Chinnasamy

Abstract:

The degradation of groundwater resources, attributed to factors such as excessive abstraction and contamination, has emerged as a global concern. This study examines the stable isotopes of water (δ18O and δ2H) in a hard-rock aquifer situated in the Upper Godavari watershed, an agriculturally rich region in India underlain by basalt. The high groundwater draft (> 90%) poses significant risks; understanding groundwater sources, flow patterns, and their environmental impacts is pivotal for researchers and water managers. The region has faced five droughts in the past 20 years, four of which are categorized as medium. Recharge rates are variable and make only a minimal contribution to groundwater. The rainfall pattern shows vast variability, with the region receiving seasonal monsoon rainfall for just four months and minimal rainfall for the rest of the year. This research closely monitored monsoon precipitation inputs and examined spatial and temporal fluctuations in δ18O and δ2H in both groundwater and precipitation. By discerning individual recharge events during monsoons, it became possible to identify periods when evaporation led to groundwater quality deterioration, characterized by elevated salinity and stable isotope values in the return flow. The locally derived meteoric water line (LMWL) (δ2H = 6.72 * δ18O + 1.53, r² = 0.6) provided valuable insights into the groundwater system. The leftward shift of the Nashik LMWL relative to the global meteoric water line (GMWL) indicated groundwater evaporation (-33 ‰), supported by spatial variations in electrical conductivity (EC) data. Groundwater in the eastern and northern watershed areas exhibited higher salinity (> 3000 µS/cm), covering > 40% of the area, compared to the western and southern regions, owing to geological disparities (alluvium vs. basalt). The findings emphasize meteoric precipitation as the primary groundwater source in the watershed. 
However, spatial variations in isotope values and chemical constituents indicate other contributing factors, including evaporation, the type of groundwater source, and natural or anthropogenic (specifically agricultural and industrial) contaminants. The study therefore recommends focused hydrogeochemical and isotope analysis in areas with strong agricultural and industrial influence to develop holistic groundwater management plans that protect the quantity and quality of the groundwater aquifers.
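The LMWL quoted above is an ordinary least-squares fit of δ2H against δ18O, and evaporative enrichment is commonly screened via deuterium excess (d-excess = δ2H - 8 * δ18O). The sketch below shows both calculations; the sample δ-values are synthetic points placed on the reported Nashik line, not measurements from the study.

```python
# Least-squares fit of a local meteoric water line (LMWL) and the deuterium
# excess used to screen for evaporative enrichment. The delta values in the
# usage example are synthetic, not measurements from the study.
def lmwl_fit(d18o, d2h):
    """Return (slope, intercept) of the ordinary least-squares line d2h vs d18o."""
    n = len(d18o)
    mx, my = sum(d18o) / n, sum(d2h) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(d18o, d2h))
             / sum((x - mx) ** 2 for x in d18o))
    return slope, my - slope * mx

def d_excess(d18o, d2h):
    """d-excess = d2H - 8 * d18O (per mil); low values suggest evaporation."""
    return d2h - 8.0 * d18o

# Synthetic samples lying exactly on the reported Nashik LMWL:
d18o_samples = [-8.0, -6.0, -4.0, -2.0]
d2h_samples = [6.72 * x + 1.53 for x in d18o_samples]
slope, intercept = lmwl_fit(d18o_samples, d2h_samples)
```

An LMWL slope well below the GMWL slope of 8, as reported here (6.72), together with low d-excess values, is the signature of the evaporative enrichment the study describes.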

Keywords: groundwater quality, stable isotopes, salinity, groundwater management, hard-rock aquifer

Procedia PDF Downloads 47