520 Security Issues in Long Term Evolution-Based Vehicle-To-Everything Communication Networks
Authors: Mujahid Muhammad, Paul Kearney, Adel Aneiba
Abstract:
The ability for vehicles to communicate with other vehicles (V2V), the physical (V2I) and network (V2N) infrastructures, pedestrians (V2P), etc. – collectively known as V2X (Vehicle to Everything) – will enable a broad and growing set of applications and services within the intelligent transport domain for improving road safety, alleviating traffic congestion and supporting autonomous driving. The telecommunication research and industry communities and standardization bodies (notably 3GPP) have finally approved, in Release 14, cellular communications connectivity to support V2X communication (known as LTE-V2X). The LTE-V2X system will combine simultaneous connectivity across existing LTE network infrastructures via the LTE-Uu interface and direct device-to-device (D2D) communications. In order for V2X services to function effectively, a robust security mechanism is needed to ensure legal and safe interaction among authenticated V2X entities in the LTE-based V2X architecture. The characteristics of vehicular networks, and the nature of most V2X applications, which involve human safety, make it vital to protect V2X messages from attacks that can result in catastrophically wrong decisions/actions, including ones affecting road safety. Attack vectors include impersonation, modification, masquerading, replay, man-in-the-middle (MitM), and Sybil attacks. In this paper, we focus our attention on LTE-based V2X security and access control mechanisms. The current LTE-A security framework provides its own access authentication scheme, the AKA protocol, for mutual authentication and other essential cryptographic operations between UEs and the network. V2N systems can leverage this protocol to achieve mutual authentication between vehicles and the mobile core network. However, this protocol experiences technical challenges, such as high signaling overhead, lack of synchronization, handover delay and potential control plane signaling overloads, as well as privacy preservation issues, and so cannot satisfy the security requirements of the majority of LTE-based V2X services. This paper examines these challenges and points to possible ways by which they can be addressed. One possible solution is the implementation of a distributed peer-to-peer LTE security mechanism based on the Bitcoin/Namecoin framework, to allow for security operations with minimal overhead cost, which is desirable for V2X services. The proposed architecture can ensure fast, secure and robust V2X services under the LTE network while meeting V2X security requirements.
Keywords: authentication, long term evolution, security, vehicle-to-everything
Procedia PDF Downloads 167
519 Serum Sickness-Like Reaction to D-Mannose Supplement
Authors: Emma Plante, Charles Ekwunwa, Diego Illanes
Abstract:
Introduction: Serum Sickness-Like Reaction (SSLR) is an inflammatory immune response characterized by a rash, polyarthralgias, and fever. SSLR usually occurs in response to a new medication (most commonly antibiotics, anticonvulsants, or anti-inflammatory agents) and is believed to involve the formation of drug-specific immune complexes. Here we present a case of a 16-year-old female patient who developed an SSLR in response to the D-mannose-containing over-the-counter supplement, Uqora, used to promote bladder health. Methodology: The methodology for this study included a thorough literature search for other cases of SSLR associated with D-mannose-containing products. Data collection was performed through a review of the patient’s medical record, including history, physical examination, relevant laboratory results, and treatment plan. Findings: A 16-year-old female with a history of overactive bladder and anemia presented with a diffuse urticarial rash, headaches, joint pain, and swelling for three days. Her medications included oral contraceptive pills, iron, mirabegron, Uqora, and a probiotic. Physical examination revealed a diffuse urticarial rash, and her musculoskeletal exam revealed swelling and tenderness in her wrists. Her CBC, basic metabolic panel, liver function panel, Lyme titers, and urinalysis were all within normal limits. The patient was referred to an allergist, who diagnosed her with SSLR. All medications were discontinued, and she was treated with a 7-day course of prednisone and cetirizine. Her symptoms resolved, and her medications were slowly resumed sequentially over several months. However, Uqora triggered a recurrence of her symptoms, and it was identified as the culprit medication. Consequently, Uqora was permanently discontinued, and the patient has remained symptom-free. Conclusion: This case report describes the first documented case of SSLR caused by Uqora (active ingredient D-mannose). D-mannose is a monosaccharide found in many plants and fruits, and it is commonly used to prevent urinary tract infections. While the clinical features and timeline in this case were typical of SSLR, Uqora as the trigger was highly unusual. Clinicians should be aware of the diverse triggers of SSLR and of the importance of prompt identification and management to enhance patient safety. It is possible that D-mannose was not the trigger, and further research is necessary to better understand the potential therapeutic applications of D-mannose, as well as its potential risks and interactions.
Keywords: serum sickness-like reaction, d-mannose, hypersensitivity reaction, urticaria
Procedia PDF Downloads 94
518 The Markers -mm and dämmo in Amharic: Developmental Approach
Authors: Hayat Omar
Abstract:
Languages provide speakers with a wide range of linguistic units to organize and deliver information. There are several ways to verbally express mental representations of events. According to the linguistic tools they have acquired, speakers select the one that produces the greatest communicative effect to convey their message. Our study focuses on two markers, -mm and dämmo, in Amharic (an Ethiopian Semitic language). Our aim is to examine, from a developmental perspective, how they are used by speakers. We seek to distinguish the communicative and pragmatic functions indicated by means of these markers. To do so, we created a corpus of sixty narrative productions by Amharic speakers aged 5-6, 7-8 and 10-12 years, as well as adults. The experimental material we used to collect our data is a textless picture series, 'Frog, Where Are You?'. Although -mm and dämmo are each used in specific contexts, they are sometimes analyzed as being interchangeable. The suffix -mm is complex and multifunctional. It marks the end of the negative verbal structure, it is found in the relative structure of the imperfect, it creates new words such as adverbials or pronouns, and it also serves to coordinate words and sentences and to mark the link between macro-propositions within a larger textual unit. -mm has been analyzed as a marker of insistence, a topic shift marker, an element of concatenation, a contrastive focus marker, and a 'bisyndetic' coordinator. On the other hand, dämmo has a more limited function and has not attracted the attention of many authors. The only approach we could find analyzes it as a 'monosyndetic' coordinator. The paralleling of these two elements made it possible to understand their distinctive functions and refine their description. When it comes to marking a referent, the choice of -mm or dämmo is not neutral, depending on whether the tagged argument is newly introduced, maintained, promoted or reintroduced. The presence of these morphemes signals the inter-phrastic link. The information is seized by anaphora or presupposition: -mm points upstream while dämmo points downstream; the latter requires new information. The speaker uses -mm or dämmo according to what he assumes to be known to his interlocutors. The results show that -mm and dämmo, although all the speakers use them both, do not always have the same scope for every speaker and vary with age. dämmo is mainly used to mark a contrastive topic to signal the concomitance of events. It is more commonly used in young children’s narratives (F(3,56) = 3.82, p < .01). Some values of -mm (additive) are acquired very early, while others appear rather late and increase with age (F(3,56) = 3.2, p < .03). The difficulty is due not only to its synthetic structure but primarily to the fact that it is multi-purpose and requires memory work. It highlights the constituent on which it operates to clarify how the message should be interpreted.
Keywords: acquisition, cohesion, connection, contrastive topic, contrastive focus, discourse marker, pragmatics
Procedia PDF Downloads 134
517 Real-Time Quantitative Polymerase Chain Reaction Assay for the Detection of microRNAs Using Bi-Directional Extension Sequences
Authors: Kyung Jin Kim, Jiwon Kwak, Jae-Hoon Lee, Soo Suk Lee
Abstract:
MicroRNAs (miRNA) are a class of endogenous, single-stranded, small, non-protein-coding RNA molecules, typically 20-25 nucleotides long. They are thought to regulate the expression of a broad range of other genes by binding to the 3’-untranslated regions (3’-UTRs) of specific mRNAs. The detection of miRNAs is very important for understanding the function of these molecules and for the diagnosis of a variety of human diseases. However, detection of miRNAs is very challenging because of their short length and the high sequence similarities within miRNA families. So, a simple-to-use, low-cost, and highly sensitive method for the detection of miRNAs is desirable. In this study, we demonstrate a novel bi-directional extension (BDE) assay. In the first step, a specific linear RT primer is hybridized to 6-10 base pairs from the 3’-end of a target miRNA molecule and then reverse transcribed to generate a cDNA strand. After reverse transcription, the BDE sequence was hybridized to the 3’-end of the cDNA, which then served as the PCR template. The template was amplified in a SYBR green-based quantitative real-time PCR. To prove the concept, we used human brain total RNA. It could be detected quantitatively over a range of seven orders of magnitude with excellent linearity and reproducibility. To evaluate the performance of the BDE assay, we compared its sensitivity and specificity against a commercially available poly (A) tailing method, using let-7e miRNA extracted from A549 human epithelial lung cancer cells. The BDE assay displayed good performance compared with the poly (A) tailing method in terms of specificity and sensitivity: the CT values differed by 2.5, and the melting curve was sharper than that of the poly (A) tailing method. We have demonstrated an innovative, cost-effective BDE assay that allows improved sensitivity and specificity in the detection of miRNAs. The dynamic range of the SYBR green-based RT-qPCR for miR-145 could be represented quantitatively over 7 orders of magnitude, from 0.1 pg to 1.0 μg of human brain total RNA. Finally, the BDE assay for the detection of miRNA species such as let-7e shows good performance compared with a poly (A) tailing method in terms of specificity and sensitivity. Thus, BDE proves to be a simple, low-cost, and highly sensitive assay for various miRNAs and should make a significant contribution to research on miRNA biology and to the application of miRNAs as diagnostic targets.
Keywords: bi-directional extension (BDE), microRNA (miRNA), poly (A) tailing assay, reverse transcription, RT-qPCR
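A dynamic range spanning seven orders of magnitude with excellent linearity corresponds, in qPCR practice, to a standard curve in which CT is linear in the log10 of template input, with the amplification efficiency recoverable from the slope. A minimal sketch of that standard-curve arithmetic (the CT values below are invented placeholders, not data from this study):

```python
import numpy as np

# Hypothetical standard curve: 10-fold serial dilutions of total RNA
# spanning 7 orders of magnitude (0.1 pg to 1.0 ug), with invented CT values.
input_ng = np.array([1e-4, 1e-3, 1e-2, 1e-1, 1e0, 1e1, 1e2, 1e3])  # nanograms
ct = np.array([35.1, 31.8, 28.4, 25.1, 21.7, 18.3, 15.0, 11.6])

slope, intercept = np.polyfit(np.log10(input_ng), ct, 1)
efficiency = 10 ** (-1.0 / slope) - 1          # 1.0 corresponds to 100%
r = np.corrcoef(np.log10(input_ng), ct)[0, 1]  # linearity of the curve

print(f"slope={slope:.2f}, efficiency={efficiency:.1%}, R^2={r**2:.4f}")
```

A slope near -3.32 (about 100% efficiency) and R^2 close to 1 are the usual markers of the "excellent linearity" claimed for such an assay.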
Procedia PDF Downloads 166
516 Experimental Investigation of Hydrogen Addition in the Intake Air of Compressed Engines Running on Biodiesel Blend
Authors: Hendrick Maxil Zárate Rocha, Ricardo da Silva Pereira, Manoel Fernandes Martins Nogueira, Carlos R. Pereira Belchior, Maria Emilia de Lima Tostes
Abstract:
This study investigates experimentally the effects of hydrogen addition in the intake manifold of a diesel generator operating with a 7% biodiesel-diesel oil blend (B7). An experimental apparatus was set up to conduct performance and emissions tests in a single-cylinder, air-cooled diesel engine. This setup consisted of a generator set connected to a wire-wound resistor load bank that was used to vary engine load. In addition, a flowmeter was used to determine the hydrogen volumetric flowrate, and a digital anemometer coupled with an air box was used to measure the air flowrate. Furthermore, a digital precision electronic scale was used to measure engine fuel consumption, and a gas analyzer was used to determine exhaust gas composition and exhaust gas temperature. A thermocouple was installed near the exhaust collector to measure cylinder temperature. In-cylinder pressure was measured using an AVL Indumicro data acquisition system with a piezoelectric pressure sensor. An AVL optical encoder was installed on the crankshaft and synchronized with the in-cylinder pressure in real time. The experimental procedure consisted of injecting hydrogen into the engine intake manifold at different mass concentrations of 2, 6, 8 and 10% of the total fuel mass (B7 + hydrogen), which represented energy fractions of 5, 15, 20 and 24% of the total fuel energy, respectively. Due to the hydrogen addition, the total amount of fuel energy introduced increased, and the generator's fuel injection governor prevented any increase in engine speed. Several conclusions can be stated from the test results. A reduction in specific fuel consumption with increasing hydrogen concentration was noted. Likewise, carbon dioxide (CO2), carbon monoxide (CO) and unburned hydrocarbon (HC) emissions decreased as the hydrogen concentration increased. On the other hand, nitrogen oxide (NOx) emissions increased because the average temperatures inside the cylinder were higher. There was also an increase in peak cylinder pressure and in the heat release rate inside the cylinder, since the ignition delay decreased as the hydrogen content increased. All this indicates that hydrogen promotes faster combustion and higher heat release rates and can be an important additive to all kinds of fuels used in diesel generators.
Keywords: diesel engine, hydrogen, dual fuel, combustion analysis, performance, emissions
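The quoted energy fractions follow directly from the mass fractions once lower heating values (LHVs) are fixed. A minimal check, assuming typical LHVs of roughly 120 MJ/kg for hydrogen and 42.5 MJ/kg for the B7 blend (these values are assumptions; the abstract does not state them):

```python
LHV_H2, LHV_B7 = 120.0, 42.5  # MJ/kg, assumed typical lower heating values

def energy_fraction(h2_mass_frac: float) -> float:
    """Hydrogen share of the total fuel energy for a given mass fraction."""
    e_h2 = h2_mass_frac * LHV_H2
    e_b7 = (1.0 - h2_mass_frac) * LHV_B7
    return e_h2 / (e_h2 + e_b7)

for m in (0.02, 0.06, 0.08, 0.10):
    print(f"{m:.0%} H2 by mass -> {energy_fraction(m):.0%} of fuel energy")
# Reproduces roughly the 5, 15, 20 and 24% fractions reported above.
```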
Procedia PDF Downloads 350
515 Evaluation of Sequential Polymer Flooding in Multi-Layered Heterogeneous Reservoir
Authors: Panupong Lohrattanarungrot, Falan Srisuriyachai
Abstract:
Polymer flooding is a well-known technique used for controlling the mobility ratio in heterogeneous reservoirs, leading to improvement of sweep efficiency as well as of the wellbore profile. However, the low injectivity of viscous polymer solutions attenuates the oil recovery rate and consequently adds extra operating cost. This study attempts to improve the injectivity of the polymer solution while maintaining the recovery factor, enhancing the effectiveness of the polymer flooding method. The study uses a reservoir simulation program to modify the conventional single polymer slug into sequential polymer flooding, emphasizing increased injectivity and a reduced polymer amount. Selection of the operating conditions for the single polymer slug, including pre-injected water, polymer concentration and polymer slug size, is first performed for a layered heterogeneous reservoir with a Lorenz coefficient (Lk) of 0.32. The selected single-slug polymer flooding scheme is then modified into sequential polymer flooding with reduced polymer concentration in two different modes: constant polymer mass and reduced polymer mass. The effects of the Residual Resistance Factor (RRF) are also evaluated. From the simulation results, it is observed that the first polymer slug, with the highest concentration, mainly functions as a buffer between the displacing phase and the reservoir oil. Moreover, part of the polymer from this slug is also sacrificed to adsorption. Reduction of the polymer concentration in the following slug prevents bypassing due to an unfavorable mobility ratio. At the same time, the following slugs, with lower viscosity, can be injected easily through the formation, improving the injectivity of the whole process. Sequential polymer flooding with reduced polymer mass shows great benefit, reducing total production time and the amount of polymer consumed by up to 10% without any downside. The only advantage of using constant polymer mass is a slight increment in recovery factor (up to 1.4%), while total production time is almost the same. Increasing the residual resistance factor of the polymer solution benefits mobility control by reducing the effective permeability to water. Nevertheless, higher adsorption results in low injectivity, extending total production time. Modifying the single polymer slug into a sequence of reduced polymer concentrations yields major benefits in reducing production time as well as polymer mass. With a certain design of the polymer flooding scheme, the recovery factor can even be further increased. This study shows that sequential polymer flooding can certainly be applied to reservoirs with high heterogeneity, since real implementation requires nothing more complex than a proper design of polymer slug size and concentration.
Keywords: polymer flooding, sequential, heterogeneous reservoir, residual resistance factor
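The mobility-control reasoning above can be made concrete with the standard end-point mobility ratio M = (krw/μw)/(kro/μo): polymer improves M by raising the water-phase viscosity and, through the residual resistance factor, lowering the effective water-phase permeability. A schematic sketch with assumed fluid and rock properties (not values from the simulated reservoir):

```python
def mobility_ratio(krw, muw, kro, muo, rrf=1.0):
    """End-point mobility ratio; the RRF divides the effective water-phase
    permeability, mimicking polymer retention in the rock."""
    return (krw / rrf / muw) / (kro / muo)

# Assumed end-point relative permeabilities and viscosities (cP)
krw, kro, muo = 0.3, 0.8, 10.0
print(mobility_ratio(krw, 0.5, kro, muo))            # plain water drive: ~7.5
print(mobility_ratio(krw, 15.0, kro, muo))           # viscous polymer slug: ~0.25
print(mobility_ratio(krw, 5.0, kro, muo, rrf=2.0))   # weaker slug plus RRF: ~0.38
```

Values of M at or below 1 indicate a favorable displacement; the sketch shows why even a reduced-concentration trailing slug can keep M favorable while being easier to inject.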
Procedia PDF Downloads 476
514 Development of a Multi-Variate Model for Matching Plant Nitrogen Requirements with Supply for Reducing Losses in Dairy Systems
Authors: Iris Vogeler, Rogerio Cichota, Armin Werner
Abstract:
Dairy farms are under pressure to increase productivity while reducing environmental impacts. Effective fertiliser management practices are critical to achieving this. Determination of the optimum nitrogen (N) fertilisation rates which maximise pasture growth and minimise N losses is challenging due to variability in plant requirements and in the likely near-future supply of N by the soil. Remote sensing can be used for mapping the N nutrition status of plants and for rapidly assessing the spatial variability within a field. An algorithm is, however, lacking which relates the N status of the plants to the expected yield response to additions of N. The aims of this simulation study were (i) the development of a multi-variate model for determining the N fertilisation rate for a target percentage of the maximum achievable yield based on the pasture N concentration, (ii) the use of this algorithm for guiding fertilisation rates, and (iii) the evaluation of the model regarding pasture yield and N losses, including N leaching, denitrification and volatilisation. A simulation study was carried out using the Agricultural Production Systems Simulator (APSIM). The simulations were done for an irrigated ryegrass pasture in the Canterbury region of New Zealand. A multi-variate model was developed and used to determine the monthly required N fertilisation rates based on the pasture N content prior to fertilisation and on targets of 50, 75, 90 and 100% of the potential monthly yield. These monthly optimised fertilisation rules were evaluated by running APSIM for a ten-year period to provide yield and N loss estimates from both non-urine and urine-affected areas. A comparison with typical fertilisation rates of 150 and 400 kg N/ha/year was also made. Assessment of pasture yield and leaching from fertiliser and urine patches indicated a large reduction in N losses when N fertilisation rates were controlled by the multi-variate model. However, the reduction in leaching losses was much smaller when taking into account the effects of urine patches. The proposed approach, based on biophysical modelling, to develop a multi-variate model for determining optimum N fertilisation rates dependent on pasture N content is very promising. Further analysis under different environmental conditions, and validation, is required before the approach can be used to help adjust fertiliser management practices to temporal and spatial N demand based on the nitrogen status of the pasture.
Keywords: APSIM modelling, optimum N fertilization rate, pasture N content, ryegrass pasture, three-dimensional surface response function
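One plausible shape for such a rule (the keywords mention a three-dimensional surface response function) is a response surface relating relative yield to the fertilisation rate and the pre-fertilisation pasture N content, inverted numerically to return the smallest rate that meets a target percentage of maximum yield. The functional form and coefficients below are invented for illustration and are not the fitted APSIM surface:

```python
import numpy as np

def rel_yield(n_rate, plant_n):
    """Illustrative response surface: relative yield (0-1) as a function of
    fertiliser N rate (kg N/ha) and pasture N content (%). The saturating
    shape and all coefficients are assumptions, not fitted APSIM output."""
    demand = np.maximum(5.0 - plant_n, 0.1) * 20.0   # notional remaining N demand
    return 1.0 - np.exp(-3.0 * n_rate / demand)

def n_rate_for_target(plant_n, target=0.9):
    """Smallest monthly N rate whose predicted relative yield meets the target."""
    grid = np.arange(0.0, 201.0, 1.0)
    ok = np.nonzero(rel_yield(grid, plant_n) >= target)[0]
    return grid[ok[0]] if ok.size else None

for pn in (2.0, 3.5, 4.5):   # pasture N content before fertilisation (%)
    print(f"plant N {pn}% -> {n_rate_for_target(pn):.0f} kg N/ha for 90% yield")
```

The point of the sketch is only the inversion logic: higher pasture N content implies a smaller rate to reach the same yield target, which is what lets the rule cut fertiliser where the sward is already well supplied.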
Procedia PDF Downloads 130
513 The Effects of Computer Game-Based Pedagogy on Graduate Students Statistics Performance
Authors: Clement Yeboah, Eva Laryea
Abstract:
A pretest-posttest within-subjects experimental design was employed to examine the effects of a computerized basic statistics learning game on the achievement and statistics-related anxiety of students enrolled in an introductory graduate statistics course. Participants (N = 34) were graduate students in a variety of programs at a state-funded research university in the southeastern United States. We analyzed pretest-posttest differences using paired-samples t-tests for achievement and for statistics anxiety. The results of the t-test for knowledge in statistics were statistically significant, indicating significant mean gains in statistical knowledge as a function of the game-based intervention. Likewise, the results of the t-test for statistics-related anxiety were also statistically significant, indicating a decrease in anxiety from pretest to posttest. The implications of the present study are significant for both teachers and students. For teachers, using computer games developed by the researchers can help to create a more dynamic and engaging classroom environment, as well as improve student learning outcomes. For students, playing these educational games can help to develop important skills such as problem-solving, critical thinking, and collaboration. Students can develop an interest in the subject matter and spend quality time learning the material as they play, without even realizing that they are studying a supposedly hard course. The future directions of the present study are promising as technology continues to advance and become more widely available. Some potential future developments include the integration of virtual and augmented reality into educational games, the use of machine learning and artificial intelligence to create personalized learning experiences, and the development of new and innovative game-based assessment tools. It is also important to consider the ethical implications of computer game-based pedagogy, such as the potential for games to perpetuate harmful stereotypes and biases. As the field continues to evolve, it will be crucial to address these issues and work towards creating inclusive and equitable learning experiences for all students. This study has the potential to revolutionize the way graduate students learn basic statistics and offers exciting opportunities for future development and research. It is an important area of inquiry for educators, researchers, and policymakers and will continue to be a dynamic and rapidly evolving field for years to come.
Keywords: pretest-posttest within subjects, computer game-based learning, statistics achievement, statistics anxiety
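The analysis described above reduces to paired-samples t-tests on pretest-posttest differences; a minimal sketch with scipy, using invented scores in place of the study's data:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n = 34                                              # sample size from the abstract
pre_knowledge = rng.normal(60, 10, n)               # invented pretest scores
post_knowledge = pre_knowledge + rng.normal(8, 5, n)  # gain after the game
pre_anxiety = rng.normal(70, 12, n)
post_anxiety = pre_anxiety - rng.normal(6, 4, n)    # anxiety drops at posttest

t_k, p_k = stats.ttest_rel(post_knowledge, pre_knowledge)
t_a, p_a = stats.ttest_rel(post_anxiety, pre_anxiety)
print(f"knowledge: t({n-1}) = {t_k:.2f}, p = {p_k:.4f}")
print(f"anxiety:   t({n-1}) = {t_a:.2f}, p = {p_a:.4f}")
```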
Procedia PDF Downloads 77
512 Informed Urban Design: Minimizing Urban Heat Island Intensity via Stochastic Optimization
Authors: Luis Guilherme Resende Santos, Ido Nevat, Leslie Norford
Abstract:
The Urban Heat Island (UHI) is characterized by increased air temperatures in urban areas compared to the undeveloped rural surrounding environment. With urbanization and densification, the intensity of the UHI increases, bringing negative impacts on livability, health and the economy. In order to reduce those effects, design factors must be taken into consideration when planning future developments. Given design constraints such as population size and the availability of area for development, non-trivial decisions regarding the buildings’ dimensions and their spatial distribution are required. We develop a framework for the optimization of urban design in order to jointly minimize UHI intensity and the buildings’ energy consumption. First, the design constraints are defined according to spatial and population limits in order to establish realistic boundaries that would be applicable to real-life decisions. Second, the tools Urban Weather Generator (UWG) and EnergyPlus are used to generate outputs of UHI intensity and total buildings’ energy consumption, respectively. Those outputs change based on a set of variable inputs related to urban morphology aspects, such as building height, urban canyon width and population density. Lastly, an optimization problem is cast where the utility function quantifies the performance of each design candidate (e.g. minimizing a linear combination of UHI intensity and energy consumption), and a set of constraints to be met is imposed. Solving this optimization problem is difficult, since there is no simple analytic form which represents the UWG and EnergyPlus models. We therefore cannot use any direct optimization techniques, but instead develop an indirect 'black box' optimization algorithm. To this end, we develop a solution based on a stochastic optimization method known as the Cross Entropy method (CEM). The CEM translates the deterministic optimization problem into an associated stochastic optimization problem which is simple to solve analytically. We illustrate our model on a typical residential area in Singapore. Due to fast growth in population and built-up area, and the land availability generated by land reclamation, urban planning decisions are of the utmost importance for the country. Furthermore, the hot and humid climate of the country raises concern about the impact of the UHI. The problem presented is highly relevant to early urban design stages, and the objective of such a framework is to guide decision makers and assist them in including and evaluating urban microclimate and energy aspects in the process of urban planning.
Keywords: building energy consumption, stochastic optimization, urban design, urban heat island, urban weather generator
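A minimal sketch of the Cross Entropy method in this 'black box' setting, with a toy quadratic utility standing in for the UWG/EnergyPlus pipeline (a real run would evaluate each candidate design through those simulators instead):

```python
import numpy as np

def utility(x):
    """Toy stand-in for the simulator pipeline: a loss to minimize. In the
    real framework this would run UWG and EnergyPlus on a candidate design
    x = (building height, canyon width, density) and combine the outputs."""
    return np.sum((x - np.array([24.0, 18.0, 0.4])) ** 2, axis=1)

def cem(n_iter=50, pop=200, elite_frac=0.1):
    """Cross Entropy method: sample candidates, keep the elite fraction,
    refit the sampling distribution to the elites, repeat."""
    mu = np.array([50.0, 30.0, 0.5])       # initial guess within constraints
    sigma = np.array([20.0, 10.0, 0.2])
    n_elite = int(pop * elite_frac)
    for _ in range(n_iter):
        x = np.random.normal(mu, sigma, size=(pop, 3))
        elite = x[np.argsort(utility(x))[:n_elite]]   # best candidates
        mu, sigma = elite.mean(axis=0), elite.std(axis=0) + 1e-6
    return mu

print(cem())   # converges near the toy optimum (24, 18, 0.4)
```

The appeal for this application is exactly what the abstract states: the update step needs only objective values, never gradients or an analytic form of the simulators.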
Procedia PDF Downloads 131
511 Leptospira Lipl32-Specific Antibodies: Therapeutic Property, Epitopes Characterization and Molecular Mechanisms of Neutralization
Authors: Santi Maneewatchararangsri, Wanpen Chaicumpa, Patcharin Saengjaruk, Urai Chaisri
Abstract:
Leptospirosis is a globally neglected disease that continues to be a significant public health and veterinary burden, with millions of cases reported each year. Early and accurate differential diagnosis of leptospirosis from other febrile illnesses, and the development of broad-spectrum leptospirosis vaccines, are needed. The LipL32 outer membrane lipoprotein is a member of the Leptospira adhesive matrices and has been found to exert hemolytic activity against erythrocytes in vitro. Therefore, LipL32 is regarded as a potential target for diagnosis, for broad-spectrum leptospirosis vaccines, and for passive immunotherapy. In this study, we established LipL32-specific mouse monoclonal antibodies, mAbLPF1 and mAbLPF2, and their respective mouse- and humanized-engineered single-chain variable fragments (ScFv). The antibodies’ neutralizing activities against Leptospira-mediated hemolysis in vitro, and the therapeutic efficacy of the mAbs in hamsters infected with heterologous Leptospira, were demonstrated. The epitope peptide of mAbLPF1 was mapped to a non-contiguous carboxy-terminal β-turn and amphipathic α-helix of the LipL32 structure, which contribute to phospholipid/host cell adhesion and membrane insertion. We found that the mAbLPF2 epitope was located on the interacting loop of the peptide-binding groove of the LipL32 molecule responsible for interactions with host constituents. The epitope sequences are highly conserved among Leptospira spp. and are absent from the LipL32 superfamily of other microorganisms. Both epitopes are surface-exposed, readily accessible by mAbs, and immunogenic; however, they are less dominant when revealed by LipL32-specific immunoglobulins from leptospirosis-patient sera and from rabbit hyperimmune serum raised against whole Leptospira. Our study also demonstrated mAb-mediated inhibition of the adhesion of the LipL32 protein to host membrane components and cells, as well as an anti-hemolytic activity of the respective antibodies. The therapeutic antibodies, particularly the humanized ScFv, have potential for further development as a non-drug therapeutic agent for human leptospirosis, especially in subjects allergic to antibiotics. The epitope peptides recognized by the two therapeutic mAbs have potential use as tools for structure-function studies. Finally, the protective peptides may be used as targets for epitope-based vaccines for the control of leptospirosis.
Keywords: leptospira lipl32-specific antibodies, therapeutic epitopes, epitopes characterization, immunotherapy
Procedia PDF Downloads 297
510 Consumption and Diffusion Based Model of Tissue Organoid Development
Authors: Elena Petersen, Inna Kornienko, Svetlana Guryeva, Sergey Simakov
Abstract:
In vitro organoid cultivation requires the simultaneous provision of the necessary vascularization and nutrient perfusion of cells during organoid development. However, many aspects of this problem are still unsolved. The functionality of vascular network intergrowth is limited during the early stages of organoid development, since the vascular network only becomes functional in the final stages of in vitro organoid cultivation. Therefore, a microchannel network should be created in the hydrogel matrix during the early stages of organoid cultivation, aimed at conducting and maintaining the minimally required level of nutrient perfusion for all cells in the expanding organoid. The network configuration should be designed properly in order to exclude hypoxic and necrotic zones in the expanding organoid at all stages of its cultivation. In vitro vascularization is currently the main issue within the field of tissue engineering. As perfusion and oxygen transport have direct effects on cell viability and differentiation, researchers are currently limited to tissues of a few millimeters in thickness. These limitations are imposed by mass transfer and are defined by the balance between the metabolic demand of the cellular components in the system and the size of the scaffold. Current approaches include growth factor delivery, channeled scaffolds, perfusion bioreactors, microfluidics, cell co-cultures, cell functionalization, modular assembly, and in vivo systems. These approaches may improve cell viability or generate capillary-like structures within a tissue construct. Thus, there is a fundamental disconnect between defining the metabolic needs of tissue through quantitative measurements of oxygen and nutrient diffusion and the potential ease of integration into host vasculature for future in vivo implantation. A model is proposed for the prognosis of perfusion in the growing organoid, based on joint simulations of general nutrient diffusion, nutrient diffusion into the hydrogel matrix through the contact surfaces and microchannel walls, and nutrient consumption by the cells of the expanding organoid, including biomatrix contraction during tissue development, which is associated with the changing consumption rate of the growing organoid's cells. The model allows computing an effective microchannel network design that gives the minimally required level of nutrient concentration in all parts of the growing organoid. It can be used for preliminary planning of the microchannel network design and for simulations of the nutrient supply rate depending on the stage of organoid development.
Keywords: 3D model, consumption model, diffusion, spheroid, tissue organoid
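The balance the model captures – diffusion supplying nutrients against consumption by cells – can be illustrated in one dimension at steady state, D c'' = q, with a fixed concentration at the microchannel wall and zero flux at the organoid core. All parameter values below are illustrative assumptions, not values from the study:

```python
import numpy as np

# Illustrative parameters (assumed, not from the study)
D = 2.0e-9        # nutrient diffusivity in hydrogel, m^2/s
q = 1.0e-2        # zero-order consumption rate, mol/(m^3 s)
c_wall = 5.0      # nutrient concentration at the microchannel wall, mol/m^3
L = 400e-6        # distance from channel wall to organoid core, m
x = np.linspace(0.0, L, 200)

# Closed-form steady state of D c'' = q with c(0) = c_wall and c'(L) = 0
c = c_wall - (q / D) * (L * x - x**2 / 2.0)

print(f"core concentration: {c[-1]:.3f} mol/m^3")
if c.min() <= 0.0:
    depth = x[np.argmax(c <= 0.0)]
    print(f"necrotic zone beyond ~{depth*1e6:.0f} um from the channel")
```

The point of such a sketch is the design criterion the abstract states: channel spacing must keep the minimum concentration above zero (and above the viability threshold) everywhere, at every stage of organoid growth.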
Procedia PDF Downloads 308
509 Zn-, Mg- and Ni-Al-NO₃ Layered Double Hydroxides Intercalated by Nitrate Anions for Treatment of Textile Wastewater
Authors: Fatima Zahra Mahjoubi, Abderrahim Khalidi, Mohamed Abdennouri, Omar Cherkaoui, Noureddine Barka
Abstract:
Industrial effluents are one of the major causes of environmental pollution, especially effluents discharged from various dyestuff manufacturers and from the plastic and papermaking industries. These effluents can give rise to certain hazards and environmental problems because of their highly colored suspended organic solids. Dye effluents are not only aesthetic pollutants; coloration of water by the dyes may also affect photochemical activities in aquatic systems by reducing light penetration. It has also been reported that several commonly used dyes are carcinogenic and mutagenic for aquatic organisms. Therefore, removing dyes from effluents is of significant importance. Many adsorbent materials have been prepared for the removal of dyes from wastewater, including anionic clays, or layered double hydroxides. The zinc/aluminium (Zn-AlNO₃), magnesium/aluminium (Mg-AlNO₃) and nickel/aluminium (Ni-AlNO₃) layered double hydroxides (LDHs) were successfully synthesized via the coprecipitation method. Samples were characterized by XRD, FTIR, TGA/DTA, TEM and pHPZC analysis. XRD patterns showed a basal spacing increase in the order Zn-AlNO₃ (8.85 Å) > Mg-AlNO₃ (7.95 Å) > Ni-AlNO₃ (7.82 Å). The FTIR spectra confirmed the presence of nitrate anions in the LDH interlayer. The TEM images indicated that Zn-AlNO₃ presents circular-shaped particles with an average particle size of approximately 30 to 40 nm. Small plates assigned to sheets with hexagonal form were observed in the case of Mg-AlNO₃. Ni-AlNO₃ displays nanostructured spheres with diameters between 5 and 10 nm. The LDHs were used as adsorbents for the removal of methyl orange (MO), as a model dye, and for the treatment of an effluent generated by a textile factory. Adsorption experiments for MO were carried out as a function of solution pH, contact time and initial dye concentration. Maximum adsorption occurred at acidic solution pH. Kinetic data were tested using pseudo-first-order and pseudo-second-order kinetic models. The best fit was obtained with the pseudo-second-order kinetic model. Equilibrium data were correlated to the Langmuir and Freundlich isotherm models. The best conditions for color and COD removal from the textile effluent sample were obtained at lower values of pH. Total color removal was obtained with the Mg-AlNO₃ and Ni-AlNO₃ LDHs. Reduction of COD to the limits authorized by Moroccan standards was obtained with a 0.5 g/L LDH dose.
Keywords: chemical oxygen demand, color removal, layered double hydroxides, textile wastewater treatment
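A minimal sketch of fitting the pseudo-second-order kinetic model, qt = k·qe²·t / (1 + k·qe·t), to uptake data with scipy; the time series below is invented for illustration, not the study's measurements:

```python
import numpy as np
from scipy.optimize import curve_fit

def pso(t, qe, k):
    """Pseudo-second-order kinetics: qt = k*qe^2*t / (1 + k*qe*t)."""
    return k * qe**2 * t / (1.0 + k * qe * t)

# Invented MO uptake data: contact time (min) vs adsorbed amount qt (mg/g)
t = np.array([5, 10, 20, 30, 60, 90, 120], dtype=float)
qt = np.array([21.0, 32.0, 43.0, 48.0, 55.0, 57.0, 58.0])

(qe, k), _ = curve_fit(pso, t, qt, p0=(qt.max(), 0.01))
print(f"qe = {qe:.1f} mg/g, k = {k:.4f} g/(mg min)")
```

The fitted equilibrium capacity qe and rate constant k are the quantities usually compared across adsorbents when, as here, the pseudo-second-order model gives the best fit.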
Procedia PDF Downloads 354
508 Biosensor for Determination of Immunoglobulin A, E, G and M
Authors: Umut Kokbas, Mustafa Nisari
Abstract:
Immunoglobulins, also known as antibodies, are glycoprotein molecules produced by activated B cells that differentiate into plasma cells. Antibodies are critical molecules of the immune response, helping the immune system specifically recognize and destroy antigens such as bacteria, viruses, and toxins. Immunoglobulin classes differ in their biological properties, structures, targets, functions, and distributions. Five major classes of antibodies have been identified in mammals: IgA, IgD, IgE, IgG, and IgM. Evaluation of the immunoglobulin isotype can provide useful insight into the complex humoral immune response. Evaluation and knowledge of immunoglobulin structure and classes are also important for the selection and preparation of antibodies for immunoassays and other detection applications. The immunoglobulin test measures the level of certain immunoglobulins in the blood. IgA, IgG, and IgM are usually measured together; in this way, they can provide doctors with important information, especially regarding immune deficiency diseases. Hypogammaglobulinemia (HGG) is one of the main groups of primary immunodeficiency disorders. HGG is caused by various defects in the B cell lineage or function that result in low levels of immunoglobulins in the bloodstream. This affects the body's immune response, causing a wide range of clinical features, from asymptomatic disease to severe and recurrent infections, chronic inflammation and autoimmunity. Transient hypogammaglobulinemia of infancy (THGI), IgM deficiency (IgMD), Bruton agammaglobulinemia, and IgA deficiency (SIgAD) are a few examples of HGG. Most patients can continue their normal lives by taking prophylactic antibiotics, but patients with severe infections require intravenous immune serum globulin (IVIG) therapy. The IgE level may rise to fight off parasitic infections, and may also be a sign that the body is overreacting to allergens. Also, since the immune response can vary with different antigens, measuring specific antibody levels aids in the interpretation of the immune response after immunization or vaccination. Immune deficiencies usually occur in childhood. In Immunology and Allergy clinics, apart from the classical methods, a method that is fast and reliable, and that allows convenient and uncomplicated sampling from children, would be particularly useful for the diagnosis and follow-up of diseases, especially childhood hypogammaglobulinemia. The antibodies were attached to the electrode surface via a poly(hydroxyethyl methacrylamide)-cysteine nanopolymer, and the anodic peak currents obtained in the electrochemical study were used for evaluation. According to the data obtained, immunoglobulin determination can be made with a biosensor. However, in further studies, it will be useful to develop a medical diagnostic kit through biomedical engineering and to increase its sensitivity.
Keywords: biosensor, immunosensor, immunoglobulin, infection
Procedia PDF Downloads 104
507 Metformin Protects Cardiac Muscle against the Pro-Apoptotic Effects of Hyperglycaemia, Elevated Fatty Acid and Nicotine
Authors: Christopher R. Triggle, Hong Ding, Khaled Machaca, Gnanapragasam Arunachalam
Abstract:
The antidiabetic drug, metformin, has been in clinical use for over 50 years and remains the first-choice drug for the treatment of type 2 diabetes. In addition to its effectiveness as an oral anti-hyperglycaemic drug, metformin also possesses vasculoprotective effects that are assumed to be secondary to its ability to reduce insulin resistance and control glycated hemoglobin levels; however, recent data from our laboratory indicate that metformin also has direct vasoprotective effects that are mediated, at least in part, via the anti-ageing gene, SIRT1. Diabetes is a major risk factor for the development of cardiovascular disease (CVD), and it is also well established that tobacco use further enhances the risk of CVD; however, it is not known whether treatment with metformin can offset the negative effects of diabetes and tobacco use on cardiac function. The current study was therefore designed to investigate (1) the effects of hyperglycaemia (HG), either alone or in the presence of elevated fatty acids (palmitate) and nicotine, on the protein expression levels of the deacetylase sirtuin 1 (the protein product of SIRT1), anti-apoptotic Bcl-2, pro-apoptotic BIM and the pro-apoptotic tumour suppressor protein, acetylated p53, in cardiomyocytes; and (2) the ability of metformin to prevent the detrimental effects of HG, palmitate and nicotine on cardiomyocyte survival. Cell culture protocols were designed using a rat cardiomyocyte cell line, H9c2, either under normal glycaemic (NG) conditions of 5.5 mM glucose or under hyperglycaemic (HG) conditions of 25 mM glucose, with or without added palmitate (250 μM) or nicotine (1.0 mM), for 24 h. Immunoblotting was used to detect the expression of sirtuin 1, Bcl-2, BIM, acetylated (Ac-)p53 and p53, with β-actin used as the reference protein. Exposure to HG, palmitate, or nicotine alone significantly reduced the expression of sirtuin 1 and Bcl-2 and raised the expression levels of acetylated p53 and BIM; however, the combination of HG, palmitate and nicotine had a synergistic effect, significantly suppressing the expression levels of sirtuin 1 and Bcl-2 while further enhancing the expression of Ac-p53 and BIM. The inclusion of 1000 μM, but not 50 μM, metformin in the H9c2 cell culture protocol prevented the effects of HG, palmitate and nicotine on the pro-apoptotic pathways. Collectively, these data indicate that metformin, in addition to its anti-hyperglycaemic and vasculoprotective properties, also has direct cardioprotective actions that offset the negative effects of hyperglycaemia, elevated free fatty acids and nicotine on cardiac cell survival. These data are of particular significance for the treatment of patients with diabetes who are also smokers, as the inclusion of metformin in their therapeutic treatment plan should help reduce cardiac-related morbidity and mortality.
Keywords: apoptosis, cardiac muscle, diabetes, metformin, nicotine
Procedia PDF Downloads 317
506 The Impact of CSR Satisfaction on Employee Commitment
Authors: Silke Bustamante, Andrea Pelzeter, Andreas Deckmann, Rudi Ehlscheidt, Franziska Freudenberger
Abstract:
Many companies increasingly seek to enhance their attractiveness as an employer in order to retain their employees. At the same time, corporate responsibility for social and ecological issues seems to be becoming a more important part of an attractive employer brand. It enables the company to match the values and expectations of its members, to signal fairness towards them and to increase its brand potential for positive psychological identification on the employees’ side. In the last decade, several empirical studies have focused on this relationship, confirming a positive effect of employees’ CSR perception on their affective organizational commitment. The current paper aims to take a slightly different view by analyzing the impact of another factor on commitment: the employee's weighted satisfaction with the employer's CSR. For that purpose, it is assumed that commitment levels are rather the result of the fulfillment or disappointment of expectations. Hence, instead of merely asking how CSR perception affects commitment, a more complex independent variable is taken into account: a weighted satisfaction construct that summarizes two different factors. The individual level of commitment contingent on CSR is therefore conceptualized as a function of two psychological processes: (1) the individual significance that an employee ascribes to specific employer attributes and (2) the individual satisfaction based on the fulfillment of expectations that rely on preceding perceptions of employer attributes. The results presented are based on a quantitative survey undertaken among employees of the German service sector. Conceptually, a five-dimensional CSR construct (ecology, employees, marketplace, society and corporate governance) and a two-dimensional non-CSR construct (company and workplace) were applied to differentiate employer characteristics. (1) Respondents were asked to indicate the importance of different facets of CSR-related and non-CSR-related employer attributes. By means of a conjoint analysis, the relative importance of each employer attribute was calculated from the data. (2) In addition to this, participants stated their level of satisfaction with specific employer attributes. Both indications were merged into individually weighted satisfaction indexes across the seven dimensions of employer characteristics. The affective organizational commitment of employees (the dependent variable) was gathered by applying the established 15-item Organizational Commitment Questionnaire (OCQ). The findings related to the relationship between satisfaction and commitment will be presented. Furthermore, the question will be addressed of how important satisfaction with CSR is, relative to satisfaction with other attributes of the company, in the creation of commitment. Practical as well as scientific implications will be discussed, especially with reference to previous results that focused on CSR perception as a commitment driver.
Keywords: corporate social responsibility, organizational commitment, employee attitudes/satisfaction, employee expectations, employer brand
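One plausible operationalization of that weighted construct: multiply each dimension's conjoint-derived relative importance by the respondent's satisfaction rating for that dimension. The dimension labels follow the abstract; the numbers and the aggregation rule are invented for illustration, not the paper's exact scoring scheme:

```python
# Seven employer-attribute dimensions: five CSR, two non-CSR
dimensions = ["ecology", "employees", "marketplace", "society",
              "governance", "company", "workplace"]

# Invented example respondent: conjoint-derived relative importances
# (summing to 1) and satisfaction ratings on a 1-5 scale.
importance   = [0.10, 0.20, 0.05, 0.10, 0.05, 0.25, 0.25]
satisfaction = [3, 4, 3, 2, 3, 4, 5]

weighted_index = {d: w * s for d, w, s in
                  zip(dimensions, importance, satisfaction)}
csr_index = sum(v for d, v in weighted_index.items()
                if d not in ("company", "workplace"))
print(weighted_index)
print(f"CSR-weighted satisfaction: {csr_index:.2f}")
```

The design choice this illustrates is the paper's core idea: satisfaction with a dimension only drives commitment to the extent that the employee considers that dimension important.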
Procedia PDF Downloads 267
505 Prospects and Challenges of Sports Culture in India: A Case Study of Gujarat
Authors: Jay Raval
Abstract:
Sports and physical fitness have been a vital component of our civilization. Sport is a power that motivates and inspires individuals, communities and even countries to be aware of physical and mental health. Although sports play a vital role in the overall development of a nation, in developing countries such as India this culture of sports is yet to be cultivated. In India, the lack of a sporting culture has held back the growth of a corresponding industry in the past, despite the growing awareness of and interest in various sports besides cricket. Hence, due to the lack of a sporting culture, corporate investments in India’s sports have traditionally been limited to non-profit corporate social responsibility activities and initiatives. Over the past couple of years, India has come up with new initiatives such as the Indian Premier League (cricket), the Hockey India League, the Indian Badminton League, the Pro Kabaddi League, and the Indian Super League (football), which help boost Indian sports culture and thereby the country's economy. Among the 29 states of India, Gujarat is showing a very rapid increase in sports participation. Khel Mahakumbh, the competition conducted for the last six years, has been a giant step in this direction and covers rural and urban areas of the state of Gujarat. The objective of the research is to address the overall development of the sports system. The sports system includes infrastructure, coaches, resources, and participants; the currently existing system is not disabled-friendly. This research paper highlights adequate steps to improve the system and sort out its pressing issues. The education system is highly academic-centric, with a definite trend towards reducing school sports and extra-curricular sports in the state of Gujarat. This research work attempts to evaluate the framework of the Olympic Charter, the Sports Authority of India, the Indian Olympic Association and the National Sports Federations. It explores the areas that need to be revamped, rejuvenated and reoriented to function in an open, democratic, equitable, transparent and accountable manner. The research is based on a mixed-methods approach, used for data collection, which includes personal interviews, document analysis and news articles. Trustworthiness checks were also conducted for quality assurance. The mixed-methods design strengthens the analysis and gives a strong base for the discussion.
Keywords: physical development, sports authority of India, sports policy, women empowerment
Procedia PDF Downloads 141
504 Analyzing the Relationship between Physical Fitness and Academic Achievement in Chinese High School Students
Authors: Juan Li, Hui Tian, Min Wang
Abstract:
In China, under the considerable pressure of 'Gaokao' – the highly competitive college entrance examination – high school teachers and parents often worry that doing physical activity would take away the students’ precious study time and may have a negative impact on academic grades. There has been a tendency to pursue high academic scores at the cost of physical exercise. Therefore, the purpose of this study was to examine the relationship between the physical fitness and academic achievement of Chinese high school students. The participants were 968 grade-one (N = 457) and grade-two (N = 511) students with an average age of 16 years, from three high schools of different levels in Beijing, China; 479 were boys, and 489 were girls. One of the schools is a top high school in China, another is a key high school in Beijing, and the other is an ordinary high school. All analyses were weighted using SAS 9.4 to ensure the representativeness of the sample. The weights were based on 12 strata of schools, sex, and grades. Physical fitness data were collected using the scores of the National Physical Fitness Test, an annual official test administered by the Ministry of Education in China. It includes a 50 m run, the sit-and-reach test, the standing long jump, a 1000 m run (for boys) or 800 m run (for girls), and pull-ups for 1 minute (for boys) or bent-knee sit-ups for 1 minute (for girls). The test is an overall evaluation of the students’ physical health on the major indexes of strength, endurance, flexibility, and cardiorespiratory function. Academic scores were obtained from the three schools with the students’ consent. The statistical analysis was conducted with SPSS 24. Independent-samples t-tests were used to examine gender group differences. Spearman’s rho bivariate correlation was adopted to test for associations between physical test results and academic performance. Statistical significance was set at p < .05. The study found that girls obtained higher fitness scores than boys (p = .000). The girls’ physical fitness test scores were positively associated with their total academic grades (rs = .103, p = .029) and with their English (rs = .096, p = .042), physics (rs = .202, p = .000) and chemistry scores (rs = .131, p = .009). No significant relationship was observed in boys. Cardiorespiratory fitness had a positive association with physics (rs = .196, p = .000) and biology scores (rs = .168, p = .023) in girls, and with English scores in boys (rs = .104, p = .029). A possible explanation for the greater association between physical fitness and academic achievement in girls rather than boys is that girls showed stronger motivation to achieve high scores in both academic and fitness tests. Driven more by the test results, girls probably tended to invest more time and energy in training for the fitness test. Higher fitness levels were generally associated with an academic benefit among girls in Chinese high schools. Therefore, physical fitness needs to be given greater emphasis among Chinese adolescents, and gender differences need to be taken into consideration.
Keywords: physical fitness, adolescents, academic achievement, high school
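The associations reported above are Spearman rank correlations; a minimal sketch with scipy, using invented scores in place of the study's data:

```python
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(1)
n_girls = 489                                        # girls' sample size above
fitness = rng.normal(75, 10, n_girls)                # invented fitness scores
physics = 0.2 * fitness + rng.normal(60, 12, n_girls)  # invented physics scores

rs, p = spearmanr(fitness, physics)
print(f"rs = {rs:.3f}, p = {p:.4f}")
```

Rank correlation is a sensible choice here because it does not assume the fitness and academic scores are linearly related or normally distributed.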
Procedia PDF Downloads 132
503 The Significance of Urban Space in Death Trilogy of Alejandro González Iñárritu
Authors: Marta Kaprzyk
Abstract:
The cinema of Alejandro González Iñárritu has not yet been subjected to much detailed analysis, which makes it exceptionally interesting research material. The purpose of this presentation is to discuss the significance of urban space in the three films of this Mexican director that form the Death Trilogy: ‘Amores Perros’ (2000), ‘21 Grams’ (2003) and ‘Babel’ (2006). The fact that in the aforementioned movies the urban space itself becomes an additional protagonist, with its own identity, psychology and the ability to transform and affect other characters, in itself warrants independent research and analysis. Independently, such a mode of presenting urban space has another function: it enables the director to complement the rest of the characters. The basis for the methodology of this description of cinematographic space is to treat its visual layer as a point of departure for a detailed analysis. At the same time, the analysis itself will be supported by recognised academic theories concerning spatial issues, which are transformed here into essential tools necessary to describe the world (mise-en-scène) created by González Iñárritu. In ‘Amores Perros’ Mexico City serves as the scenery – a place full of contradictions – depicted in the movie as a modern conglomerate and an urban jungle, as well as a labyrinth of poverty and violence. In this work, stylistic tropes can be found in an intertextual dialogue of the director with the photographs of Nan Goldin and Mary Ellen Mark. The story recounted in ‘21 Grams’, the most tragic piece in the trilogy, is characterised by an almost hyperrealistic sadism. It takes place in Memphis, which on the screen turns into an impersonal formation full of the heterotopias described by Michel Foucault and the non-places defined by Marc Augé in his essay. By contrast, the main urban space in ‘Babel’ is Tokyo, which seems to correspond perfectly with the image of places discussed by Juhani Pallasmaa in his works concerning the reception of architecture by ‘pathological senses’ in the modern (or, even more adequately, postmodern) world. It is portrayed as a city full of buildings that look so surreal that they seem completely unsuitable for humans to move between them. Ultimately, the aim of this paper is to demonstrate the coherence of the manner in which González Iñárritu designs urban spaces in his Death Trilogy. In particular, the author attempts to examine the imperative role of the cities that form the three specific microcosms in which the protagonists of the Mexican director live out their overwhelming tragedies.
Keywords: cinematographic space, Death Trilogy, film studies, González Iñárritu Alejandro, urban space
Procedia PDF Downloads 333
502 Ultra-Tightly Coupled GNSS/INS Based on High Degree Cubature Kalman Filtering
Authors: Hamza Benzerrouk, Alexander Nebylov
Abstract:
In classical GNSS/INS integration designs, the loosely coupled approach uses the GNSS-derived position and velocity as the measurement vector. This design is suboptimal from the standpoint of handling GNSS outliers/outages. The tightly coupled GPS/INS navigation filter mixes the GNSS pseudo-range and inertial measurements and obtains the vehicle navigation state as the final navigation solution. The ultra-tightly coupled GNSS/INS design combines the I (in-phase) and Q (quadrature) accumulator outputs in the GNSS receiver signal tracking loops and the INS navigation filter function into a single Kalman filter variant (EKF, UKF, SPKF, CKF or HCKF). As mentioned, the EKF and UKF are the most used nonlinear filters in the literature and are well adapted to inertial navigation state estimation when integrated with GNSS signal outputs. In this paper, it is proposed to move a step forward with more accurate filters and modern approaches called Cubature and High Degree Cubature Kalman Filtering methods. On the basis of previous results on state estimation for INS/GNSS integration, the Cubature Kalman Filter (CKF) and the High Degree Cubature Kalman Filter (HCKF) are the references for the recently developed generalized cubature-rule-based Kalman Filter (GCKF). High-degree cubature rules are the kernel of the new solution, giving more accurate estimation with less computational complexity compared with the Gauss-Hermite Quadrature Kalman Filter (GHQKF), which is not selected in this work because of its limited real-time applicability in high-dimensional state-spaces. In the ultra-tightly (deeply) coupled GNSS/INS system dynamics, an EKF is used with transition matrix factorization, together with GNSS block processing, which is well described in the paper; the intermediate frequency (IF) is assumed available, using correlator samples at a rate of 500 Hz in the presented approach. GNSS (GPS+GLONASS) measurements are assumed available, and the modern SPKF and Cubature Kalman Filter (CKF) are compared with new versions of the CKF, called high-order CKFs, based on fifth-degree spherical-radial cubature rules developed in this work. The estimation accuracy of the high-degree CKF is expected to be comparable to that of the GHKF; the results of state estimation are then observed and discussed for different initialization parameters. The results show more accurate navigation state estimation and a more robust GNSS receiver when the ultra-tightly coupled approach is applied based on the High Degree Cubature Kalman Filter.
Keywords: GNSS, INS, Kalman filtering, ultra tight integration
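At the core of the CKF is the third-degree spherical-radial rule: 2n equally weighted sigma points placed at ±√n along the principal axes of the covariance. A minimal sketch of point generation and moment propagation through a nonlinearity (illustrative only; the example dynamics are invented and this is not the full ultra-tight filter):

```python
import numpy as np

def cubature_points(mean, cov):
    """Third-degree spherical-radial rule: 2n points, equal weights 1/(2n)."""
    n = mean.size
    S = np.linalg.cholesky(cov)                           # cov = S @ S.T
    xi = np.sqrt(n) * np.hstack([np.eye(n), -np.eye(n)])  # unit directions
    return mean[:, None] + S @ xi                          # shape (n, 2n)

def propagate(mean, cov, f):
    """CKF-style moment propagation of (mean, cov) through nonlinearity f."""
    pts = cubature_points(mean, cov)
    y = np.apply_along_axis(f, 0, pts)    # f applied to each sigma point
    m = y.mean(axis=1)                    # equal weights 1/(2n)
    P = (y - m[:, None]) @ (y - m[:, None]).T / y.shape[1]
    return m, P

# Invented 2-state nonlinearity standing in for the navigation dynamics
f = lambda x: np.array([x[0] + 0.1 * x[1], 0.9 * x[1] + np.sin(x[0])])
m, P = propagate(np.array([0.5, 1.0]), np.diag([0.1, 0.2]), f)
print(m, P, sep="\n")
```

Higher-degree (e.g., fifth-degree) rules add further point sets to capture higher moments, which is what buys the HCKF its accuracy advantage at a cost far below full Gauss-Hermite quadrature.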
Procedia PDF Downloads 280
501 Data-Driven Surrogate Models for Damage Prediction of Steel Liquid Storage Tanks under Seismic Hazard
Authors: Laura Micheli, Majd Hijazi, Mahmoud Faytarouni
Abstract:
The damage reported at oil and gas industrial facilities has revealed the acute vulnerability of steel liquid storage tanks to seismic events. The failure of steel storage tanks may yield devastating and long-lasting consequences for built and natural environments, including the release of hazardous substances, uncontrolled fires, and soil contamination with hazardous materials. It is, therefore, fundamental to reliably predict the damage that steel liquid storage tanks will likely experience under future seismic hazard events. The seismic performance of steel liquid storage tanks is usually assessed using vulnerability curves obtained from the numerical simulation of a tank under different hazard scenarios. However, the computational demand of high-fidelity numerical simulation models, such as finite element models, makes the vulnerability assessment of liquid storage tanks time-consuming and often impractical. As a solution, this paper presents a surrogate-model-based strategy for predicting seismic-induced damage in steel liquid storage tanks. In the proposed strategy, the surrogate model is leveraged to reduce the computational demand of time-consuming numerical simulations. To create the data set for training the surrogate model, field damage data from past earthquake reconnaissance surveys and reports are collected. Features representative of steel liquid storage tank characteristics (e.g., diameter, height, liquid level, yield stress) and seismic excitation parameters (e.g., peak ground acceleration, magnitude) are extracted from the field damage data. The collected data are then utilized to train a surrogate model that maps the relationship between tank characteristics, seismic hazard parameters, and seismic-induced damage. Different types of surrogate algorithms, including naïve Bayes, k-nearest neighbors, decision tree, and random forest, are investigated, and results in terms of accuracy are reported. The model that yields the most accurate predictions is employed to predict future damage as a function of tank characteristics and seismic hazard intensity level. Results show that the proposed approach can be used to estimate the extent of damage in steel liquid storage tanks, where the use of data-driven surrogates represents a viable alternative to computationally expensive numerical simulation models.
Keywords: damage prediction, data-driven model, seismic performance, steel liquid storage tanks, surrogate model
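A minimal sketch of the surrogate classification step with a random forest, mapping tank and hazard features to an ordinal damage state. The feature list follows the abstract; the data are randomly generated placeholders, not the reconnaissance records:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
n = 500
# Placeholder feature matrix: diameter (m), height (m), fill level (0-1),
# yield stress (MPa), PGA (g), magnitude -- stand-ins for field damage data.
X = np.column_stack([rng.uniform(10, 60, n), rng.uniform(5, 20, n),
                     rng.uniform(0.1, 1.0, n), rng.uniform(230, 350, n),
                     rng.uniform(0.05, 1.2, n), rng.uniform(5.0, 8.0, n)])
# Placeholder damage states 0-3 (none to severe), loosely tied to PGA and fill
y = np.digitize(X[:, 4] * X[:, 2] + rng.normal(0, 0.1, n), [0.15, 0.4, 0.7])

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
print(f"held-out accuracy: {model.score(X_te, y_te):.2f}")
```

Once trained, the same `model.predict` call replaces a finite element run for each new tank/hazard combination, which is the computational saving the approach targets.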
Procedia PDF Downloads 143500 Women Writing Group as a Mean for Personal and Social Change
Authors: Michal Almagor, Rivka Tuval-Mashiach
Abstract:
This presentation explores the main processes identified in a women's writing group, an interdisciplinary field with personal and social effects. It is based on the initial findings of Ph.D. research focusing on the intersection of group processes with the element of writing, in the context of gender. Writing as a therapeutic means has been recognized and found to be highly effective. Additionally, a substantial amount of research reveals the psychological impact of group processes. However, the combination of writing and groups as a therapeutic tool has hardly been investigated; this is the contribution of this research. In the following qualitative-phenomenological study, the experiences of eight women participating in a 10-session structured writing group were investigated. We used the meeting transcripts, semi-structured interviews, and the texts to analyze and understand the experience of participating in the group. The two significant findings revealed were spiral intersubjectivity and an archaic level of semiotic language. We realized that the content and the process are interwoven; participants write, read, and discuss their texts in a group setting that enhances self-dialogue between the participants and their own narratives and texts, as well as dialogue with others. This process includes working through otherness within and between participants while discovering and creating a multiplicity of narratives. A movement of increasing shared circles from the personal to the group and to the social-cultural environment was identified, forming what we termed spiral intersubjectivity. An additional layer of findings was revealed as we listened to the resonance of the group texts and discourse; during this process, we could trace the semiotic level in addition to the symbolic one. We witnessed the dominant presence of the body and primal sensuality, expressed by rhythm, sound, and movement, signs of pre-verbal language. Those findings led us to a new understanding of the semiotic function as a way to express the fullness of women's experience and the enabling role of writing in reviving what was repressed. The poetic language serves as a bridge between the symbolic and the semiotic. Re-reading the group materials exposed another layer of expression, an old-new language. This approach suggests a feminine expression of subjective experience with personal and social importance. It is a subversive move, encouraging women to write themselves, as a craft that every woman can use, giving voice to the silent and hidden, and experiencing the power of performing 'my story'. We suggest that a women's writing group is an efficient, powerful, yet welcoming way to raise the awareness of researchers and clinicians, and more importantly of the participants, to the uniqueness of the feminine experience and to gender-sensitive curative approaches.Keywords: group, intersubjectivity, semiotic, writing
Procedia PDF Downloads 219499 Application of Principal Component Analysis and Ordered Logit Model in Diabetic Kidney Disease Progression in People with Type 2 Diabetes
Authors: Mequanent Wale Mekonen, Edoardo Otranto, Angela Alibrandi
Abstract:
Diabetic kidney disease is one of the main microvascular complications caused by diabetes. Several clinical and biochemical variables are reported to be associated with diabetic kidney disease in people with type 2 diabetes. However, their interrelations could distort the effect estimation of these variables for the disease's progression. The objective of the study is to determine how the biochemical and clinical variables in people with type 2 diabetes are interrelated with each other and how they affect kidney disease progression, through advanced statistical methods. First, principal component analysis was used to explore how the biochemical and clinical variables intercorrelate with each other, which helped us reduce a set of correlated biochemical variables to a smaller number of uncorrelated variables. Then, ordered logit regression models (cumulative, stage, and adjacent) were employed to assess the effect of biochemical and clinical variables on the ordinal response variable (progression of kidney function) while considering the proportionality assumption for more robust effect estimation. This retrospective cross-sectional study retrieved data from a type 2 diabetes cohort at a polyclinic hospital of the University of Messina, Italy. The principal component analysis yielded three uncorrelated components: principal component 1, with negative loadings of glycosylated haemoglobin, glycemia, and creatinine; principal component 2, with negative loadings of total cholesterol and low-density lipoprotein; and principal component 3, with negative loadings of high-density lipoprotein and a positive loading of triglycerides. The ordered logit models (cumulative, stage, and adjacent) showed that the first component (glycosylated haemoglobin, glycemia, and creatinine) had a significant effect on the progression of kidney disease. For instance, the cumulative odds model indicated that the first principal component (a linear combination of glycosylated haemoglobin, glycemia, and creatinine) had a strong and significant effect on the progression of kidney disease, with an odds ratio of 0.423 (p < 0.001). However, this effect was inconsistent across levels of kidney disease because the first principal component did not meet the proportionality assumption. To address the proportionality problem and provide robust effect estimates, alternative ordered logit models, such as the partial cumulative odds model, the partial adjacent category model, and the partial continuation ratio model, were used. These models suggested that clinical variables such as age, sex, body mass index, and medication (metformin), and biochemical variables such as glycosylated haemoglobin, glycemia, and creatinine, have a significant effect on the progression of kidney disease.Keywords: diabetic kidney disease, ordered logit model, principal component analysis, type 2 diabetes
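For illustration only, the Python sketch below chains PCA with a proportional-odds ordered logit using statsmodels; the variable names mirror the abstract, but the data are randomly generated placeholders, not the Messina cohort.

```python
import numpy as np
import pandas as pd
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from statsmodels.miscmodels.ordinal_model import OrderedModel

rng = np.random.default_rng(0)
biochem = pd.DataFrame(rng.normal(size=(200, 7)), columns=[
    "hba1c", "glycemia", "creatinine", "total_chol", "ldl", "hdl", "triglycerides"])
stage = pd.Series(rng.integers(0, 4, 200), name="ckd_stage")  # ordinal outcome 0-3

# Reduce the correlated biochemical variables to three uncorrelated components
pcs = PCA(n_components=3).fit_transform(StandardScaler().fit_transform(biochem))
X = pd.DataFrame(pcs, columns=["pc1", "pc2", "pc3"])

# Cumulative-odds (proportional odds) ordered logit model
res = OrderedModel(stage, X, distr="logit").fit(method="bfgs", disp=False)
print(res.summary())
print("odds ratios:", np.exp(res.params[:3].values))  # exponentiated PC coefficients
```

When the proportional-odds assumption fails for a component, as reported above, partial (non-proportional) variants relax the constraint for that covariate only.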
Procedia PDF Downloads 39498 Epigenetic and Archeology: A Quest to Re-Read Humanity
Authors: Salma A. Mahmoud
Abstract:
Epigenetics, or alterations in gene expression influenced by extragenetic factors, has emerged as one of the most promising areas for addressing gaps in our current understanding of patterns of human variation. In the last decade, research investigating epigenetic mechanisms in many fields has flourished and witnessed significant progress. It has paved the way for a new era of integrated research, especially between anthropology/archeology and the life sciences. Skeletal remains are considered the most significant source of information for studying human variation across history, and by utilizing these valuable remains, we can interpret past events, cultures, and populations. In addition to their archeological, historical, and anthropological importance, studying bones has great implications for other fields such as medicine and science. Bones can also hold within them the secrets of the future, as they can act as predictive tools for health, societal characteristics, and dietary requirements. Bones in their basic form are composed of cells (osteocytes) that are affected by both genetic and environmental factors; genetic factors alone can explain only a small part of their variability. The primary objective of this project is to examine the epigenetic landscape/signature within the bones of archeological remains as a novel marker that could reveal new ways to conceptualize chronological events, gender differences, social status, and ecological variations. We attempt here to address discrepancies in common variants such as the methylome, as well as novel epigenetic regulators such as chromatin remodelers, which, to the best of our knowledge, have not yet been investigated by anthropologists/paleoepigeneticists, using a plethora of techniques (biological, computational, and statistical). Moreover, extracting epigenetic information from bones will highlight the importance of osseous material as a vector for studying human beings in several contexts (social, cultural, and environmental) and strengthen its essential role as a model system that can be used to investigate and reconstruct various cultural, political, and economic events. We also address all the steps required to plan and conduct an epigenetic analysis of bone materials (modern and ancient), as well as discuss the key challenges facing researchers aiming to investigate this field. In conclusion, this project will serve as a primer for bioarcheologists/anthropologists and human biologists interested in incorporating epigenetic data into their research programs. Understanding the roles of epigenetic mechanisms in bone structure and function will be very helpful for a better comprehension of bone biology and will highlight the essential role of bones as interdisciplinary vectors and key materials in archeological research.Keywords: epigenetics, archeology, bones, chromatin, methylome
Procedia PDF Downloads 108497 Identification of Natural Liver X Receptor Agonists as the Treatments or Supplements for the Management of Alzheimer and Metabolic Diseases
Authors: Hsiang-Ru Lin
Abstract:
Cholesterol plays an essential role in the progression of numerous important diseases, including atherosclerosis and Alzheimer's disease, so the development of suitable cholesterol-lowering agents is urgent. Liver X receptor (LXR) is a ligand-activated transcription factor whose natural ligands are cholesterols, oxysterols, and glucose. Once activated, LXR transactivates the transcription of various genes, including CYP7A1, ABCA1, and SREBP1c, involved in lipid metabolism, glucose metabolism, and inflammatory pathways. Essentially, the upregulation of ABCA1 facilitates cholesterol efflux from cells and attenuates the production of beta-amyloid (Abeta) 42 in the brain, so LXR is a promising target for developing cholesterol-lowering agents and preventive treatments for Alzheimer's disease. Engelhardia roxburghiana is a deciduous tree growing in India, China, and Taiwan; however, its chemical constituents have only been reported to exhibit antitubercular and anti-inflammatory effects. In this study, four compounds, engelheptanoxides A and C and engelhardiols A and B, isolated from the root of Engelhardia roxburghiana, were evaluated for their agonistic activity toward LXR using transient transfection reporter assays in HepG2 cells. Furthermore, their binding modes within the LXR ligand-binding pocket were generated by molecular modeling programs. In the cell-based biological assays, engelheptanoxides A and C and engelhardiols A and B, which showed no cytotoxic effect on the proliferation of HepG2 cells, exerted clear LXR agonistic effects with activity similar to that of T0901317, a novel synthetic LXR agonist. Further modeling studies, including docking and SAR (structure-activity relationship) analyses, showed that these compounds can occupy the LXR ligand-binding pocket in a manner similar to T0901317. Thus, LXR is one of the nuclear receptors targeted by the pharmaceutical industry for developing treatments for Alzheimer's disease and atherosclerosis. Importantly, the cell-based assays, together with molecular modeling studies suggesting a plausible binding mode, demonstrate that engelheptanoxides A and C and engelhardiols A and B function as LXR agonists. This is the first report to demonstrate that the extract of Engelhardia roxburghiana contains LXR agonists. As such, these active components of Engelhardia roxburghiana or subsequent analogs may show important therapeutic effects through selective modulation of the LXR pathway.Keywords: Liver X receptor (LXR), Engelhardia roxburghiana, CYP7A1, ABCA1, SREBP1c, HepG2 cells
Procedia PDF Downloads 420496 Adaptation of the Scenario Test for Greek-speaking People with Aphasia: Reliability and Validity Study
Authors: Marina Charalambous, Phivos Phylactou, Thekla Elriz, Loukia Psychogios, Jean-Marie Annoni
Abstract:
Background: Evidence-based practices for the evaluation and treatment of people with aphasia (PWA) in Greek are mainly impairment-based. Functional and multimodal communication is usually underassessed and neglected by clinicians. This study explores the adaptation and psychometric testing of the Greek (GR) version of the Scenario Test. The Scenario Test assesses the everyday functional communication of PWA in an interactive multimodal communication setting with the support of an active communication facilitator. Aims: To define the reliability and validity of the Scenario Test-GR and discuss its clinical value. Methods & Procedures: The Scenario Test-GR was administered to 54 people with chronic stroke (6+ months post-stroke): 32 PWA and 22 people with stroke without aphasia. Participants were recruited from Greece and Cyprus. All measures were performed in an interview format. Standard psychometric criteria were applied to evaluate the reliability (internal consistency, test-retest, and interrater reliability) and validity (construct and known-groups validity) of the Scenario Test-GR. Video analysis was performed for the qualitative examination of the communication modes used. Outcomes & Results: The Scenario Test-GR shows high levels of reliability and validity. High scores of internal consistency (Cronbach's α = .95), test-retest reliability (ICC = .99), and interrater reliability (ICC = .99) were found. Interrater agreement in scores on individual items fell between good and excellent levels of agreement. Correlations with a tool measuring language function in aphasia (the Aphasia Severity Rating Scale of the Boston Diagnostic Aphasia Examination), a measure of functional communication (the Communicative Effectiveness Index), and two instruments examining the psychosocial impact of aphasia (the Stroke and Aphasia Quality of Life questionnaire and the Aphasia Impact Questionnaire) revealed good convergent validity (all ps < .05). Results showed good known-groups validity (Mann-Whitney U = 96.5, p < .001), with significantly higher scores for participants without aphasia compared to those with aphasia. Conclusions: The psychometric qualities of the Scenario Test-GR support the reliability and validity of the tool for the assessment of functional communication in Greek-speaking PWA. The Scenario Test-GR can be used to assess multimodal functional communication, orient aphasia rehabilitation goal setting towards the activity and participation level, and serve as an outcome measure of everyday communication. Future studies will focus on the measurement of sensitivity to change in PWA with severe non-fluent aphasia.Keywords: the scenario test GR, functional communication assessment, people with aphasia (PWA), tool validation
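As a sketch of the reliability computations named above (on hypothetical score data, since the study's raw scores are not reproduced here), the Python code below computes Cronbach's alpha by the standard variance formula and a Mann-Whitney U known-groups comparison with SciPy.

```python
import numpy as np
from scipy import stats

def cronbach_alpha(items):
    # items: participants x items score matrix
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_var = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_var / total_var)

rng = np.random.default_rng(3)
ability = rng.integers(0, 2, size=(32, 1))             # per-participant offset
scores = rng.integers(0, 4, size=(32, 18)) + ability   # hypothetical item scores

print("Cronbach's alpha:", round(cronbach_alpha(scores), 2))

# Known-groups validity: compare hypothetical totals with vs. without aphasia
aphasia, controls = scores.sum(axis=1)[:16], scores.sum(axis=1)[16:] + 6
u, p = stats.mannwhitneyu(controls, aphasia, alternative="two-sided")
print(f"Mann-Whitney U = {u:.1f}, p = {p:.4f}")
```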
Procedia PDF Downloads 128495 Analyzing Transit Network Design versus Urban Dispersion
Authors: Hugo Badia
Abstract:
This research addresses which transit network structure is most suitable to serve specific demand requirements under increasing urban dispersion. Two main approaches to network design are found in the literature. On the one hand, a traditional answer, widespread in our cities, develops a high number of lines to connect most origin-destination pairs by direct trips, an approach based on the idea that users are averse to transfers. On the other hand, some authors advocate an alternative design characterized by simple networks where transferring is essential to complete most trips. To determine which of them is the better option, we use a two-step methodology. First, by means of an analytical model, three basic network structures are compared: a radial scheme, the starting point for the other two structures; a direct-trip-based network; and a transfer-based one, the latter two representing the alternative transit network designs. The model optimizes the network configuration with regard to total cost for each structure. For a given dispersion scenario, the best alternative is the structure with the minimum cost. The dispersion degree is defined in a simple way by considering that only a central area attracts all trips: if this area is small, the mobility pattern is highly concentrated; if this area is very large, the city is highly decentralized. In this first step, we can determine the area of applicability of each structure as a function of the urban dispersion degree. The analytical results show that a radial structure is suitable when demand is highly centralized; however, when demand starts to scatter, new transit lines should be implemented to avoid transfers. If urban dispersion advances further, introducing more lines is no longer a good alternative; in this case, the best solution is a change of structure, from direct trips to a network based on transfers. The area of applicability of each network strategy is not constant; it depends on the characteristics of demand, the city, and the transport technology. In the second step, we translate the analytical results to a real case study through the relationship between the dispersion parameters of the model and direct measures of dispersion in a real city. Two dimensions of the urban sprawl process are considered: concentration, measured by the Gini coefficient, and centralization, measured by an area-based centralization index. Once the real dispersion degree is estimated, we can identify in which area of applicability the city is located. In summary, from a strategic point of view, this methodology allows us to determine the best network design approach for a city by comparing the theoretical results with the real dispersion degree.Keywords: analytical network design model, network structure, public transport, urban dispersion
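Of the two dispersion measures, the Gini coefficient is simple to compute; the Python sketch below uses the standard sorted-cumulative-share formula on invented trip-end counts per zone (the zone counts are illustrative assumptions, not data from the study).

```python
import numpy as np

def gini(x):
    # Gini coefficient from the Lorenz curve of sorted shares:
    # G = (n + 1 - 2 * sum of cumulative shares) / n
    x = np.sort(np.asarray(x, dtype=float))
    shares = np.cumsum(x) / x.sum()
    n = x.size
    return (n + 1 - 2 * shares.sum()) / n

concentrated = np.array([5, 5, 10, 10, 400])   # most trips end in one central zone
dispersed = np.array([80, 85, 90, 85, 90])     # trips spread evenly across zones
print("Gini, concentrated city:", round(gini(concentrated), 2))  # high, ~0.74
print("Gini, dispersed city:", round(gini(dispersed), 2))        # low, ~0.02
```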
Procedia PDF Downloads 230494 Automatic Aggregation and Embedding of Microservices for Optimized Deployments
Authors: Pablo Chico De Guzman, Cesar Sanchez
Abstract:
Microservices are a software development methodology in which applications are built by composing a set of independently deployable, small, modular services. Each service runs as a separate process and is instantiated and deployed on one or more machines (we assume that different microservices are deployed on different machines). Microservices are becoming the de facto standard for developing distributed cloud applications due to their shorter release cycles. In principle, the responsibility of a microservice can be as simple as implementing a single function, which can lead to the following issues: - Resource fragmentation due to the virtual machine boundary. - Poor communication performance between microservices. Two composition techniques can be used to optimize resource fragmentation and communication performance: aggregation and embedding of microservices. Aggregation allows the deployment of a set of microservices on the same machine using a proxy server. Aggregation helps to reduce resource fragmentation and is particularly useful when the aggregated services have a similar scalability behavior. Embedding deals with communication performance by deploying on the same virtual machine those microservices that require a communication channel (localhost bandwidth is reported to be about 40 times faster than cloud vendor local networks, and it offers better reliability). Embedding can also reduce dependencies on load-balancer services, since the communication takes place on a single virtual machine. For example, assume that microservice A has two instances, a1 and a2, and it communicates with microservice B, which also has two instances, b1 and b2. One embedding can deploy a1 and b1 on machine m1, while a2 and b2 are deployed on a different machine m2. This deployment configuration allows each pair (a1-b1), (a2-b2) to communicate using the localhost interface without the need for a load balancer between microservices A and B. Aggregation and embedding techniques are complex, since different microservices might have incompatible runtime dependencies that prevent them from being installed on the same machine. There is also a security concern, since the attack surface between microservices can be larger. Luckily, container technology allows several processes to run on the same machine in an isolated manner, resolving the incompatible runtime dependencies and the security concern above, and thus greatly simplifying aggregation/embedding implementations: a microservice container is simply deployed on the same machine as the aggregated/embedded microservice container. Therefore, a wide variety of deployment configurations can be described by combining aggregation and embedding to create an efficient and robust microservice architecture. This paper presents a formal method that receives a declarative definition of a microservice architecture and proposes different optimized deployment configurations by aggregating/embedding microservices. The first prototype is based on i2kit, a deployment tool also submitted to ICWS 2018. The proposed prototype optimizes the following parameters: network/system performance, resource usage, resource costs, and failure tolerance.Keywords: aggregation, deployment, embedding, resource allocation
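To make the embedding idea concrete, the Python sketch below groups services that communicate onto the same machine using a union-find pass over declared communication edges; this is a toy illustration of the concept, not the i2kit prototype or its actual input format, and all service names and fields are invented.

```python
from dataclasses import dataclass, field

@dataclass
class Service:
    name: str
    image: str                        # container image; containers isolate runtimes
    talks_to: set = field(default_factory=set)

def embed_groups(services):
    # Union-find: co-locate every pair of services joined by a communication edge,
    # so their traffic can use the localhost interface without a load balancer.
    parent = {s.name: s.name for s in services}

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]   # path compression
            x = parent[x]
        return x

    for s in services:
        for peer in s.talks_to:
            parent[find(s.name)] = find(peer)

    groups = {}
    for s in services:
        groups.setdefault(find(s.name), []).append(s.name)
    return list(groups.values())

svcs = [Service("api", "node:18", {"auth"}), Service("auth", "python:3.11"),
        Service("batch", "python:3.11")]
print(embed_groups(svcs))   # [['api', 'auth'], ['batch']]
```

A real placement pass would additionally check scalability behavior (for aggregation) and machine capacity before merging groups.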
Procedia PDF Downloads 203493 An Evolutionary Approach for QAOA for Max-Cut
Authors: Francesca Schiavello
Abstract:
This work aims to create a hybrid algorithm combining the Quantum Approximate Optimization Algorithm (QAOA) with an Evolutionary Algorithm (EA) in place of traditional gradient-based optimization processes. QAOAs were first introduced in 2014; at the time, the algorithm performed better than the best known classical algorithm for Max-Cut. Whilst classical algorithms have since improved and returned to being faster and more efficient, this was a huge milestone for quantum computing, and the original work is often used as a benchmarking tool and a foundation for exploring QAOA variants. This, alongside other famous algorithms like Grover's or Shor's, highlights the potential that quantum computing holds. It also points toward a real quantum advantage which, if the hardware continues to improve, could usher in a revolutionary era. Given that the hardware is not there yet, many scientists are working on the software side in the hope of future progress. Some of the major limitations holding back quantum computing are the quality of qubits and the noisy interference they introduce into solutions, the barren plateaus that effectively hinder the optimization search in the latent space, and the limited number of qubits, which restricts the scale of the problems that can be solved. These three issues are intertwined and are part of the motivation for using EAs in this work. Firstly, EAs do not rely on gradient-based or linear optimization methods for the search in the latent space, and because of this freedom from gradients, they should suffer less from barren plateaus. Secondly, given that this algorithm performs a search in the solution space through a population of solutions, it can be parallelized to speed up the search and optimization. The evaluation of the cost function, as in many other algorithms, is notoriously slow, and the ability to parallelize it can drastically improve the competitiveness of QAOAs with respect to purely classical algorithms. Thirdly, because of the nature and structure of EAs, solutions can be carried forward in time, making them more robust to noise and uncertainty. Preliminary results show that the EA attached to QAOA can perform on par with the traditional QAOA using a COBYLA optimizer, which is a linear-approximation-based method, and in some instances it can even find a better Max-Cut. Whilst the final objective of the work is to create an algorithm that can consistently beat the original QAOA, or its variants, through either speedups or solution quality, this initial result is promising and shows the potential of EAs in this field. Further tests need to be performed on an array of different graphs, with the parallelization aspect of the work commencing in October 2023 and tests on real hardware scheduled for early 2024.Keywords: evolutionary algorithm, max cut, parallel simulation, quantum optimization
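As a hedged sketch of the hybrid idea (a simple EA replacing the gradient-based optimizer), the Python code below simulates a small p-layer QAOA circuit for Max-Cut with NumPy statevectors and evolves the angle parameters with truncation selection and Gaussian mutation; the graph, population settings, and operator choices are illustrative assumptions, not the settings from this work.

```python
import numpy as np

def cut_values(n, edges):
    # Cut size of every basis bitstring z (the diagonal cost Hamiltonian)
    bits = (np.arange(2**n)[:, None] >> np.arange(n)) & 1
    return sum((bits[:, i] != bits[:, j]).astype(float) for i, j in edges)

def qaoa_expectation(angles, n, edges, p):
    gammas, betas = angles[:p], angles[p:]
    cuts = cut_values(n, edges)
    state = np.full(2**n, 1 / np.sqrt(2**n), dtype=complex)   # |+...+>
    for layer in range(p):
        state = state * np.exp(-1j * gammas[layer] * cuts)    # cost unitary (phase)
        c, s = np.cos(betas[layer]), -1j * np.sin(betas[layer])
        psi = state.reshape((2,) * n)
        for q in range(n):                                    # mixer RX(2*beta) per qubit
            psi = np.moveaxis(psi, q, 0)
            psi = np.stack([c * psi[0] + s * psi[1], s * psi[0] + c * psi[1]])
            psi = np.moveaxis(psi, 0, q)
        state = psi.reshape(-1)
    return float((np.abs(state) ** 2) @ cuts)                 # expected cut value

def evolve(n, edges, p=1, pop=30, gens=40, sigma=0.3, seed=0):
    # (mu + lambda) EA over the 2p angles: keep the best half, mutate it
    rng = np.random.default_rng(seed)
    population = rng.uniform(0, np.pi, size=(pop, 2 * p))
    for _ in range(gens):
        fit = np.array([qaoa_expectation(ind, n, edges, p) for ind in population])
        parents = population[np.argsort(fit)[-(pop // 2):]]       # truncation selection
        children = parents + rng.normal(0, sigma, parents.shape)  # Gaussian mutation
        population = np.vstack([parents, children])
    fit = np.array([qaoa_expectation(ind, n, edges, p) for ind in population])
    return population[fit.argmax()], fit.max()

edges = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0)]   # 5-node ring, max cut = 4
best_angles, best_expectation = evolve(5, edges, p=1)
print(best_angles, best_expectation)
```

Because each individual's expectation is evaluated independently, the fitness loop is the natural place to parallelize, matching the speedup argument above.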
Procedia PDF Downloads 60492 Towards Accurate Velocity Profile Models in Turbulent Open-Channel Flows: Improved Eddy Viscosity Formulation
Authors: W. Meron Mebrahtu, R. Absi
Abstract:
Velocity distribution in turbulent open-channel flows is organized in a complex manner, owing to the large spatial and temporal variability of fluid motion resulting from the free-surface turbulent flow condition. The phenomenon is complicated further by the complex geometry of channels and the presence of transported solids. Thus, several efforts have been made to understand the phenomenon and obtain accurate mathematical models suitable for engineering applications. However, predictions are often inaccurate because oversimplified assumptions are involved in modeling this complex phenomenon. Therefore, the aim of this work is to study velocity distribution profiles and obtain simple, more accurate, and predictive mathematical models. Particular focus is placed on acceptable simplifications of the general transport equations and an accurate representation of eddy viscosity. A wide rectangular open channel seems suitable to begin the study; other assumptions are a smooth wall and sediment-free flow under steady and uniform flow conditions. These assumptions allow examining the effects of the bottom wall and the free surface only, which is a necessary step before dealing with more complex flow scenarios. For this flow condition, two ordinary differential equations are obtained for the velocity profile: one from the Reynolds-averaged Navier-Stokes (RANS) equations and one from the equilibrium between turbulent kinetic energy (TKE) production and dissipation. Different analytic models for eddy viscosity, TKE, and mixing length were then assessed. Computed velocity profiles were compared to experimental data for different flow conditions and to the well-known linear, log, and log-wake laws. Results show that the model based on the RANS equation provides more accurate velocity profiles. In the viscous sublayer and buffer layer, the method based on Prandtl's eddy viscosity model and the Van Driest mixing length gives more precise results. For the log layer and outer region, a mixing length equation derived from Von Karman's similarity hypothesis provides the best agreement with measured data, except near the free surface, where an additional correction based on a damping function for eddy viscosity is used. This method yields more accurate velocity profiles with the same value of the damping coefficient, valid under different flow conditions. This work continues with the investigation of narrow channels, complex geometries, and the effect of solids transported in sewers.Keywords: accuracy, eddy viscosity, sewers, velocity profile
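As a numerical illustration of the near-wall treatment described above, the Python sketch below combines Prandtl's mixing-length eddy viscosity with the Van Driest damping function and integrates the resulting velocity gradient across the depth; the flow parameters are arbitrary assumptions, and the outer-region wake correction discussed in the abstract is omitted.

```python
import numpy as np

# Mixing-length closure with Van Driest damping for a wide smooth channel:
# (nu + l^2 |du/dy|) du/dy = u_tau^2 (1 - y/h), solved pointwise for du/dy.
kappa, A_plus = 0.41, 26.0
u_tau, h, nu = 0.05, 0.1, 1e-6   # friction velocity (m/s), depth (m), viscosity (m^2/s)

y = np.linspace(1e-6, h, 4000)
y_plus = y * u_tau / nu
l = kappa * y * (1 - np.exp(-y_plus / A_plus))   # Van Driest mixing length
tau = u_tau**2 * (1 - y / h)                     # linear shear-stress distribution / rho

# Positive root of the quadratic l^2 g^2 + nu g - tau = 0, in a cancellation-free form
dudy = 2 * tau / (nu + np.sqrt(nu**2 + 4 * l**2 * tau))
u = np.concatenate([[0.0], np.cumsum(0.5 * (dudy[1:] + dudy[:-1]) * np.diff(y))])
print(f"surface velocity u(h) = {u[-1]:.3f} m/s, u+ = {u[-1] / u_tau:.1f}")
```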
Procedia PDF Downloads 112491 The Effects of Computer Game-Based Pedagogy on Graduate Students Statistics Performance
Authors: Eva Laryea, Clement Yeboah
Abstract:
A pretest-posttest within-subjects experimental design was employed to examine the effects of a computerized basic statistics learning game on the achievement and statistics-related anxiety of students enrolled in an introductory graduate statistics course. Participants (N = 34) were graduate students in a variety of programs at a state-funded research university in the southeastern United States. We analyzed pretest-posttest differences using paired-samples t-tests for achievement and for statistics anxiety. The results of the t-test for statistical knowledge were statistically significant, indicating significant mean gains as a function of the game-based intervention. Likewise, the results of the t-test for statistics-related anxiety were also statistically significant, indicating a decrease in anxiety from pretest to posttest. The implications of the present study are significant for both teachers and students. For teachers, using computer games developed by the researchers can help create a more dynamic and engaging classroom environment, as well as improve student learning outcomes. For students, playing these educational games can help develop important skills such as problem solving, critical thinking, and collaboration. Students can develop interest in the subject matter and spend quality time learning the material as they play, without realizing that they are studying a course they presumed to be hard. The future directions of the present study are promising as technology continues to advance and become more widely available. Potential future developments include the integration of virtual and augmented reality into educational games, the use of machine learning and artificial intelligence to create personalized learning experiences, and the development of new and innovative game-based assessment tools. It is also important to consider the ethical implications of computer game-based pedagogy, such as the potential for games to perpetuate harmful stereotypes and biases. As the field continues to evolve, it will be crucial to address these issues and work towards creating inclusive and equitable learning experiences for all students. This study has the potential to revolutionize the way graduate students learn basic statistics and offers exciting opportunities for future development and research. It is an important area of inquiry for educators, researchers, and policymakers, and it will continue to be a dynamic and rapidly evolving field for years to come.Keywords: pretest-posttest within subjects, experimental design, achievement, statistics-related anxiety
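For readers who want to replicate the analysis pattern (not the study's actual data, which are not reproduced here), the Python sketch below runs the two paired-samples t-tests on simulated pretest/posttest scores shaped like the design described above.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
n = 34                                   # sample size reported in the abstract
pre_knowledge = rng.normal(60, 10, n)    # hypothetical pretest achievement scores
post_knowledge = pre_knowledge + rng.normal(8, 6, n)   # simulated gains after the game
pre_anxiety = rng.normal(70, 12, n)
post_anxiety = pre_anxiety - rng.normal(10, 7, n)      # simulated anxiety reduction

t_k, p_k = stats.ttest_rel(post_knowledge, pre_knowledge)   # achievement gain
t_a, p_a = stats.ttest_rel(post_anxiety, pre_anxiety)       # anxiety change
print(f"knowledge: t({n - 1}) = {t_k:.2f}, p = {p_k:.4g}")
print(f"anxiety:   t({n - 1}) = {t_a:.2f}, p = {p_a:.4g}")
```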
Procedia PDF Downloads 58