Search results for: functional linear regression
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 8460


390 Single Cell and Spatial Transcriptomics: A Beginner's Viewpoint from the Conceptual Pipeline

Authors: Leo Nnamdi Ozurumba-Dwight

Abstract:

Messenger ribonucleic acid (mRNA) molecules are protein-encoding. These protein-encoding mRNA molecules (which collectively constitute the transcriptome), when analyzed by RNA sequencing (RNAseq), unveil the nature of gene expression at the RNA level. The obtained gene expression profiles provide clues to cellular traits and their dynamics. These can be studied in relation to function and responses. RNAseq is a practical concept in genomics, as it enables the detection and quantitative analysis of mRNA molecules. Single cell and spatial transcriptomics both present varying avenues for exposition of the genomic characteristics of single cells and pooled cells in disease conditions such as cancer, auto-immune diseases, and hematopoietic diseases, among others, from investigated biological tissue samples. Single cell transcriptomics helps conduct a direct assessment of each building unit of tissues (the cell) during diagnosis and molecular gene expression studies. A typical technique to achieve this is single-cell RNA sequencing (scRNAseq), which enables high-throughput gene expression studies. However, this technique generates expression data for several cells that lack the cells' positional coordinates within the tissue. As science is developmental, the use of complementary pre-established tissue reference maps built with molecular and bioinformatics techniques has innovatively sprung forth and is now used to resolve this setback, producing both levels of data in one shot of scRNAseq analysis. This is an emerging conceptual approach in methodology for integrative and progressively dependable transcriptomics analysis. It can support in-situ analysis for a better understanding of tissue functional organization, unveil new biomarkers for early-stage detection of diseases and biomarkers for therapeutic targets in drug development, and exposit the nature of cell-to-cell interactions. These are also vital genomic signatures and characterizations for clinical applications. Over the past decades, RNAseq has generated a wide array of information that is igniting bespoke breakthroughs and innovations in biomedicine. On the other hand, spatial transcriptomics is tissue-level based and is utilized to study biological specimens with heterogeneous features. It exposits the gross identity of investigated mammalian tissues, which can then be used to study cell differentiation, track cell-line trajectory patterns and behavior, and examine regulatory homeostasis in disease states. It also requires referenced positional analysis to build up the genomic signatures that will be assessed from the single cells in the tissue sample. Given these two presented approaches to RNA transcriptomics study in varying quantities of cell lines, with avenues for appropriate resolutions, both approaches have made the study of gene expression from mRNA molecules interesting, progressive, and developmental, helping to tackle health challenges head-on.
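
As a concrete illustration of the conceptual pipeline described above, the following is a minimal sketch of a conventional scRNAseq preprocessing and clustering workflow using the scanpy library (it is not part of the original abstract); the input directory is hypothetical, and the Leiden clustering step additionally requires the leidenalg package.

```python
# Minimal, conventional scRNAseq pipeline sketch (hypothetical input path):
# load counts, filter, normalize, reduce dimensions, and cluster cells.
import scanpy as sc

adata = sc.read_10x_mtx("filtered_feature_bc_matrix/")  # hypothetical 10x output

sc.pp.filter_cells(adata, min_genes=200)        # drop near-empty droplets
sc.pp.filter_genes(adata, min_cells=3)          # drop genes seen in too few cells
sc.pp.normalize_total(adata, target_sum=1e4)    # depth normalization
sc.pp.log1p(adata)                              # log-transform expression
sc.pp.highly_variable_genes(adata, n_top_genes=2000)
adata = adata[:, adata.var.highly_variable]

sc.tl.pca(adata, n_comps=50)
sc.pp.neighbors(adata, n_neighbors=15)
sc.tl.leiden(adata)                             # cluster cells by expression profile
sc.tl.umap(adata)                               # 2D embedding for visualization
sc.pl.umap(adata, color="leiden")
```

Spatial transcriptomics data would additionally carry per-spot tissue coordinates, which is exactly the positional information the abstract notes is missing from plain scRNAseq output.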

Keywords: transcriptomics, RNA sequencing, single cell, spatial, gene expression

Procedia PDF Downloads 113
389 Li-Ion Batteries vs. Synthetic Natural Gas: A Life Cycle Analysis Study on Sustainable Mobility

Authors: Guido Lorenzi, Massimo Santarelli, Carlos Augusto Santos Silva

Abstract:

The growth of non-dispatchable renewable energy sources in the European electricity generation mix is promoting the research of technically feasible and cost-effective solutions to make use of the excess energy produced when the demand is low. The increasing intermittent renewable capacity is becoming a challenge to face, especially in Europe, where some countries had shares of wind and solar in the total electricity produced in 2015 higher than 20%, with Denmark around 40%. However, other consumption sectors (mainly transportation) still rely considerably on fossil fuels, with a slow transition to other forms of energy. Among the opportunities for different mobility concepts, electric vehicles (EVs) and biofuel-powered vehicles (BPVs) are the options that currently appear more promising. The EVs target mainly light-duty users because of their zero (full electric) or reduced (hybrid) local emissions, while the BPVs encourage the use of alternative resources with the same technologies (thermal engines) used so far. The batteries applied to EVs are based on ions of lithium because of their overall good performance in energy density, safety, cost, and temperature behavior. Biofuels, instead, can be various, and the major difference is in their physical state (liquid or gaseous). In this study, gaseous biofuels are considered and, more specifically, Synthetic Natural Gas (SNG) produced through a Power-to-Gas process consisting of an electrochemical upgrade (with Solid Oxide Electrolyzers) of biogas with CO2 recycling. The latter process combines a first stage of electrolysis, where syngas is produced, and a second stage of methanation, in which the product gas is turned into methane and then made available for consumption. A techno-economic comparison between the two alternatives is possible, but it does not capture all the different aspects involved in the two routes for the promotion of a more sustainable mobility. For this reason, a more comprehensive methodology, i.e., Life Cycle Assessment, is adopted to describe the environmental implications of using excess electricity (directly or indirectly) for new vehicle fleets. The functional unit of the study is 1 km, and the two options are compared in terms of overall CO2 emissions, considering both Cradle-to-Gate and Cradle-to-Grave boundaries. Showing how the production and disposal of materials affect the environmental performance of the analyzed routes is useful to broaden the perspective on the impacts that different technologies produce, in addition to what is emitted during the operational life. In particular, this applies to batteries, for which the decommissioning phase has a larger impact on the environmental balance compared to electrolyzers. The lower (more than one order of magnitude) energy density of Li-ion batteries compared to SNG implies that, for the same amount of energy used, more material resources are needed to obtain the same effect. The comparison is performed in an energy system that simulates the Western European one, in order to assess which of the two solutions is more suitable to lead the de-fossilization of the transport sector with the least resource depletion and the mildest consequences for the ecosystem.
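
To make the comparison logic concrete, the following is a minimal sketch of how cradle-to-grave emissions can be expressed per the study's functional unit (1 km); all inventory numbers are hypothetical placeholders, not the study's results.

```python
# Illustrative cradle-to-grave CO2 accounting per functional unit (1 km).
# All inventory numbers below are hypothetical placeholders; they only
# show how embodied and use-phase emissions are combined.

def gco2_per_km(production_g, disposal_g, lifetime_km, use_phase_g_per_km):
    """Spread embodied (production + disposal) emissions over vehicle life
    and add the per-km use-phase emissions."""
    embodied = (production_g + disposal_g) / lifetime_km
    return embodied + use_phase_g_per_km

ev_gco2 = gco2_per_km(production_g=6.0e9, disposal_g=1.5e9,   # battery-heavy route
                      lifetime_km=150_000, use_phase_g_per_km=60.0)
sng_gco2 = gco2_per_km(production_g=3.0e9, disposal_g=0.5e9,  # electrolyzer route
                       lifetime_km=150_000, use_phase_g_per_km=90.0)
print(f"EV:  {ev_gco2:.0f} gCO2/km")
print(f"SNG: {sng_gco2:.0f} gCO2/km")
```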

Keywords: electrical energy storage, electric vehicles, power-to-gas, life cycle assessment

Procedia PDF Downloads 168
388 Reliability and Availability Analysis of Satellite Data Reception System Using Reliability Modeling

Authors: Ch. Sridevi, S. P. Shailender Kumar, B. Gurudayal, A. Chalapathi Rao, K. Koteswara Rao, P. Srinivasulu

Abstract:

System reliability and system availability evaluation play a crucial role in ensuring the seamless operation of complex satellite data reception systems with consistent performance over long periods. This paper presents a novel approach to this evaluation using a case study on one of the antenna systems at a satellite data reception ground station in India. The methodology involves analyzing the system's components, their failure rates, and the system's architecture; generating a logical reliability block diagram model; and estimating the reliability of the system from the component-level mean times between failures, assuming an exponential distribution, to derive a baseline estimate of the system's reliability. The model is then validated with system-level field failure data collected from the operational satellite data reception systems, which includes the failures that occurred, failure times, criticality of the failures, and repair times, using statistical techniques such as median ranks, regression, and Weibull analysis to extract meaningful insights regarding failure patterns and the practical reliability of the system and to assess the accuracy of the developed reliability model. The study mainly focused on the identification of critical units within the system, which are prone to failures and have a significant impact on overall performance, and brought out a reliability model of the identified critical unit. This model takes into account the interdependencies among system components and their impact on overall system reliability, provides valuable insights into the performance of the system to understand its improvement or degradation over a period of time, and will be the vital input for arriving at an optimized design for future development. It also provides a plug-and-play framework to understand the effect on system performance of any upgrades or new designs of the unit. It helps in effective planning and in formulating contingency plans to address potential system failures, ensuring the continuity of operations. Furthermore, to instill confidence in system users, the duration for which the system can operate continuously with the desired level of 3-sigma reliability was estimated, which turned out to be a vital input to the maintenance plan. System availability and station availability were also assessed by considering clash and non-clash scenarios to determine the overall system performance and potential bottlenecks. Overall, this paper establishes a comprehensive methodology for the reliability and availability analysis of complex satellite data reception systems. The results derived from this approach facilitate the effective planning of contingency measures, provide users with confidence in system performance, and enable decision-makers to make informed choices about system maintenance, upgrades, and replacements. It also aids in identifying critical units and assessing system availability in various scenarios, helping to minimize downtime and optimize resource allocation.
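
A minimal sketch of the baseline estimation step described above, assuming a series reliability block diagram, exponentially distributed times to failure, and hypothetical component MTBFs (the component names and values are placeholders, not the station's actual units):

```python
# Series-RBD baseline under the exponential assumption: R_i(t) = exp(-t/MTBF_i),
# R_sys(t) = prod_i R_i(t). Component MTBFs below are hypothetical.
import math

mtbf_hours = {"antenna_servo": 20_000, "LNA": 50_000,
              "down_converter": 40_000, "demodulator": 30_000}

def series_reliability(t, mtbfs):
    """R_sys(t) for a series RBD of exponential components."""
    return math.exp(-t * sum(1.0 / m for m in mtbfs))

lam_sys = sum(1.0 / m for m in mtbf_hours.values())   # system failure rate
print("System MTBF ~ %.0f h" % (1.0 / lam_sys))

# Mission time at 3-sigma reliability (R = 0.9973): solve exp(-lam*t) = R.
r_target = 0.9973
t_mission = -math.log(r_target) / lam_sys
print("Continuous operation at 3-sigma reliability ~ %.1f h" % t_mission)
print("R(24 h) = %.5f" % series_reliability(24, mtbf_hours.values()))
```

The field-data validation step would then fit Weibull parameters (e.g., via median-rank regression) to the observed failure times and compare the fitted reliability curve with this exponential baseline.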

Keywords: exponential distribution, reliability modeling, reliability block diagram, satellite data reception system, system availability, Weibull analysis

Procedia PDF Downloads 72
387 The Quantitative SWOT Analysis of Blood Service Activity of Kazakhstan

Authors: Alua Massalimova

Abstract:

A situation analysis of the Blood Service revealed that the strengths dominated over the weaknesses by 1.4 times, and the possibilities dominate over the threats by 1.1 times. It follows that by using the possibilities in a timely manner, the Service can strengthen its strengths and avoid threats. Priority directions from the resulting analysis are the use of subjective factors, such as the personal management capacity of Blood Center managers in exercising the legal possibilities of administrative decisions, and the mobilization of stable staff in general market conditions. We have studied indicators of the Blood Service of Kazakhstan retrospectively for the period 2011-2015. Strengths of the Blood Service of RK (Ps = 4.5): 1) the indicator of donations per 1000 people is higher than in some countries of the CIS (14 in Russia vs. 17 in Kazakhstan); 2) a functioning science centre of transfusiology; 3) the legal possibility of additional financing of blood centers in the form of paid services; 4) the absence of competitors; 5) training in the specialty of transfusiology; 6) the stable management staff of blood centers, with a high level of competence; 7) an increase in the incidence requiring transfusion therapy (oncohematology); 8) equipment upgrades; 9) the opening of a reference laboratory; 10) growth in the proportion of issued high-quality blood components; 11) the governmental organization 'Drop of Life'; 12) a functioning bone marrow register; 13) an HLA laboratory equipped with modern equipment; 14) high categorization of mid-level medical workers; 15) availability of its own specialized scientific journal; 16) a vivarium. The weaknesses (Ps = 3.5): 1) incomplete equipping of blood centers and blood transfusion cabinets according to standards; 2) the low specific weight of paid services of the CC; 3) low categorization of doctors; 4) high staff turnover; 5) the low scientific potential of industrial and clinical transfusiology; 6) low wages; 7) slight growth of harvested donor blood; 8) weak continuity with blood transfusion offices; 9) lack of promotional (donor recruitment) work; 10) the formal functioning of the Transfusion Association; 11) the absence of scientific laboratories; 12) high standard deviation from the average for donations in the republic. The possibilities (Ps = 2.7): 1) international grants; 2) organization of international seminars on clinical transfusiology; 3) cross-sectoral cooperation; 4) increasing scientific research in the field of clinical transfusiology; 5) reducing the share of donations unsuitable for transfusion and processing; 6) strengthening marketing management in the development of fee-based services; 7) advertising paid services; 8) strengthening the publishing of teaching aids; 9) team-building of staff. The threats (Ps = 2.1): 1) an increase in staff turnover; 2) the risk of litigation; 3) reduction of blood products based on evidence-based medicine; 4) regression of scientific capacity; 5) organization of marketing; 6) transfusiologist marketing; 7) reduction in the quality of the evidence base for transfusions.

Keywords: blood service, healthcare, Kazakhstan, quantitative SWOT analysis

Procedia PDF Downloads 218
386 The Use of Geographic Information System Technologies for Geotechnical Monitoring of Pipeline Systems

Authors: A. G. Akhundov

Abstract:

Issues of obtaining unbiased data on the status of pipeline systems for oil and oil product transportation become especially important when laying and operating pipelines under severe natural and climatic conditions. Essential attention is paid here to researching exogenous processes and their impact on the linear facilities of the pipeline system. Reliable operation of pipelines under severe natural and climatic conditions, and timely planning and implementation of compensating measures, are only possible if the operating conditions of pipeline systems are regularly monitored, and changes of permafrost soil and hydrological operating conditions are accounted for. One of the main reasons for emergency situations to appear is the geodynamic factor. Experience proves that emergency situations occur within areas characterized by certain environmental conditions and develop according to similar scenarios depending on the active processes. The analysis of natural and technical systems of main pipelines at different stages of monitoring makes it possible to forecast the change dynamics. The integration of GIS technologies, traditional means of geotechnical monitoring (in-line inspection, geodetic methods, field observations), and remote methods (aero-visual inspection, aerial photography, airborne and ground laser scanning) provides the most efficient solution to the problem. The unified environment of a geographic information system (GIS) is a convenient way to implement the monitoring system on main pipelines, since it provides the means to describe a complex natural and technical system and every element thereof with any set of parameters. Such a GIS enables convenient simulation of main pipelines (both in 2D and 3D), the analysis of situations, and the selection of recommendations to prevent negative natural or man-made processes and to mitigate their consequences. The specifics of such systems include: multi-dimensional simulation of facilities in the pipeline system, mathematical modelling of the processes to be observed, and the use of efficient numerical algorithms and software packages for forecasting and analysis. We see one of the most interesting possibilities of using the monitoring results in the generation of up-to-date 3D models of a facility and the surrounding area on the basis of airborne laser scanning, aerial photography data, and data from in-line inspection and instrument measurements. The resulting 3D model shall be the basis of the information system, providing the means to store and process data of geotechnical observations with references to the facilities of the main pipeline, to plan compensating measures, and to control their implementation. The use of GISs for geotechnical monitoring of pipeline systems is aimed at improving the reliability of their operation, reducing the probability of negative events (accidents and disasters), and mitigating the consequences thereof if they still occur.

Keywords: databases, 3D GIS, geotechnical monitoring, pipelines, laser scanning

Procedia PDF Downloads 183
385 Evaluation of Gesture-Based Password: User Behavioral Features Using Machine Learning Algorithms

Authors: Lakshmidevi Sreeramareddy, Komalpreet Kaur, Nane Pothier

Abstract:

Graphical passwords have existed for decades. Their major advantage is that they are easier to remember than an alphanumeric password. However, their disadvantage (especially for recognition-based passwords) is the smaller password space, making them more vulnerable to brute force attacks. Graphical passwords are also highly susceptible to shoulder-surfing. The gesture-based password method that we developed is a grid-free, template-free method. In this study, we evaluated gesture-based passwords for usability and vulnerability, and the results of the study are significant. We developed a gesture-based password application for data collection. Two modes of data collection were used: creation mode and replication mode. In creation mode (Session 1), users were asked to create six different passwords and re-enter each password five times. In replication mode, users saw a password image created by some other user for a fixed duration of time. Three different durations, 5 seconds (Session 2), 10 seconds (Session 3), and 15 seconds (Session 4), were used to mimic a shoulder-surfing attack. After the timer expired, the password image was removed, and users were asked to replicate the password. There were 74, 57, 50, and 44 users who participated in Sessions 1 through 4, respectively. In this study, machine learning algorithms were applied to determine whether a person is a genuine user or an imposter based on the password entered. Five different machine learning algorithms were deployed to compare performance in user authentication: Decision Trees, Linear Discriminant Analysis, the Naive Bayes Classifier, Support Vector Machines (SVMs) with a Gaussian Radial Basis kernel function, and K-Nearest Neighbors. Gesture-based password features vary from one entry to the next, making it difficult to distinguish between a creator and an intruder for authentication. For each password entered by the user, four features were extracted: password score, password length, password speed, and password size. All four features were normalized before being fed to a classifier. Three different classifiers were trained using data from all four sessions: Classifiers A, B, and C were trained and tested using data from the password creation session and the password replication sessions with timers of 5 seconds, 10 seconds, and 15 seconds, respectively. The classification accuracies for Classifier A using the five ML algorithms are 72.5%, 71.3%, 71.9%, 74.4%, and 72.9%, respectively; for Classifier B, 69.7%, 67.9%, 70.2%, 73.8%, and 71.2%; and for Classifier C, 68.1%, 64.9%, 68.4%, 71.5%, and 69.8%. SVMs with the Gaussian Radial Basis kernel outperform the other ML algorithms for gesture-based password authentication. The results confirm that the shorter the duration of the shoulder-surfing attack, the higher the authentication accuracy. In conclusion, behavioral features extracted from gesture-based passwords lead to less vulnerable user authentication.
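
A minimal sketch of the five-classifier comparison on the four normalized features, using scikit-learn; the feature files are hypothetical stand-ins for the collected session data, and hyperparameters are defaults rather than the study's settings.

```python
# Compare the five classifiers named above on the four behavioral features
# (score, length, speed, size). Data loading is hypothetical.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.naive_bayes import GaussianNB
from sklearn.svm import SVC
from sklearn.neighbors import KNeighborsClassifier

X = np.load("gesture_features.npy")       # hypothetical (n_samples, 4) array
y = np.load("genuine_or_imposter.npy")    # hypothetical 0/1 labels

models = {
    "Decision Tree": DecisionTreeClassifier(),
    "LDA": LinearDiscriminantAnalysis(),
    "Naive Bayes": GaussianNB(),
    "SVM (RBF)": SVC(kernel="rbf"),       # Gaussian radial basis kernel
    "k-NN": KNeighborsClassifier(n_neighbors=5),
}
for name, clf in models.items():
    pipe = make_pipeline(StandardScaler(), clf)   # normalize features first
    acc = cross_val_score(pipe, X, y, cv=5).mean()
    print(f"{name:14s} accuracy = {acc:.3f}")
```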

Keywords: authentication, gesture-based passwords, machine learning algorithms, shoulder-surfing attacks, usability

Procedia PDF Downloads 90
384 Self-Supervised Learning for Hate-Speech Identification

Authors: Shrabani Ghosh

Abstract:

Automatic offensive language detection in social media has become a pressing task in today's NLP. Manual offensive language detection is tedious and laborious work, so automatic methods based on machine learning are the only alternatives. Previous works have performed sentiment analysis over social media in different ways, such as in supervised, semi-supervised, and unsupervised manners. Domain adaptation in a semi-supervised setting has also been explored in NLP, where the source domain and the target domain are different. In domain adaptation, the source domain usually has a large amount of labeled data, while only a limited amount of labeled data is available in the target domain. Pretrained transformers like BERT and RoBERTa are further pre-trained on the masked language modeling (MLM) task in an unsupervised manner and then fine-tuned to perform text classification. In previous work, hate speech detection has been explored on Gab.ai, a free speech platform described as a platform of extremists of varying degrees in online social media. In the domain adaptation process, Twitter data is used as the source domain, and Gab data is used as the target domain. The performance of domain adaptation also depends on the cross-domain similarity. Different distance measures, such as L2 distance, cosine distance, Maximum Mean Discrepancy (MMD), Fisher Linear Discriminant (FLD), and CORAL, have been used to estimate domain similarity. Naturally, in-domain distances are small, and between-domain distances are expected to be large. Previous findings show that a pretrained masked language model (MLM) fine-tuned with a mixture of posts from the source and target domains gives higher accuracy. However, the in-domain accuracy of the hate classifier on Twitter data is 71.78%, and the out-of-domain performance of the hate classifier on Gab data goes down to 56.53%. Recently, self-supervised learning has received a lot of attention, as it is more applicable when labeled data are scarce. A few works have already explored applying self-supervised learning to NLP tasks such as sentiment classification. The self-supervised language representation model ALBERT focuses on modeling inter-sentence coherence and helps downstream tasks with multi-sentence inputs. A self-supervised attention learning approach shows better performance, as it exploits the extracted context words in the training process. In this work, a self-supervised attention mechanism is proposed to detect hate speech on Gab.ai. This framework initially classifies the Gab dataset in an attention-based self-supervised manner. In the next step, a semi-supervised classifier is trained on the combination of labeled data from the first step and unlabeled data. The performance of the proposed framework will be compared with the results described earlier and also with optimized outcomes obtained from different optimization techniques.
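
A minimal sketch of two of the domain-similarity measures named above (cosine distance and a linear-kernel MMD), computed over stand-in document embeddings; the embeddings here are random placeholders, not real Twitter or Gab data.

```python
# Domain-similarity sketch: cosine distance between domain centroids and
# a linear-kernel Maximum Mean Discrepancy. Embeddings are placeholders.
import numpy as np

rng = np.random.default_rng(0)
twitter = rng.normal(0.0, 1.0, size=(500, 768))  # stand-in source embeddings
gab     = rng.normal(0.3, 1.0, size=(500, 768))  # stand-in target embeddings

def cosine_distance(x, y):
    a, b = x.mean(axis=0), y.mean(axis=0)        # domain centroids
    return 1.0 - a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

def mmd_linear(x, y):
    """MMD with a linear kernel: squared distance between domain means."""
    delta = x.mean(axis=0) - y.mean(axis=0)
    return float(delta @ delta)

print("cosine distance:", cosine_distance(twitter, gab))
print("linear MMD:     ", mmd_linear(twitter, gab))
```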

Keywords: attention learning, language model, offensive language detection, self-supervised learning

Procedia PDF Downloads 95
383 Differences Between Mother and Father Perpetrators on Child Maltreatment Foster Care Outcomes: An Emphasis on Hispanic and Native American Families

Authors: Yadira Tejeda, Wynette Whitegoat, Dylan Jones, Brett Drake

Abstract:

Background and Purpose: Hispanic and American Indian/Alaska Native (AI/AN) families impacted by child protective services (CPS) continue to be a population about which little is known in the literature. Even less is known about the fathers of these children and the safety or risk factors attributed to child maltreatment and case outcomes. However, it is known that involving fathers in children's lives is needed for healthy development, academic achievement, and cognitive development. The few articles that have studied the impacts of engaging fathers in CPS have found that, in general, children experience shorter times in foster care, are more likely to reunify with their biological family, and overall have better case outcomes. The purpose of this study is to determine whether perpetrators identified as the mother, the father, or both impact foster care placement for Hispanic and AI/AN families in CPS. Methods: Using NCANDS Child File data, we selected reports submitted in FY2017 with at least one substantiated allegation, i.e., those with perpetrator information. Reports were categorized into one of three categories: mom-perpetrator-only, father-perpetrator-only, and both. Reports that did not fall into any one of these three categorizations were omitted (<18%). Lastly, only reports where the mother and father self-identified as Hispanic or AI/AN were kept. Foster care placement was measured as whether any child in the report was placed within three months of the report date. Multilevel logistic regression models (with random intercepts at the state and county levels) were used to model the relationship between report-parent type and foster care placement. Controls included maltreatment types, number of children, any prior reports, and age of the youngest child. Results: For AI/AN reports, 64% were mom-perpetrator-only, 20% were father-perpetrator-only, and 16% were both. Father-perpetrator-only reports had 60% lower odds of placement than mom-perpetrator-only reports, and both-perpetrator reports had 35% greater odds than mom-only reports. For Hispanics, 51% were mom-perpetrator-only, 30% father-perpetrator-only, and 19% both. Father-perpetrator-only reports had 74% lower odds than mom-perpetrator-only reports, and both-perpetrator reports had 55% greater odds than mom-perpetrator-only reports. Conclusion and Implications: Fatherhood research focused on prevention and intervention services should include Hispanic and AI/AN fathers to create culturally relevant, tailored services for both groups. By identifying differences in children's CPS trajectories conditional on fathers' involvement as a perpetrator, this analysis helps to inform where and how prevention efforts should be focused when considering variation in parental involvement for both populations. The findings indicate that the father's involvement predicts substantial differences in the probability of future placement, with the direction depending on the mother's joint involvement. Future research should investigate the mediating pathways of these relationships while accounting for the unique experiences of AI/AN and Hispanic families. Each of these racial groups faces unique and differing challenges related to CPS, yet both groups have a shared understanding of the importance of fatherhood in the lives of children. Developing a better understanding of what is happening with Hispanic and AI/AN fathers as it relates to children's CPS experiences may result in new tools to reduce child maltreatment rates in these communities.
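
A minimal sketch of how the reported odds statements map to logistic regression coefficients (OR = exp(beta), so OR = 0.40 reads as "60% lower odds"); the data below is synthetic because NCANDS microdata is restricted-access, and the sketch omits the study's state- and county-level random intercepts.

```python
# Map "X% lower/greater odds" statements to logit coefficients via
# OR = exp(beta). Synthetic data; mom-perpetrator-only is the reference.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 5_000
father_only = rng.integers(0, 2, n)                # 1 = father-perpetrator-only
both        = rng.integers(0, 2, n) * (1 - father_only)
logit = -1.0 - 0.9 * father_only + 0.3 * both      # true synthetic effects
placed = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(int)

X = sm.add_constant(np.column_stack([father_only, both]))
fit = sm.Logit(placed, X).fit(disp=False)
for name, b in zip(["const", "father_only", "both"], fit.params):
    print(f"{name:12s} beta={b:+.2f}  OR={np.exp(b):.2f}")
# e.g., OR ~ 0.40 for father_only reads as "60% lower odds" than mom-only.
```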

Keywords: child abuse, child maltreatment, NCANDS, Latino, Native American

Procedia PDF Downloads 21
382 Fuel Cells Not Only for Cars: Technological Development in Railways

Authors: Marita Pigłowska, Beata Kurc, Paweł Daszkiewicz

Abstract:

Railway vehicles are divided into two groups: traction (powered) vehicles and wagons. The traction vehicles include locomotives (line and shunting), railcars (sometimes referred to as railbuses), and multiple units (electric and diesel) consisting of several or a dozen carriages. In vehicles with diesel traction, fuel energy (petrol, diesel, or compressed gas) is converted into mechanical energy directly in the internal combustion engine or via electricity. In the latter case, the combustion engine generator produces electricity that is then used to drive the vehicle (diesel-electric drive or electric transmission). In Poland, such a solution dominates in both heavy line and shunting locomotives. The classic diesel drive is available for the lightest shunting locomotives, railcars, and passenger diesel multiple units. Vehicles with electric traction do not have their own source of energy; they use pantographs to obtain electricity from the traction network. To determine the competitiveness of the hydrogen propulsion system, it is essential to understand how it works. The basic elements of the drive system of a railway vehicle that uses hydrogen as a source of traction force are fuel cells, batteries, fuel tanks, traction motors, and main and auxiliary converters. The compressed hydrogen is stored in tanks usually located on the roof of the vehicle. This resource is replenished using specialized infrastructure while the vehicle is stationary. Hydrogen is supplied to the fuel cell, where it is oxidized. The products of this chemical reaction are electricity and water (in two forms: liquid and water vapor). Electricity is stored in batteries (so far, lithium-ion batteries are used). Electricity stored in this way is used to drive the traction motors and supply onboard equipment. The current generated by the fuel cell passes through the main converter, whose task is to adjust it to the values required by the consumers, i.e., the batteries and the traction motor. This work will attempt to construct a fuel cell with unique electrodes. This research is a trend that connects industry with science. The first goal will be to obtain hydrogen on a large scale in tube furnaces, to thoroughly analyze the obtained structures (IR), and to apply the method in fuel cells. The second goal is to create a low-energy storage and distribution station for hydrogen and electric vehicles. The scope of the research includes obtaining a carbon variety and oxide systems on a large scale using a tubular furnace and then supplying vehicles. Acknowledgments: This work is supported by the Polish Ministry of Science and Education, project "The best of the best! 4.0", number 0911/MNSW/4968 (M.P.), and grant 0911/SBAD/2102 (B.K.).
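
A back-of-envelope sketch of the energy chain described above (tank, fuel cell, converters, traction motors); the tank size, efficiencies, and traction demand are hypothetical placeholders, with only the hydrogen lower heating value (about 120 MJ/kg, i.e., 33.3 kWh/kg) being a standard constant.

```python
# Hypothetical energy chain for a hydrogen railcar: roof tanks -> fuel
# cell -> converters/motors. Only the H2 LHV is a standard value.
H2_LHV_KWH_PER_KG = 33.3   # lower heating value of hydrogen

tank_kg        = 180.0     # compressed H2 stored in roof tanks (placeholder)
fc_efficiency  = 0.50      # fuel cell electrical efficiency (placeholder)
drivetrain_eff = 0.85      # converters + traction motors (placeholder)
kwh_per_km     = 10.0      # railcar traction demand (placeholder)

electric_kwh = tank_kg * H2_LHV_KWH_PER_KG * fc_efficiency
at_wheel_kwh = electric_kwh * drivetrain_eff
print(f"Electric energy from fuel cell: {electric_kwh:,.0f} kWh")
print(f"Estimated range: {at_wheel_kwh / kwh_per_km:,.0f} km")
```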

Keywords: railway, hydrogen, fuel cells, hybrid vehicles

Procedia PDF Downloads 175
381 Finite Element Modeling and Analysis of Reinforced Concrete Coupled Shear Walls Strengthened with Externally Bonded Carbon Fiber Reinforced Polymer Composites

Authors: Sara Honarparast, Omar Chaallal

Abstract:

Reinforced concrete (RC) coupled shear walls (CSWs) are very effective structural systems for resisting lateral loads due to wind and earthquakes and are particularly used in medium- to high-rise RC buildings. However, most existing old RC structures were designed for gravity loads, or for lateral loads well below those specified in current modern international seismic codes. These structures may behave in a non-ductile manner due to poorly designed joints, insufficient shear reinforcement, and inadequate anchorage length of the reinforcing bars. This has been the main impetus to investigate an appropriate strengthening method to address or attenuate the deficiencies of these structures. The objective of this paper is twofold: (i) to evaluate the seismic performance of existing reinforced concrete coupled shear walls under reversed cyclic loading; and (ii) to investigate the seismic performance of RC CSWs strengthened with externally bonded (EB) carbon fiber reinforced polymer (CFRP) sheets. To this end, two CSWs were considered: (a) the first, representative of old CSWs, was designed according to the 1941 National Building Code of Canada (NBCC, 1941) with conventionally reinforced coupling beams; and (b) the second, representative of new CSWs, was designed according to modern NBCC 2015 and CSA/A23.3 2014 requirements with diagonally reinforced coupling beams. Both CSWs were simulated using ANSYS software. The nonlinear behavior of concrete is modeled using multilinear isotropic hardening through a multilinear stress-strain curve. An elastic-perfectly plastic stress-strain curve is used to simulate the steel material. Bond stress-slip between concrete and steel reinforcement is modeled in the conventional coupling beam, rather than assuming a perfect bond, to better represent the slip of the steel bars observed in the coupling beams of these CSWs. The old-design CSW was strengthened using CFRP sheets bonded to the concrete substrate, and the interface was modeled using an adhesive layer. The behavior of the CFRP material is considered linear elastic up to failure. After simulating the loading and boundary conditions, the specimens are analyzed under reversed cyclic loading. The comparison of results obtained for the two unstrengthened CSWs and the one retrofitted with EB CFRP sheets reveals that the strengthening method improves the seismic performance in terms of strength, ductility, and energy dissipation capacity.

Keywords: carbon fiber reinforced polymer, coupled shear wall, coupling beam, finite element analysis, modern code, old code, strengthening

Procedia PDF Downloads 186
380 Determinants of Life Satisfaction in Canada: A Causal Modelling Approach

Authors: Rose Branch-Allen, John Jayachandran

Abstract:

Background and purpose: Canada is a pluralistic, multicultural society with an ethno-cultural composition that has been shaped over time by immigrants and their descendants. Although Canada welcomes these immigrants, many will endure hardship and assimilation difficulties. Despite these life hurdles, surveys consistently disclose high life satisfaction for all Canadians. Most research studies on life satisfaction/subjective well-being (SWB) have focused on one main determinant and a variety of socio-demographic variables to delineate the determinants of life satisfaction. However, very few research studies examine life satisfaction from a holistic approach. In addition, we need to understand the causal pathways leading to life satisfaction and develop theories that explain why certain variables differentially influence the different components of SWB. The aim of this study was to utilize a holistic approach to construct a causal model and identify major determinants of life satisfaction. Data and measures: This study utilized data from the General Social Survey, with a sample size of 19,597. The exogenous concepts included age, gender, marital status, household size, socioeconomic status, ethnicity, location, immigration status, religiosity, and neighborhood. The intervening concepts included health, social contact, leisure, enjoyment, work-family balance, quality time, domestic labor, and sense of belonging. The endogenous concept, life satisfaction, was measured by multiple indicators (Cronbach's alpha = .83). Analysis: Several multiple regression models were run sequentially to estimate path coefficients for the causal model. Results: Overall, above-average satisfaction with life was reported by respondents with specific socio-economic, demographic, and lifestyle characteristics. With regard to exogenous factors, respondents who were female, younger, married, from a high socioeconomic status background, born in Canada, very religious, and who demonstrated a high level of neighborhood interaction had greater satisfaction with life. Similarly, the intervening concepts suggested respondents had greater life satisfaction if they had better health, more social contact, less time spent on passive leisure activities and more on active leisure activities, more time with family and friends, more enjoyment of volunteer activities, less time spent on domestic labor, and a greater sense of belonging to the community. Conclusions and implications: Our results suggest that a holistic approach is necessary for establishing determinants of life satisfaction, and that life satisfaction is not merely a matter of positive or negative affect but rather of understanding the causal process behind it. Even though most of our findings are consistent with previous studies, a significant number of causal connections contradict some of the findings in the literature today. We have provided possible explanations for these anomalies researchers encounter in studying life satisfaction, as well as policy implications.
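
A minimal sketch of the sequential-regression path estimation described above, reduced to one exogenous and one intervening variable on synthetic data; standardized betas serve as path coefficients, and variable names are illustrative rather than the survey's measures.

```python
# Path-model sketch: each endogenous variable is regressed on its causal
# predecessors; standardized betas are the path coefficients. Synthetic data.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(2)
n = 1_000
ses       = rng.normal(size=n)                             # exogenous
health    = 0.4 * ses + rng.normal(size=n)                 # intervening
satisfied = 0.3 * ses + 0.5 * health + rng.normal(size=n)  # endogenous

def std_betas(y, predictors):
    z = lambda v: (v - v.mean()) / v.std()
    Xz = sm.add_constant(np.column_stack([z(x) for x in predictors]))
    return sm.OLS(z(y), Xz).fit().params[1:]

print("path SES -> health:     ", std_betas(health, [ses]))
print("paths SES, health -> LS:", std_betas(satisfied, [ses, health]))
# Indirect effect of SES via health = (SES -> health) * (health -> LS).
```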

Keywords: causal model, holistic approach, life satisfaction, socio-demographic variables, subjective well-being

Procedia PDF Downloads 346
379 Near-Miss Deep Learning Approach for Neuro-Fuzzy Risk Assessment in Pipelines

Authors: Alexander Guzman Urbina, Atsushi Aoyama

Abstract:

The sustainability of traditional technologies employed in energy and chemical infrastructure poses a big challenge for our society. In making decisions related to the safety of industrial infrastructure, the values of accidental risk are becoming relevant points for discussion. However, the challenge is the reliability of the models employed to obtain the risk data. Such models usually involve a large number of variables and large amounts of uncertainty. The most efficient techniques to overcome those problems are built using Artificial Intelligence (AI), and more specifically, hybrid systems such as neuro-fuzzy algorithms. Therefore, this paper aims to introduce a hybrid algorithm for risk assessment trained using near-miss accident data. As mentioned above, the sustainability of traditional technologies related to energy and chemical infrastructure constitutes one of the major challenges that today's societies and firms are facing. Besides that, the adaptation of those technologies to the effects of climate change in sensitive environments represents a critical concern for safety and risk management. Regarding this issue, we argue that the social consequences of catastrophic risks are increasing rapidly, due mainly to the concentration of people and energy infrastructure in hazard-prone areas, aggravated by a lack of knowledge about the risks. In addition to the social consequences described above, and considering the industrial sector as critical infrastructure due to its large impact on the economy in case of failure, industrial safety has become a critical issue for today's society. Regarding this safety concern, pipeline operators and regulators have been performing risk assessments in attempts to accurately evaluate the probabilities of failure of the infrastructure and the consequences associated with those failures. However, estimating accidental risks in critical infrastructure involves substantial effort and cost due to the number of variables involved, the complexity, and the lack of information. Therefore, this paper aims to introduce a well-trained algorithm for risk assessment using deep learning, capable of dealing efficiently with the complexity and uncertainty. The advantage of deep learning using near-miss accident data is that it can be employed in risk assessment as an efficient engineering tool to treat the uncertainty of risk values in complex environments. The basic idea of using a near-miss deep learning approach for neuro-fuzzy risk assessment in pipelines is focused on the objective of improving the validity of the risk values by learning from near-miss accidents and imitating the human expertise in scoring risks and setting tolerance levels. In summary, the method of deep learning for neuro-fuzzy risk assessment involves a regression analysis called the group method of data handling (GMDH), which consists in determining the optimal configuration of the risk assessment model and its parameters employing polynomial theory.
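
A minimal sketch of one GMDH selection step on synthetic data: a quadratic polynomial candidate is fitted for each pair of inputs and ranked by an external (validation) criterion, which is the core mechanism of the group method of data handling named above.

```python
# One GMDH layer: fit a quadratic polynomial neuron for each input pair
# and keep the candidates with the lowest external (validation) error.
import itertools
import numpy as np

rng = np.random.default_rng(3)
X = rng.normal(size=(400, 5))               # e.g., near-miss risk factors
y = 0.8 * X[:, 0] * X[:, 1] + X[:, 2] ** 2 + 0.1 * rng.normal(size=400)
train, val = slice(0, 300), slice(300, 400)

def quad_features(a, b):
    """Ivakhnenko polynomial terms for a pair of inputs."""
    return np.column_stack([np.ones_like(a), a, b, a * b, a**2, b**2])

candidates = []
for i, j in itertools.combinations(range(X.shape[1]), 2):
    F = quad_features(X[:, i], X[:, j])
    w, *_ = np.linalg.lstsq(F[train], y[train], rcond=None)
    err = np.mean((F[val] @ w - y[val]) ** 2)   # external criterion
    candidates.append((err, (i, j)))

for err, pair in sorted(candidates)[:3]:        # survivors of this layer
    print(f"inputs {pair}: validation MSE = {err:.4f}")
```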

Keywords: deep learning, risk assessment, neuro-fuzzy, pipelines

Procedia PDF Downloads 284
378 Temperature-Dependent Post-Mortem Changes in Human Cardiac Troponin-T (cTnT): An Approach in Determining Postmortem Interval

Authors: Sachil Kumar, Anoop Kumar Verma, Wahid Ali, Uma Shankar Singh

Abstract:

Globally, approximately 55.3 million people die each year. In India, there were 95 lakh deaths in 2013. The number of deaths resulting from homicides, suicides, and unintentional injuries in the same period was about 5.7 lakh. The ever-increasing crime rate necessitates the development of methods for determining time since death. An erroneous time-of-death window can lead investigators down the wrong path or possibly focus a case on an innocent suspect. In this regard, research was carried out by analyzing the temperature-dependent degradation of cardiac troponin-T (cTnT) protein in the myocardium postmortem as a marker of time since death. Cardiac tissue samples were collected from (n=6) medico-legal autopsies (in the Department of Forensic Medicine and Toxicology, King George's Medical University, Lucknow, India) after informed consent from the relatives, and post-mortem degradation was studied by incubation of the cardiac tissue at room temperature (20±2 °C), 12 °C, 25 °C, and 37 °C for different time periods (~5, 26, 50, 84, 132, 157, 180, 205, and 230 hours). The cases included were subjects of road traffic accidents (RTA) without any prior history of disease who died in the hospital and whose exact time of death was known. The analysis involved extraction of the protein, separation by denaturing gel electrophoresis (SDS-PAGE), and visualization by Western blot using cTnT-specific monoclonal antibodies. The area of the bands within a lane was quantified by scanning and digitizing the image using a Gel Doc system. The data show a distinct temporal profile corresponding to the degradation of cTnT by proteases found in cardiac muscle. The disappearance of intact cTnT and the appearance of lower molecular weight bands are easily observed. Western blot data clearly showed the intact protein at 42 kDa, two major fragments (27 kDa, 10 kDa), additional minor fragments (32 kDa), and the formation of low molecular weight fragments as time increases. At 12 °C, the intensity of the intact cTnT band decreased steadily compared to RT, 25 °C, and 37 °C. Overall, both PMI and temperature had a statistically significant effect, with the greatest amount of protein breakdown observed within the first 38 h and at the highest temperature, 37 °C. The combination of high temperature (37 °C) and long postmortem interval (105.15 hrs) had the most drastic effect on the breakdown of cTnT. If the percent intact cTnT is calculated from the total area integrated within a Western blot lane, then the percent intact cTnT shows a pseudo-first-order relationship when plotted against the log of the time postmortem. These plots show a good coefficient of correlation of r = 0.95 (p = 0.003) for the regression of the human heart under different temperature conditions. The data presented demonstrate that this technique can provide an extended time range during which the postmortem interval can be estimated more accurately.
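
A minimal sketch of the reported pseudo-first-order fit, regressing percent intact cTnT against the log of time postmortem; the band-intensity values below are hypothetical placeholders, not the study's measurements, and the inversion at the end shows how such a fit could yield a crude PMI estimate.

```python
# Pseudo-first-order sketch: percent intact cTnT vs. log(time postmortem).
# Intensity values are hypothetical placeholders.
import numpy as np
from scipy import stats

hours      = np.array([5, 26, 50, 84, 132, 157, 180, 205, 230])
pct_intact = np.array([92, 78, 64, 51, 38, 33, 29, 25, 22])   # placeholder

res = stats.linregress(np.log(hours), pct_intact)
print(f"r = {res.rvalue:.2f}, p = {res.pvalue:.4f}")

# Inverting the fit gives a crude PMI estimate from an observed band ratio:
observed_pct = 45.0
pmi_hours = np.exp((observed_pct - res.intercept) / res.slope)
print(f"Estimated PMI ~ {pmi_hours:.0f} h")
```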

Keywords: degradation, postmortem interval, proteolysis, temperature, troponin

Procedia PDF Downloads 372
377 Collaboration between Grower and Research Organisations as a Mechanism to Improve Water Efficiency in Irrigated Agriculture

Authors: Sarah J. C. Slabbert

Abstract:

The uptake of research as part of the diffusion or adoption of innovation by practitioners, whether individuals or organisations, has been a popular topic in agricultural development studies for many decades. In the classical, linear model of innovation theory, the innovation originates from an expert source such as a state-supported research organisation or academic institution. The changing context of agriculture led to the development of the agricultural innovation systems model, which recognizes innovation as a complex interaction between individuals and organisations, which include private industry and collective action organisations. In terms of this model, an innovation can be developed and adopted without any input or intervention from a state or parastatal research organisation. This evolution in the diffusion of agricultural innovation has put forward new challenges for state or parastatal research organisations, which have to demonstrate the impact of their research to the legislature or a regulatory authority: Unless the organisation and the research it produces cross the knowledge paths of the intended audience, there will be no awareness, no uptake and certainly no impact. It is therefore critical for such a research organisation to base its communication strategy on a thorough understanding of the knowledge needs, information sources and knowledge networks of the intended target audience. In 2016, the South African Water Research Commission (WRC) commissioned a study to investigate the knowledge needs, information sources and knowledge networks of Water User Associations and commercial irrigators with the aim of improving uptake of its research on efficient water use in irrigation. The first phase of the study comprised face-to-face interviews with the CEOs and Board Chairs of four Water User Associations along the Orange River in South Africa, and 36 commercial irrigation farmers from the same four irrigation schemes. Intermediaries who act as knowledge conduits to the Water User Associations and the irrigators were identified and 20 of them were subsequently interviewed telephonically. The study found that irrigators interact regularly with grower organisations such as SATI (South African Table Grape Industry) and SAPPA (South African Pecan Nut Association) and that they perceive these organisations as credible, trustworthy and reliable, within their limitations. State and parastatal research institutions, on the other hand, are associated with a range of negative attributes. As a result, the awareness of, and interest in, the WRC and its research on water use efficiency in irrigated agriculture are low. The findings suggest that a communication strategy that involves collaboration with these grower organisations would empower the WRC to participate much more efficiently and with greater impact in agricultural innovation networks. The paper will elaborate on the findings and discuss partnering frameworks and opportunities to manage perceptions and uptake.

Keywords: agricultural innovation systems, communication strategy, diffusion of innovation, irrigated agriculture, knowledge paths, research organisations, target audiences, water use efficiency

Procedia PDF Downloads 100
376 Using ANN in Emergency Reconstruction Projects Post Disaster

Authors: Rasha Waheeb, Bjorn Andersen, Rafa Shakir

Abstract:

Purpose: The purpose of this study is to avoid the delays that occur in emergency reconstruction projects, especially in post-disaster circumstances, whether natural or man-made, given their particular national and humanitarian importance. We present theoretical and practical concepts for project management in the construction industry that deal with a range of global and local trends. This study aimed to identify the factors effectively causing delay in construction projects in Iraq that affect time, cost, and quality, and to find the best solutions to address and resolve delays by setting parameters to restore balance. Thirty projects in different areas of construction were selected as a sample for this study. Design/methodology/approach: This study discusses the reconstruction strategies and the delay in time and cost caused by different delay factors in selected projects in Iraq (Baghdad as a case study). A case study approach was adopted, with thirty construction projects of different types and sizes selected from the Baghdad region. Project participants from the case projects provided data about the projects through a data collection instrument distributed through a survey. A mixed-methods approach was applied in this study. Mathematical data analysis was used to construct models to predict delay in the time and cost of projects before they start. Artificial neural network (ANN) analysis was selected as the mathematical approach. These models are mainly intended to help decision-makers in construction projects find solutions to these delays before they cause any inefficiency in the project being implemented, and to tackle the obstacles thoroughly in order to develop this industry in Iraq. This approach was practiced using the data collected through the survey and questionnaire. Findings: The most important delay factors identified as leading to schedule overruns were contractor failure, redesigning of designs/plans and change orders, security issues, selection of low-price bids, weather factors, and owner failures. Some of these are quite in line with findings from similar studies in other countries/regions, but some are unique to the Iraqi project sample, such as security issues and low-price bid selection. Originality/value: We selected ANN analysis because ANNs have rarely been used in project management and have never been used in Iraq to find solutions to problems in the construction industry. This methodology can also be used for complicated problems for which there is no interpretation or solution. In some cases, statistical analysis was conducted, and in some cases the problem did not follow a linear equation or there was a weak correlation; thus, we suggest using ANNs, because they are suited to nonlinear problems and to finding the relationship between input and output data, which was really supportive.
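
A minimal sketch of the kind of ANN used, mapping delay-factor ratings to predicted overruns with a small feed-forward network in scikit-learn; the data, factor weights, and network architecture are illustrative, not those of the study.

```python
# Feed-forward ANN sketch: predict a project's overrun from delay-factor
# ratings. Synthetic data standing in for the 30-project survey sample.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(4)
X = rng.uniform(0, 5, size=(30, 6))   # 30 projects x 6 delay-factor scores
y = X @ np.array([4.0, 3.0, 2.5, 2.0, 1.0, 0.5]) + rng.normal(0, 2, 30)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
scaler = StandardScaler().fit(X_tr)
ann = MLPRegressor(hidden_layer_sizes=(8, 4), max_iter=5_000,
                   random_state=0).fit(scaler.transform(X_tr), y_tr)
print("R^2 on held-out projects:", ann.score(scaler.transform(X_te), y_te))
```

The appeal of this approach, as the abstract notes, is that it captures nonlinear input-output relationships that a linear regression with weak correlation would miss.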

Keywords: construction projects, delay factors, emergency reconstruction, innovation ANN, post disasters, project management

Procedia PDF Downloads 153
375 Multi-Objective Genetic Algorithm for Optimizing Machining Process Parameters

Authors: Dylan Santos De Pinho, Nabil Ouerhani

Abstract:

The energy consumption of machine tools is becoming critical for machine-tool builders and end-users for economic, ecological, and legislation-related reasons. Many machine-tool builders are seeking solutions that allow a reduction of the energy consumption of machine tools while preserving the same productivity rate and the same quality of machined parts. In this paper, we present the first results of a project conducted jointly by academic and industrial partners to reduce the energy consumption of a Swiss-type lathe. We employ genetic algorithms to find optimal machining parameters: the set of parameters that leads to the best trade-off between energy consumption, part quality, and tool lifetime. Three main machining process parameters are considered in our optimization technique, namely depth of cut, spindle rotation speed, and material feed rate. These machining process parameters have been identified as the most influential ones in the configuration of the Swiss-type machining process. A state-of-the-art multi-objective genetic algorithm has been used. The algorithm combines three fitness functions, which are objective functions that permit evaluating a set of parameters against the three objectives: energy consumption, quality of the machined parts, and tool lifetime. In this paper, we focus on the investigation of the fitness function related to energy consumption. Four different energy-consumption-related fitness functions have been investigated and compared. The first fitness function refers to the Kienzle cutting force model. The second fitness function uses the Material Removal Rate (MRR) as an indicator of energy consumption. The two other fitness functions are non-deterministic, learning-based functions. One fitness function uses a simple neural network to learn the relation between the process parameters and the energy consumption from experimental data; the other uses Lasso regression to determine the same relation. The goal is, then, to find out which fitness functions best predict the energy consumption of a Swiss-type machining process for a given set of machining process parameters. Once determined, these functions may be used for optimization purposes, i.e., to determine the optimal machining process parameters leading to minimum energy consumption. The performance of the four fitness functions has been evaluated. The Tornos DT13 Swiss-type lathe was used to carry out the experiments. A mechanical part including various Swiss-type machining operations was selected for the experiments. The evaluation process starts with generating a set of CNC (Computer Numerical Control) programs for machining the part at hand; each CNC program considers a different set of machining process parameters. During the machining process, the power consumption of the spindle is measured. All collected data are assigned to the appropriate CNC program and thus to the corresponding set of machining process parameters. The evaluation approach consists in calculating the correlation between the normalized measured power consumption and the normalized power consumption prediction for each of the four fitness functions. The evaluation shows that the Lasso and neural network fitness functions have the highest correlation coefficient, at 97%. The "Material Removal Rate" (MRR) fitness function has a correlation coefficient of 90%, whereas the Kienzle-based fitness function has a correlation coefficient of 80%.
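
A minimal sketch of the two deterministic fitness functions named above, written for a straight turning operation; the Kienzle material constants (kc1.1 and mc) and the parameter values are hypothetical placeholders, not the study's calibration.

```python
# Two deterministic energy-related fitness functions for turning:
# material removal rate (MRR) and a Kienzle-based cutting-power estimate.
# Material constants are hypothetical placeholders.

def mrr_mm3_per_min(ap_mm, f_mm_rev, vc_m_min):
    """MRR = depth of cut * feed * cutting speed (x1000: m -> mm)."""
    return ap_mm * f_mm_rev * vc_m_min * 1_000.0

def kienzle_power_kw(ap_mm, f_mm_rev, vc_m_min, kc11=2_000.0, mc=0.25):
    """Kienzle model: kc = kc1.1 * h^(-mc), Fc = kc * b * h, P = Fc * vc.
    For straight turning, chip thickness h ~ feed and width b ~ depth of cut."""
    h, b = f_mm_rev, ap_mm
    kc = kc11 * h ** (-mc)           # specific cutting force [N/mm^2]
    fc = kc * b * h                  # cutting force [N]
    return fc * vc_m_min / 60_000.0  # cutting power [kW]

params = dict(ap_mm=1.5, f_mm_rev=0.12, vc_m_min=180.0)
print("MRR =", mrr_mm3_per_min(**params), "mm^3/min")
print("P_c =", round(kienzle_power_kw(**params), 2), "kW")
```

In the genetic algorithm, either value would be evaluated for each candidate parameter set alongside the part-quality and tool-lifetime objectives.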

Keywords: adaptive machining, genetic algorithms, smart manufacturing, parameters optimization

Procedia PDF Downloads 137
374 European Electromagnetic Compatibility Directive Applied to Astronomical Observatories

Authors: Oibar Martinez, Clara Oliver

Abstract:

The Cherenkov Telescope Array (CTA) project aims to build two observatories of Cherenkov telescopes, located at Cerro Paranal, Chile, and La Palma, Spain. These facilities are used in this paper as a case study to investigate how to apply standard directives on electromagnetic compatibility to astronomical observatories. Cherenkov telescopes are able to provide valuable information on both galactic and extragalactic sources by measuring Cherenkov radiation, which is produced by particles that travel faster than light does in the atmosphere. The construction requirements demand compliance with the European Electromagnetic Compatibility Directive. The largest telescopes of these observatories, the Large-Sized Telescopes (LSTs), are high-precision instruments with advanced photomultipliers able to detect the faint sub-nanosecond blue light pulses produced by Cherenkov radiation. They have a 23-meter parabolic reflective surface. This surface focuses the radiation on a camera composed of an array of high-speed photosensors which are highly sensitive to radio spectrum pollution. The camera has a field of view of about 4.5 degrees and has been designed for maximum compactness and the lowest weight, cost, and power consumption. Each pixel incorporates a photosensor able to discriminate single photons, together with the corresponding readout electronics. The first LST is already commissioned and is intended to be operated as a service to the scientific community. Because of this, it must comply with a series of reliability and functional requirements and must have Conformité Européenne (CE) marking. This demands compliance with Directive 2014/30/EU on electromagnetic compatibility. The main difficulty in accomplishing this goal resides in the fact that CE marking setups and procedures were implemented for industrial products, whereas no clear protocols have been defined for scientific installations. In this paper, we aim to answer the question of how the directive should be applied to our installation to guarantee the fulfillment of all the requirements and the proper functioning of the telescope itself. Experts in optics and electromagnetism were both needed to make these kinds of decisions and to adapt tests, which were designed to be made over equipment of limited dimensions, to large scientific plants. An analysis of the elements and configurations most likely to be affected by external interference, and of those most likely to cause the maximum disturbances, was also performed. Obtaining the CE mark requires knowing what the harmonized standards are and how the elaboration of the specific requirements is defined. For this type of large installation, one needs to adapt and develop the tests to be carried out. In addition, throughout this process, certification entities and notified bodies play a key role in preparing and agreeing on the required technical documentation. We have focused our attention mostly on the technical aspects of each point. We believe that this contribution will be of interest to other scientists involved in applying industrial quality assurance standards to large scientific plants.

Keywords: CE marking, electromagnetic compatibility, European directive, scientific installations

Procedia PDF Downloads 100
373 Biophysical and Structural Characterization of Transcription Factor Rv0047c of Mycobacterium Tuberculosis H37Rv

Authors: Md. Samsuddin Ansari, Ashish Arora

Abstract:

Every year, 10 million people fall ill with tuberculosis, one of the oldest known diseases, caused by Mycobacterium tuberculosis. The success of M. tuberculosis as a pathogen is due to its ability to persist in host tissues. Multidrug-resistant (MDR) mycobacteria cases increase every day, and resistance is associated with efflux pumps controlled at the level of transcription. The transcription regulators of MDR transporters in bacteria belong to one of the following four regulatory protein families: AraC, MarR, MerR, and TetR. The phenolic acid decarboxylase repressor (PadR)-like family of transcription regulators is closely related to the MarR family. PadR was first identified as a transcription factor involved in the regulation of the phenolic acid stress response in various microorganisms, including Mycobacterium tuberculosis H37Rv. Recent research has shown that the PadR family transcription factors are global, multifunctional transcription regulators. Rv0047c is a PadR subfamily-1 protein. We are exploring the biophysical and structural characterization of Rv0047c. The rv0047 gene was amplified by PCR using primers containing EcoRI and HindIII restriction enzyme sites, cloned into the pET-NH6 vector, propagated in E. coli DH5α, and overexpressed in BL21 (λDE3) cells, followed by purification on a Ni²⁺-NTA column and by size exclusion chromatography. Differential scanning calorimetry (DSC) was performed to assess thermal stability; the protein has a Tm (transition temperature) of 55.29 ºC and a ΔH (enthalpy change) of 6.92 kcal/mol. Circular dichroism was used to probe the secondary structure and conformation, and fluorescence spectroscopy to study the tertiary structure of the protein. To understand the effect of pH on the structure, function, and stability of Rv0047c, we employed spectroscopic techniques such as circular dichroism, fluorescence, and absorbance measurements over a wide pH range (pH 2.0 to pH 12.0). At low and high pH, the protein shows drastic changes in secondary and tertiary structure. EMSA studies showed specific binding of Rv0047c to its own 30-bp promoter region. To determine the effect of complex formation on the secondary structure of Rv0047c, we examined the CD spectra of the complex of Rv0047c with the promoter DNA of rv0047. The functional role of Rv0047c was characterized by overexpressing the rv0047c gene under the control of the hsp60 promoter in Mycobacterium tuberculosis H37Rv. We have predicted the three-dimensional structure of Rv0047c using the Swiss Model and Modeller, with validity checked by the Ramachandran plot. Molecular docking of Rv0047c with DnaA was performed with PatchDock, followed by refinement with FireDock. Through this, it is possible to identify the binding hot-spot of the receptor molecule with the ligand, the nature of the interface itself, and the conformational change undergone by the protein. We are using X-ray crystallography to unravel the structure of Rv0047c. Overall, the studies show that Rv0047c may act in transcription regulation, provide insight into the activity of Rv0047c in the pH range of the subcellular environment, and help in understanding protein-protein interactions, offering a novel target to kill dormant bacteria and a potential strategy for tuberculosis control.

Keywords: mycobacterium tuberculosis, phenolic acid decarboxylase repressor, Rv0047c, circular dichroism, fluorescence spectroscopy, docking, protein-protein interaction

Procedia PDF Downloads 105
372 Comparative Analyses of Prevalence of Intimate Partner Violence in Ten Developing Countries: Evidence from Nationally Representative Surveys

Authors: Elena Chernyak, Ryan Ceresola

Abstract:

Intimate partner violence is a serious social problem that affects millions of women worldwide and impacts their health and wellbeing. Some risk factors for intimate partner violence against women (e.g., disobeying or arguing with a partner, women's age, education, and employment) are similar in many countries, both developed and developing. However, one of the principal and most significant contributors to women's vulnerability to violence perpetrated by their intimate partners is the witnessing of interparental aggression in the family of origin. Witnessing interparental violence may lead to acceptance of intimate partner violence as a normal way to resolve conflicts. Thus, the utilization of violence becomes the behavioral model: men who witnessed parental violence are more likely to employ physical violence against their female partners, whereas women who observed their fathers beating their mothers learn to tolerate aggressive behavior and become victims of domestic violence themselves. Taking into consideration the importance of this subject matter, the association between witnessing intimate partner violence in the family of origin and the experience of intimate partner violence in adulthood requires further attention. The objective of this research is to analyze and compare the prevalence of intimate partner violence in ten developing countries in different regions, namely: Mali, Haiti, Jordan, Peru, the Philippines, Pakistan, Cambodia, Egypt, the Dominican Republic, and Nigeria. Specifically, this research asks whether witnessing interparental violence in the family of origin is associated with a woman's experience of intimate partner violence during adulthood and to what extent this factor varies among the countries under investigation. This study contributes to the literature on domestic violence against women, the prevalence and experience of intimate partner violence against women in developing countries, and the risk factors, using recently collected, nationally representative population-based data from the above-mentioned countries. The data used in this research are derived from the demographic and health surveys conducted in the ten countries from 2013-2016. These surveys are cross-sectional, nationally representative surveys of ever-married or cohabitating women of reproductive age and a good source of high-quality and comprehensive information about women, their children, partners, and households. To complete this analysis, a multivariate logistic regression was run for each of the countries, and the results are presented as odds ratios in order to highlight the effect of witnessing intimate partner violence while controlling for other factors. The results of this study indicated that having witnessed partner violence in the family of origin significantly (by 50-500%) increases the likelihood of experiencing later abuse for respondents in all countries. This finding provides robust support for the intergenerational transmission of violence theory, which explains the link between interparental aggression and intimate partner violence in subsequent adult relationships as the result of a learned model of behavior observed in childhood. Furthermore, it was found that some of the control variables (e.g., education, number of children, and wealth) are associated with intimate partner violence in some of the countries under investigation while not being associated with a male partner's abusive behavior in others, which may be explained by specific cultural and economic factors.
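
As an illustration of the per-country analysis described above, the following sketch fits a logistic regression to synthetic survey-style data and converts coefficients to odds ratios with 95% confidence intervals; all variable names and values are hypothetical stand-ins, not actual DHS fields.

```python
# Minimal sketch of the per-country analysis: logistic regression with
# odds ratios and 95% CIs. Data here are synthetic; real DHS variables differ.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 2000
df = pd.DataFrame({
    "witnessed_violence": rng.integers(0, 2, n),   # hypothetical exposure
    "education_years": rng.integers(0, 15, n),     # hypothetical control
    "num_children": rng.integers(0, 7, n),         # hypothetical control
})
# Simulate an outcome in which witnessing violence raises the odds of IPV.
logit = -1.0 + 1.1 * df["witnessed_violence"] - 0.05 * df["education_years"]
df["experienced_ipv"] = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(int)

X = sm.add_constant(df[["witnessed_violence", "education_years", "num_children"]])
model = sm.Logit(df["experienced_ipv"], X).fit(disp=False)

# Exponentiate coefficients and CI bounds to obtain odds ratios.
odds = pd.DataFrame({"OR": np.exp(model.params),
                     "CI_low": np.exp(model.conf_int()[0]),
                     "CI_high": np.exp(model.conf_int()[1])})
print(odds)
```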

Keywords: intimate partner violence, domestic violence against women, developing countries, demographic and health surveys, risk factors

Procedia PDF Downloads 129
371 Correlation Analysis between Sensory Processing Sensitivity (SPS), Meares-Irlen Syndrome (MIS) and Dyslexia

Authors: Kaaryn M. Cater

Abstract:

Students with sensory processing sensitivity (SPS), Meares-Irlen Syndrome (MIS) and dyslexia can become overwhelmed and struggle to thrive in traditional tertiary learning environments. An estimated 50% of tertiary students who disclose learning-related issues are dyslexic. This study explores the relationship between SPS, MIS and dyslexia. Baseline measures will be analysed to establish any correlation between these three minority methods of information processing. SPS is an innate sensitivity trait found in 15-20% of the population and has been identified in over 100 species of animals. Humans with SPS are referred to as Highly Sensitive People (HSP), and the measure of HSP is a 27-item self-test known as the Highly Sensitive Person Scale (HSPS). A 2016 study conducted by the author established baseline data for HSP students in a tertiary institution in New Zealand. The results of the study showed that all participating HSP students believed the knowledge of SPS to be life-changing and useful in managing life and study; in addition, they believed that all tutors and incoming students should be given information on SPS. MIS is a visual processing and perception disorder that is found in approximately 10% of the population and has a variety of symptoms including visual fatigue, headaches and nausea. One way to ease some of these symptoms is through the use of colored lenses or overlays. Dyslexia is a complex phonologically based information processing variation present in approximately 10% of the population. An estimated 50% of dyslexics are thought to have MIS. The study exploring possible correlations between these minority forms of information processing is due to begin in February 2017. An invitation will be extended to all first-year students enrolled in degree programmes across all faculties and schools within the institution. An estimated 900 students will be eligible to participate in the study. Participants will be asked to complete a battery of online questionnaires including the Highly Sensitive Person Scale, the International Dyslexia Association adult self-assessment and the adapted Irlen indicator. All three scales have been used extensively in the literature and have been validated among many populations. All participants whose scores on any (or some) of the three questionnaires suggest a minority method of information processing will receive an invitation to meet with a learning advisor and will be given access to counselling services if they choose. Meeting with a learning advisor is not mandatory, and some participants may choose not to receive help. Data will be collected using the QuestionPro platform, and baseline data will be analysed using correlation and regression analysis to identify relationships and predictors between SPS, MIS and dyslexia. This study forms part of a larger three-year longitudinal study, and participants will be required to complete questionnaires at annual intervals in subsequent years of the study until completion of (or withdrawal from) their degree. At these data collection points, participants will be questioned on any additional support received relating to their minority method(s) of information processing. Data from this study will be available by April 2017.
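
A minimal sketch of the planned baseline analysis follows: pairwise correlations among the three questionnaire scores plus a simple regression of one scale on another. The scores are simulated and the column names are hypothetical, since the study data were not yet collected.

```python
# Sketch of the baseline correlation/regression analysis on simulated scores.
import numpy as np
import pandas as pd
from scipy import stats

rng = np.random.default_rng(1)
n = 900  # roughly the number of eligible first-year students
latent = rng.normal(size=n)  # shared processing-style factor (assumed)
scores = pd.DataFrame({
    "hsps": 0.6 * latent + rng.normal(size=n),      # Highly Sensitive Person Scale
    "dyslexia": 0.4 * latent + rng.normal(size=n),  # IDA adult self-assessment
    "irlen": 0.5 * latent + rng.normal(size=n),     # adapted Irlen indicator
})

# Pairwise Pearson correlations with p-values.
for a in scores.columns:
    for b in scores.columns:
        if a < b:
            r, p = stats.pearsonr(scores[a], scores[b])
            print(f"{a} vs {b}: r={r:.3f}, p={p:.4f}")

# Simple linear regression: does the HSPS score predict the Irlen score?
res = stats.linregress(scores["hsps"], scores["irlen"])
print(f"slope={res.slope:.3f}, r^2={res.rvalue**2:.3f}, p={res.pvalue:.4f}")
```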

Keywords: dyslexia, highly sensitive person (HSP), Meares-Irlen Syndrome (MIS), minority forms of information processing, sensory processing sensitivity (SPS)

Procedia PDF Downloads 222
370 Immersive and Non-Immersive Virtual Reality Applied to the Cervical Spine Assessment

Authors: Pawel Kiper, Alfonc Baba, Mahmoud Alhelou, Giorgia Pregnolato, Michela Agostini, Andrea Turolla

Abstract:

Impairment of cervical spine mobility is often related to pain triggered by musculoskeletal disorders or direct traumatic injuries of the spine. To date, these disorders are assessed with goniometers and inclinometers, which are the most popular devices used in clinical settings. Nevertheless, these technologies usually allow measurement of no more than two-dimensional range of motion (ROM) in static conditions. Conversely, the wider use of motion tracking systems able to measure 3 to 6 degrees of freedom dynamically, while performing standard ROM assessment, is limited by technical complexities in preparing the setup and by high costs. Thus, motion tracking systems are primarily used in research. These systems are an integral part of virtual reality (VR) technologies, which can be used for measuring spine mobility. To our knowledge, the accuracy of VR measures has not yet been studied within virtual environments. Thus, the aim of this study was to test the reliability of a protocol for the assessment of sensorimotor function of the cervical spine in a population of healthy subjects and to compare whether using immersive or non-immersive VR for visualization affects performance. Both VR assessments consisted of the same five exercises, and a random sequence determined which environment (i.e., immersive or non-immersive) was used for the first assessment. Subjects were asked to perform head rotation (right and left), flexion, extension, and lateral flexion (right and left side bending). Each movement was executed five times. Moreover, the participants were invited to perform head-reaching movements, i.e., head movements toward 8 targets placed along a circular perimeter at 45° intervals, visualized one by one in random order. Finally, head repositioning was assessed by head movement toward the same 8 targets as for reaching, followed by repositioning to the start point. Thus, each participant performed 46 tasks during assessment. The main measures were: ROM of rotation, flexion, extension, and lateral flexion, and the complete kinematics of the cervical spine (i.e., number of completed targets, time of execution (seconds), spatial length (cm), angle distance (°), and jerk). Thirty-five healthy participants (14 males and 21 females, mean age 28.4±6.47 years) were recruited for the cervical spine assessment with immersive and non-immersive VR environments. Comparison analysis demonstrated that head right rotation (p=0.027), extension (p=0.047), flexion (p<0.001), time (p=0.001), spatial length (p=0.004), jerk to target (p=0.032), repositioning trajectory (p=0.003), and repositioning jerk to target (p=0.007) were significantly better in immersive than in non-immersive VR. A regression model showed that assessment in immersive VR was influenced by height, repositioning trajectory (p<0.05), and handedness (p<0.05), whereas performance in non-immersive VR was influenced by height, jerk to target (p=0.002), head extension, repositioning jerk to target (p=0.002), and by age, head flexion/extension, repositioning trajectory, and weight (p=0.040). The results of this study showed higher accuracy of cervical spine assessment when executed in immersive VR. The assessment of ROM and kinematics of the cervical spine can be affected by independent and dependent variables in both immersive and non-immersive VR settings.
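
To make the kinematic outcome measures concrete, here is a minimal sketch of how spatial length and a jerk index can be computed from sampled head positions; the sampling rate and trajectory are illustrative assumptions, as the tracker details are not given in the abstract.

```python
# Sketch: spatial path length and an RMS-jerk metric from a 3D head trajectory.
import numpy as np

fs = 120.0                      # assumed tracker sampling rate (Hz)
t = np.arange(0, 2.0, 1.0 / fs)
# Synthetic head-reaching trajectory toward a target (metres).
traj = np.stack([0.1 * np.sin(np.pi * t / 2),
                 0.05 * (1 - np.cos(np.pi * t / 2)),
                 np.zeros_like(t)], axis=1)

# Spatial length: sum of distances between consecutive samples.
path_length = np.sum(np.linalg.norm(np.diff(traj, axis=0), axis=1))

# Jerk: third derivative of position, here via repeated finite differences.
vel = np.gradient(traj, 1.0 / fs, axis=0)
acc = np.gradient(vel, 1.0 / fs, axis=0)
jerk = np.gradient(acc, 1.0 / fs, axis=0)
rms_jerk = np.sqrt(np.mean(np.sum(jerk ** 2, axis=1)))

print(f"path length: {path_length * 100:.1f} cm, RMS jerk: {rms_jerk:.2f} m/s^3")
```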

Keywords: virtual reality, cervical spine, motion analysis, range of motion, measurement validity

Procedia PDF Downloads 149
369 Development and Validation of a Rapid Turbidimetric Assay to Determine the Potency of Cefepime Hydrochloride in Powder Injectable Solution

Authors: Danilo F. Rodrigues, Hérida Regina N. Salgado

Abstract:

Introduction: The emergence of microorganisms resistant to a large number of clinically approved antimicrobials has been increasing, which restricts the options for the treatment of bacterial infections. As a strategy, drugs with high antimicrobial activity are in evidence. Among these, the cephalosporin class stands out, whose fourth generation includes cefepime (CEF), a semi-synthetic product with activity against various aerobic Gram-positive bacteria (e.g., oxacillin-resistant Staphylococcus aureus) and Gram-negative bacteria (e.g., Pseudomonas aeruginosa). There are few studies in the literature regarding the development of microbiological methodologies for the analysis of this antimicrobial, so research in this area is highly relevant to optimize the analysis of this drug in industry and to ensure the quality of the marketed product. The development of microbiological methods for the analysis of antimicrobials has gained strength in recent years and has been highlighted in relation to physicochemical methods, especially because such methods make it possible to determine the bioactivity of the drug against a microorganism. In this context, the aim of this work was the development and validation of a microbiological method for the quantitative analysis of CEF in lyophilized powder for injectable solution by turbidimetric assay. Method: Staphylococcus aureus ATCC 6538 IAL 2082 was used as the test microorganism, and the culture medium chosen was Casoy broth. The test was performed with temperature control (35.0 °C ± 2.0 °C) and incubation for 4 hours in a shaker. Readings were taken at a wavelength of 530 nm with a spectrophotometer. The turbidimetric microbiological method was validated by determining the following parameters: linearity, precision (repeatability and intermediate precision), accuracy, and robustness, according to ICH guidelines. Results and discussion: Among the parameters evaluated for method validation, linearity showed suitable results in the statistical analyses, with correlation coefficients (r) of 0.9990 for the CEF reference standard and 0.9997 for the CEF sample. Precision yielded values of 1.86% (intraday), 0.84% (interday), and 0.71% (between analysts). The accuracy of the method was proven through the recovery test, where the mean value obtained was 99.92%. Robustness was verified by varying the volume of culture medium, the brand of culture medium, the incubation time in the shaker, and the wavelength. The potency of CEF present in the samples of lyophilized powder for injectable solution was 102.46%. Conclusion: The proposed turbidimetric microbiological method for the quantification of CEF in lyophilized powder for injectable solution proved to be fast, linear, precise, accurate, and robust, in accordance with all the requirements, and can be used in routine quality control analysis in the pharmaceutical industry as an option for microbiological analysis.
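
A sketch of the validation arithmetic reported above (linearity via a least-squares correlation coefficient, precision as relative standard deviation, and percent recovery) is shown below on invented numbers, since the raw readings are not given in the abstract.

```python
# Sketch of the validation calculations: linearity (r), precision (RSD),
# and recovery. All absorbance/potency values are illustrative.
import numpy as np
from scipy import stats

conc = np.array([2.0, 4.0, 6.0, 8.0, 10.0])          # µg/mL, assumed levels
absorb = np.array([0.11, 0.21, 0.32, 0.41, 0.52])    # turbidance readings

fit = stats.linregress(conc, absorb)
print(f"r = {fit.rvalue:.4f}")                       # linearity criterion

replicates = np.array([101.2, 99.5, 100.8, 98.9, 100.3, 99.9])  # % potency
rsd = 100 * replicates.std(ddof=1) / replicates.mean()
print(f"RSD = {rsd:.2f}%")                           # precision criterion

added, found = 8.0, 7.99                             # µg/mL, spiked vs measured
print(f"recovery = {100 * found / added:.2f}%")      # accuracy criterion
```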

Keywords: cefepime hydrochloride, quality control, turbidimetric assay, validation

Procedia PDF Downloads 347
368 Effect of Particle Size Variations on the Tribological Properties of Porcelain Waste Added Epoxy Composites

Authors: B. Yaman, G. Acikbas, N. Calis Acikbas

Abstract:

Epoxy-based materials have advantages in tribological applications due to their unique properties such as light weight, self-lubrication capacity, and wear resistance. On the other hand, their usage is often limited by their low load-bearing capacity and low thermal conductivity. In this study, the aim is to improve the tribological and also the mechanical properties of epoxy by reinforcing it with ceramic-based porcelain waste. It is well known that the reuse or recycling of waste materials leads to reductions in production costs, ease of manufacturing, energy savings, etc. From this perspective, epoxy and epoxy matrix composites containing 60 wt% porcelain waste with different particle sizes, in the ranges below 90 µm and 150-250 µm, were fabricated, and the effect of filler particle size on the mechanical and tribological properties was investigated. Microstructural characterization was carried out by scanning electron microscopy (SEM), and phase analysis was determined by X-ray diffraction (XRD). The Archimedes principle was used to measure the density and porosity of the samples. Hardness values were measured using Shore-D hardness, and bending tests were performed. Microstructural investigations indicated that the porcelain particles were homogeneously distributed and no agglomerations were encountered in the epoxy resin. Mechanical test results showed that the hardness and bending strength increased with increasing particle size, related to the low porosity content and good embedding in the matrix. The tribological behavior of these composites was evaluated in terms of friction, wear rates, and wear mechanisms by ball-on-disk contact under dry rotational sliding at room temperature against a WC ball with a diameter of 3 mm. Wear tests were carried out at room temperature (23-25 °C) with a humidity of 40 ± 5% under dry-sliding conditions. The contact radius was set to 5 mm at a linear speed of 30 cm/s for the geometry used in this study. In all the experiments, a constant test load of 3 N was applied at a frequency of 8 Hz over a wear distance of 400 m. The friction coefficient of the samples was recorded online from the variation in the tangential force. The steady-state coefficients of friction ranged between 0.29 and 0.32. The dimensions of the wear tracks (depth and width) were measured as two-dimensional profiles by a stylus profilometer. The wear volumes were calculated by integrating these 2D cross-sectional areas around the circular wear track. Specific wear rates were computed by dividing the wear volume by the applied load and sliding distance, as sketched below. According to the experimental results, the use of porcelain waste in the fabrication of epoxy resin composites can be suggested for potential materials, allowing improved mechanical and tribological properties and also providing a reduction in production cost.
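
A small sketch of the wear-rate arithmetic just described, under the assumption that the wear volume is the profilometer cross-section swept around the circular track; the numerical values are illustrative, not measured data.

```python
# Sketch of the specific wear rate computation from ball-on-disk test data.
import numpy as np

cross_section_mm2 = 0.004        # profilometer cross-sectional area (mm^2)
track_radius_mm = 5.0            # contact radius used in the tests
load_n = 3.0                     # applied load (N)
distance_m = 400.0               # total sliding distance (m)

# Wear volume: cross-section swept around the circular wear track.
wear_volume_mm3 = cross_section_mm2 * 2 * np.pi * track_radius_mm

# Specific wear rate in the usual units of mm^3/(N*m).
k = wear_volume_mm3 / (load_n * distance_m)
print(f"wear volume = {wear_volume_mm3:.4f} mm^3, k = {k:.2e} mm^3/(N*m)")
```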

Keywords: epoxy composites, mechanical properties, porcelain waste, tribological properties

Procedia PDF Downloads 187
367 Chemical Study and Cytotoxic Activity of Extracts from Erythroxylum Genus against HeLa Cells

Authors: Richele P. Severino, Maria M. F. Alchaar, Lorena R. F. De Sousa, Patrik S. Vital, Ana G. Silva, Rosy I. M. A. Ribeiro

Abstract:

Recognized as a global biodiversity hotspot, the Cerrado (Brazil) presents an extreme abundance of endemic species and is considered one of the biologically richest tropical savanna regions in the world. The genus Erythroxylum is found in the Cerrado and is chemically characterized by the presence of tropane alkaloids, among them cocaine, a natural alkaloid produced by Erythroxylum coca Lam., which was once used as a local anesthetic in small surgeries. However, cocaine gained notoriety due to its psychoactive activity in the Central Nervous System (CNS), becoming one of today's major public health problems. Some species of Erythroxylum are referred to in the literature as having pharmacological potential, providing alkaloids, terpenoids, and flavonoids. E. vacciniifolium Mart., commonly known as 'catuaba', is used as a central nervous system stimulant and has aphrodisiac properties, and E. pelleterianum A. St.-Hil. is used in the treatment of stomach pains. E. myrsinites Mart. and E. suberosum A. St.-Hil., in turn, are used in the tannery industry. Species of Erythroxylum are also used in folk medicine for various conditions, including diabetes, and for antiviral, fungicidal, and cytotoxic purposes, among others. Although the Cerrado is recognized as the savanna richest in biodiversity in the world, it remains little explored from the chemical point of view. In our ongoing study of the chemistry of the genus Erythroxylum, we have investigated four specimens collected in the central Cerrado of Brazil: E. campestre (EC), E. deciduum (ED), E. suberosum (ES), and E. tortuosum (ET). The cytotoxic activity of the extracts was evaluated using HeLa cells in in vitro assays. The chemical investigation was performed by preparing the extracts using n-hexane (H), dichloromethane (D), ethyl acetate (E), and methanol (M). The cells were treated with increasing concentrations of extracts (50, 75, and 100 μg/mL) diluted in DMSO (1%) and DMEM (0.5% FBS and 1% P/S). The IC₅₀ values were determined spectrophotometrically at 570 nm, after incubation of the HeLa cell line for 48 hours, using the MTT assay (SIGMA M5655), and calculated by nonlinear regression analysis using GraphPad Prism software. All the assays were done in triplicate and repeated at least twice. The cytotoxicity assays showed some promising results, with IC₅₀ values less than 100 μg/mL (ETD = 38.5 μg/mL; ETM = 92.3 μg/mL; ESM = 67.8 μg/mL; ECD = 24.0 μg/mL; ECM = 32.9 μg/mL; EDA = 44.2 μg/mL). The chemical profile of the ethyl acetate (E) and methanolic (M) extracts of E. tortuosum leaves was studied by LC-MS, and the structures of the compounds were determined by analysis of ¹H, HSQC, and HMBC spectra and confirmed by comparison with literature data. The investigation led to six substances: α-amyrin, β-amyrin, campesterol, stigmastan-3,5-diene, β-sitosterol, and 7,4'-di-O-methylquercetin-3-O-β-rutinoside, the flavonoid being the major compound of the extracts. By alkaline extraction of the methanolic extract, it was possible to identify three alkaloids: tropacocaine, cocaine, and 6-methoxy-8-methyl-8-azabicyclo[3.2.1]octan-3-ol. The results obtained are important for the chemical knowledge of Cerrado biodiversity and contribute to the chemistry of the genus Erythroxylum.
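
As an illustration of the IC₅₀ estimation step, a four-parameter logistic curve can be fitted to the viability data by nonlinear regression, mirroring what GraphPad Prism does; this sketch is not that software, and the dose-response values below are synthetic.

```python
# Sketch of IC50 estimation by fitting a four-parameter logistic (4PL) curve.
import numpy as np
from scipy.optimize import curve_fit

def four_pl(x, bottom, top, ic50, hill):
    """Four-parameter logistic dose-response curve."""
    return bottom + (top - bottom) / (1 + (x / ic50) ** hill)

conc = np.array([12.5, 25.0, 50.0, 75.0, 100.0])        # µg/mL (illustrative)
viability = np.array([95.0, 80.0, 45.0, 30.0, 18.0])    # % of control

params, _ = curve_fit(four_pl, conc, viability,
                      p0=[10.0, 100.0, 40.0, 1.0], maxfev=10000)
print(f"estimated IC50 = {params[2]:.1f} µg/mL")
```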

Keywords: cytotoxicity, Erythroxylum, chemical profile, secondary metabolites

Procedia PDF Downloads 129
366 High-Fidelity Materials Screening with a Multi-Fidelity Graph Neural Network and Semi-Supervised Learning

Authors: Akeel A. Shah, Tong Zhang

Abstract:

Computational approaches to learning the properties of materials are commonplace, motivated by the need to screen or design materials for a given application, e.g., semiconductors and energy storage. Experimental approaches can be both time consuming and costly. Unfortunately, computational approaches such as ab-initio electronic structure calculations and classical or ab-initio molecular dynamics can themselves be too slow for the rapid evaluation of materials, which often involves thousands to hundreds of thousands of candidates. Machine learning assisted approaches have been developed to overcome the time limitations of purely physics-based approaches. These approaches, on the other hand, require large volumes of data for training (hundreds of thousands of examples on many standard data sets such as QM7b). This means that they are limited by how quickly such a large data set of physics-based simulations can be established. At high fidelity, such as configuration interaction, composite methods such as G4, and coupled cluster theory, gathering such a large data set can become infeasible, which can compromise the accuracy of the predictions; many applications require high accuracy, for example band structures and energy levels in semiconductor materials and the energetics of charge transfer in energy storage materials. In order to circumvent this problem, multi-fidelity approaches can be adopted, for example the Δ-ML method, which learns a high-fidelity output from a low-fidelity result such as Hartree-Fock or density functional theory (DFT). The general strategy is to learn a map between the low- and high-fidelity outputs, so that the high-fidelity output is obtained as a simple sum of the physics-based low-fidelity result and the learned correction. Although this requires a low-fidelity calculation, it typically requires far fewer high-fidelity results to learn the correction map; furthermore, the low-fidelity result, such as Hartree-Fock or semi-empirical ZINDO, is typically quick to obtain. For high-fidelity outputs the result can be a speed-up of an order of magnitude or more. In this work, a new multi-fidelity approach is developed, based on a graph convolutional network (GCN) combined with semi-supervised learning. The GCN allows the material or molecule to be represented as a graph, which is known to improve accuracy, as in, for example, SchNet and MEGNet. The graph incorporates information regarding the numbers, types, and properties of atoms; the types of bonds; and bond angles. The key to the accuracy of multi-fidelity methods, however, is the incorporation of the low-fidelity output to learn its high-fidelity equivalent, in this case by learning their difference. Semi-supervised learning is employed to allow for different numbers of low- and high-fidelity training points, by using an additional GCN-based low-fidelity map to predict high-fidelity outputs. It is shown on 4 different data sets that a significant (at least one order of magnitude) increase in accuracy is obtained, using one to two orders of magnitude fewer low- and high-fidelity training points. One of the data sets is developed in this work, pertaining to 1000 simulations of quinone molecules (up to 24 atoms) at 5 different levels of fidelity, furnishing the energy, dipole moment, and HOMO/LUMO.
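
As a minimal illustration of the Δ-ML strategy described above (not the paper's GCN-based model), the sketch below learns the low-to-high-fidelity correction on synthetic data; a random forest stands in for the graph network, and all variables are invented.

```python
# Delta-ML sketch: learn the correction (high - low fidelity), then predict
# high fidelity as low fidelity + learned correction. Data are synthetic.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(2)
X = rng.normal(size=(500, 8))              # stand-in molecular descriptors

true = X[:, 0] ** 2 + np.sin(X[:, 1])      # "exact" property
low = true + 0.5 * X[:, 2] + 0.3           # cheap, systematically biased estimate
high = true + 0.01 * rng.normal(size=500)  # expensive, accurate estimate

# Only a small subset has high-fidelity labels, as in practice.
idx = rng.choice(500, size=60, replace=False)
model = RandomForestRegressor(n_estimators=200, random_state=0)
model.fit(X[idx], (high - low)[idx])       # learn the correction map

pred_high = low + model.predict(X)         # delta-ML prediction everywhere
rmse = np.sqrt(np.mean((pred_high - high) ** 2))
print(f"delta-ML RMSE: {rmse:.3f} vs raw low-fidelity RMSE: "
      f"{np.sqrt(np.mean((low - high) ** 2)):.3f}")
```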

Keywords: materials screening, computational materials, machine learning, multi-fidelity, graph convolutional network, semi-supervised learning

Procedia PDF Downloads 12
365 Occipital Squama Convexity and Neurocranial Covariation in Extant Homo sapiens

Authors: Miranda E. Karban

Abstract:

A distinctive pattern of occipital squama convexity, known as the occipital bun or chignon, has traditionally been considered a derived Neandertal trait. However, some early modern and extant Homo sapiens share similar occipital bone morphology, showing pronounced internal and external occipital squama curvature and paralambdoidal flattening. It has been posited that these morphological patterns are homologous in the two groups, but this claim remains disputed. Many developmental hypotheses have been proposed, including assertions that the chignon represents a developmental response to a long and narrow cranial vault, a narrow or flexed basicranium, or a prognathic face. These claims, however, remain to be metrically quantified in a large subadult sample, and little is known about the feature’s developmental, functional, or evolutionary significance. This study assesses patterns of chignon development and covariation in a comparative sample of extant human growth study cephalograms. Cephalograms from a total of 549 European-derived North American subjects (286 male, 263 female) were scored on a 5-stage ranking system of chignon prominence. Occipital squama shape was found to exist along a continuum, with 34 subjects (6.19%) possessing defined chignons, and 54 subjects (9.84%) possessing very little occipital squama convexity. From this larger sample, those subjects represented by a complete radiographic series were selected for metric analysis. Measurements were collected from lateral and posteroanterior (PA) cephalograms of 26 subjects (16 male, 10 female), each represented at 3 longitudinal age groups. Age group 1 (range: 3.0-6.0 years) includes subjects during a period of rapid brain growth. Age group 2 (range: 8.0-9.5 years) includes subjects during a stage in which brain growth has largely ceased, but cranial and facial development continues. Age group 3 (range: 15.9-20.4 years) includes subjects at their adult stage. A total of 16 landmarks and 153 sliding semi-landmarks were digitized at each age point, and geometric morphometric analyses, including relative warps analysis and two-block partial least squares analysis, were conducted to study covariation patterns between midsagittal occipital bone shape and other aspects of craniofacial morphology. A convex occipital squama was found to covary significantly with a low, elongated neurocranial vault, and this pattern was found to exist from the youngest age group. Other tested patterns of covariation, including cranial and basicranial breadth, basicranial angle, midcoronal cranial vault shape, and facial prognathism, were not found to be significant at any age group. These results suggest that the chignon, at least in this sample, should not be considered an independent feature, but rather the result of developmental interactions relating to neurocranial elongation. While more work must be done to quantify chignon morphology in fossil subadults, this study finds no evidence to disprove the developmental homology of the feature in modern humans and Neandertals.
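
For readers unfamiliar with two-block partial least squares, the sketch below shows the core computation on random stand-in data: the SVD of the cross-covariance matrix between two centred blocks yields the paired axes of maximal covariation. The block contents and dimensions are illustrative assumptions, not the study's landmark data.

```python
# Sketch of two-block PLS for shape covariation via SVD of the
# cross-covariance matrix between two landmark blocks.
import numpy as np

rng = np.random.default_rng(3)
n = 26                                   # subjects
block1 = rng.normal(size=(n, 10))        # e.g., occipital semi-landmarks
block2 = rng.normal(size=(n, 12))        # e.g., vault landmarks

# Centre each block, then decompose the cross-covariance matrix.
b1 = block1 - block1.mean(axis=0)
b2 = block2 - block2.mean(axis=0)
cross_cov = b1.T @ b2 / (n - 1)
u, s, vt = np.linalg.svd(cross_cov, full_matrices=False)

# Scores on the first pair of PLS axes; their correlation measures
# the strength of covariation between the two shape blocks.
scores1, scores2 = b1 @ u[:, 0], b2 @ vt[0]
r = np.corrcoef(scores1, scores2)[0, 1]
print(f"first singular value: {s[0]:.3f}, axis-1 score correlation: {r:.3f}")
```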

Keywords: chignon, craniofacial covariation, human cranial development, longitudinal growth study, occipital bun

Procedia PDF Downloads 176
364 3D Label-Free Bioimaging of Native Tissue with Selective Plane Illumination Optical Microscopy

Authors: Jing Zhang, Yvonne Reinwald, Nick Poulson, Alicia El Haj, Chung See, Mike Somekh, Melissa Mather

Abstract:

Biomedical imaging of native tissue using light offers the potential to obtain excellent structural and functional information in a non-invasive manner with good temporal resolution. Image contrast can be derived from intrinsic absorption, fluorescence, or scatter, or through the use of extrinsic contrast. A major challenge in applying optical microscopy to in vivo tissue imaging is the effect of light attenuation, which limits light penetration depth and achievable imaging resolution. Recently, Selective Plane Illumination Microscopy (SPIM) has been used to map the 3D distribution of fluorophores dispersed in biological structures. In this approach, a focused sheet of light is used to illuminate the sample from the side to excite fluorophores within the sample of interest. Images are formed based on detection of fluorescence emission orthogonal to the illumination axis. By scanning the sample along the detection axis and acquiring a stack of images, 3D volumes can be obtained. The combination of rapid image acquisition, the low photon dose delivered to samples, and the optical sectioning it provides makes SPIM an attractive approach for imaging biological samples in 3D. To date, all implementations of SPIM rely on the use of fluorescence reporters, be they endogenous or exogenous. This approach has the disadvantage that, in the case of exogenous probes, the specimens are altered from their native state, rendering them unsuitable for in vivo studies, and, in general, fluorescence emission is weak and transient. Here we present, for the first time to our knowledge, a label-free implementation of SPIM that has downstream applications in the clinical setting. The experimental setup used in this work incorporates both label-free and fluorescent illumination arms, in addition to a high-specification camera that can be partitioned for simultaneous imaging of both fluorescent emission and light scattered from intrinsic sources of optical contrast in the sample being studied. This work first involved calibration of the imaging system and validation of the label-free method with well-characterised fluorescent microbeads embedded in agarose gel. 3D constructs of mammalian cells cultured in agarose gel with varying cell concentrations were then imaged. A time-course study to track cell proliferation in the 3D construct was also carried out, and finally a native tissue sample was imaged. For each sample, multiple images were obtained by scanning the sample along the axis of detection, and 3D maps were reconstructed. The results obtained validated label-free SPIM as a viable approach for imaging cells in a 3D gel construct and native tissue. This technique has potential for use in a near-patient environment, where it can provide results quickly and be implemented in an easy-to-use manner to provide more information, with improved spatial resolution and depth penetration, than current approaches.
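
A minimal sketch of the stack-to-volume step described above follows; it is an assumption about the data handling, not the authors' acquisition software, and the slice count and voxel spacing are illustrative.

```python
# Sketch: assemble scanned planes into a 3D volume for reconstruction.
import numpy as np

n_slices, height, width = 50, 256, 256
step_um, pixel_um = 2.0, 0.65            # assumed scan step and pixel size

# In practice each slice would be read from the camera; here they are random.
slices = [np.random.rand(height, width) for _ in range(n_slices)]
volume = np.stack(slices, axis=0)        # shape: (z, y, x)

# The anisotropic voxel size must be tracked for correct 3D rendering.
voxel_size_um = (step_um, pixel_um, pixel_um)
print(volume.shape, voxel_size_um)
```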

Keywords: bioimaging, optics, selective plane illumination microscopy, tissue imaging

Procedia PDF Downloads 236
362 Use of Proton Pump Inhibitor Medications during the First Year of Life and Late Complications

Authors: Kamelia Hamza

Abstract:

Background: Proton pump inhibitors (PPIs) are among the most prescribed drug classes for pediatric gastroesophageal reflux disease (GERD). Many patients are treated with these drugs for atypical manifestations attributed to gastroesophageal reflux (GER), even in the absence of a proven causal relationship. There is an impression of increasing use of PPI treatment for reflux in Clalit Health Services (CHS), the largest health organization in Israel. In recent years, the medicine has been given without restriction; prescribing is not limited to pediatric gastroenterologists only, but extends to pediatricians and family doctors. The objective of this study is to evaluate the hypothesis that exposure to PPIs during the first year of life is associated with an increased risk of developing late adverse diseases: pneumonia, asthma, acute gastroenteritis (AGE), inflammatory bowel disease (IBD), celiac disease, allergic disorders, obesity, attention deficit hyperactivity disorder (ADHD), and autism spectrum disorder (ASD). Methods: The study is a retrospective case-control cohort study based on the computerized database of Clalit Health Services. It includes 9844 children born between 2002 and 2018 who were reported to complain of at least one of the symptoms reflux/spitting up, irritability, feeding difficulties, or colic. The study population comprised a study group (n=4922) of children exposed to PPIs during the first year of life and a control group (n=4922) of children not exposed to PPIs, matched to each case of the study group on age, race, socioeconomic status, and year of birth. The prevalence of late complications/diseases in the study group was compared with the prevalence of late complications/diseases diagnosed between 2002 and 2020 in the control group. Odds ratios and 95% confidence intervals were calculated using logistic regression models. Results: We found that, compared to the control group, children exposed to PPIs in the first year of life had an increased risk of developing several late complications/disorders: pneumonia, asthma, various allergies (urticaria, allergic rhinitis, or allergic conjunctivitis), inhalant allergies, and food allergies. In addition, they showed an increased risk of being diagnosed with ADHD or ASD; however, children exposed to PPIs in the first year of life had a 17% lower risk of obesity (OR 0.825, 95% CI 0.697-0.976). Conclusions: We found significant associations between the use of PPIs during the first year of life and the subsequent development of late complications/diseases such as respiratory diseases, allergic diseases, ADHD, and ASD. More studies are needed to prove causality and determine the mechanism behind the effect of PPIs on the development of late complications.
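
For illustration, the arithmetic behind an odds ratio such as OR 0.825 (95% CI 0.697-0.976) can be sketched from a 2x2 exposure-outcome table with a Woolf log-OR interval; the counts below are invented, not the study's data.

```python
# Sketch of an odds ratio with a Woolf (log-OR) 95% confidence interval.
import numpy as np

# rows: exposed (PPI) / unexposed; columns: outcome yes / no
a, b = 310, 4612     # exposed with / without the outcome (hypothetical)
c, d = 370, 4552     # unexposed with / without the outcome (hypothetical)

or_ = (a * d) / (b * c)
se_log_or = np.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
lo = np.exp(np.log(or_) - 1.96 * se_log_or)
hi = np.exp(np.log(or_) + 1.96 * se_log_or)
print(f"OR = {or_:.3f}, 95% CI {lo:.3f}-{hi:.3f}")
```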

Keywords: acid suppressing medications, proton pump inhibitors, histamine 2 blocker, late complications, gastroesophageal reflux, gastroesophageal reflux disease, acute gastroenteritis, community acquired pneumonia, asthma, allergic diseases, obesity, inflammatory bowel diseases, ulcerative colitis, crohn disease, attention deficit hyperactivity disorders, autism spectrum disorders

Procedia PDF Downloads 86
362 Radioprotective Efficacy of Costus afer against the Radiation-Induced Hematology and Histopathology Damage in Mice

Authors: Idowu R. Akomolafe, Naven Chetty

Abstract:

Background: The widespread medical application of ionizing radiation has raised public concern about radiation exposure and, thus, the associated cancer risk. The production of reactive oxygen species and free radicals as a result of radiation exposure can cause severe damage to the deoxyribonucleic acid (DNA) of cells, thus leading to biological effects. Radiotherapy is an excellent modality for the treatment of cancerous cells but comes with a few challenges. A significant challenge is the exposure of healthy cells surrounding the tumour to radiation. The last few decades have witnessed much attention shifting to plants, herbs, and natural products as alternatives to synthetic compounds for radioprotection. Thus, this study investigated the radioprotective efficacy of Costus afer against whole-body radiation-induced haematological and histopathological disorders in mice. Materials and Method: Fifty-four mice were randomly divided into nine groups. Animals were pretreated with the extract of Costus afer by oral gavage for six days before irradiation. Controls: 6 mice received feed and water only; 6 mice received feed, water, and 3 Gy; 6 mice received feed, water, and 6 Gy. Experimental: 6 mice received 250 mg/kg extract; 6 mice received 500 mg/kg extract; 6 mice received 250 mg/kg extract and 3 Gy; 6 mice received 500 mg/kg extract and 3 Gy; 6 mice received 250 mg/kg extract and 6 Gy; 6 mice received 500 mg/kg extract and 6 Gy, in addition to feed and water. The irradiation was done at the Radiotherapy and Oncology Department of Grey's Hospital using a linear accelerator (LINAC). Thirty-six mice were sacrificed by cervical dislocation 48 hours after irradiation, and blood was collected for haematology tests. Also, the liver and kidney of the sacrificed mice were surgically removed for histopathology tests. The remaining eighteen (18) mice were used for mortality and survival studies. Data were analysed by one-way ANOVA, followed by Tukey's multiple comparison test. Results: Prior administration of Costus afer extract decreased the symptoms of radiation sickness and caused a significant delay in mortality, as demonstrated in the experimental mice. The first mortality was recorded on day 5 post-irradiation, and this occurred in the group that received 6 Gy but no extract. There was significant protection in the experimental mice, as demonstrated in the blood counts, against hematopoietic and gastrointestinal damage when compared with the control. The protection was seen in the increase in blood counts of experimental animals and in the number of survivors. The protection offered by Costus afer may be due to its ability to scavenge free radicals and repair the gastrointestinal and bone marrow damage produced by radiation. Conclusions: The study has demonstrated that exposure of mice to radiation can cause modifications in the haematological and histopathological parameters of irradiated mice. However, the changes were relieved by the methanol extract of Costus afer, probably through its free radical scavenging and antioxidant properties.
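
A minimal sketch of the stated analysis pipeline (one-way ANOVA followed by Tukey's multiple comparison test) is given below, using simulated blood-count values and illustrative group labels rather than the study's data.

```python
# Sketch: one-way ANOVA across treatment groups, then Tukey's HSD.
import numpy as np
import pandas as pd
from scipy import stats
from statsmodels.stats.multicomp import pairwise_tukeyhsd

rng = np.random.default_rng(4)
groups = {
    "control": rng.normal(8.0, 0.8, 6),          # feed and water only
    "6Gy": rng.normal(4.5, 0.8, 6),              # irradiated, no extract
    "extract500_6Gy": rng.normal(6.5, 0.8, 6),   # pretreated then irradiated
}

f, p = stats.f_oneway(*groups.values())
print(f"one-way ANOVA: F={f:.2f}, p={p:.4f}")

df = pd.DataFrame([(g, v) for g, vals in groups.items() for v in vals],
                  columns=["group", "count"])
print(pairwise_tukeyhsd(df["count"], df["group"]))
```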

Keywords: costus afer, hematological, mortality, radioprotection, radiotherapy

Procedia PDF Downloads 129
361 The Influence of Thermal Radiation and Chemical Reaction on MHD Micropolar Fluid in the Presence of Heat Generation/Absorption

Authors: Binyam Teferi

Abstract:

A numerical and theoretical analysis of the mixed convection flow of a magnetohydrodynamic (MHD) micropolar fluid over a stretching capillary in the presence of thermal radiation, chemical reaction, viscous dissipation, and heat generation/absorption has been carried out. The non-linear partial differential equations of momentum, angular velocity, energy, and concentration are converted into ordinary differential equations using similarity transformations, which can then be solved numerically. The dimensionless governing equations are solved using fourth-fifth order Runge-Kutta integration along with the shooting method. The effect of physical parameters, viz., the micropolar parameter, unsteadiness parameter, thermal buoyancy parameter, concentration buoyancy parameter, Hartmann number, spin gradient viscosity parameter, microinertial density parameter, thermal radiation parameter, Prandtl number, Eckert number, heat generation or absorption parameter, Schmidt number, and chemical reaction parameter, on the flow variables, viz., the velocity of the micropolar fluid, microrotation, temperature, and concentration, has been analyzed and discussed graphically. MATLAB code is used for the numerical and theoretical analysis. Furthermore, computational values of the local skin friction coefficient, local couple stress coefficient, local Nusselt number, and local Sherwood number for different values of the parameters have been investigated. From the simulation study, the following important results are obtained: an increment in the micropolar parameter, Hartmann number, unsteadiness parameter, and thermal and concentration buoyancy parameters results in a decrement in the velocity of the micropolar fluid; the microrotation of the micropolar fluid decreases with an increment in the micropolar parameter, unsteadiness parameter, microinertial density parameter, and spin gradient viscosity parameter; the temperature profile of the micropolar fluid decreases with an increment in the thermal radiation parameter, Prandtl number, micropolar parameter, unsteadiness parameter, heat absorption, and viscous dissipation parameter; and the concentration of the micropolar fluid decreases as the unsteadiness parameter, Schmidt number, and chemical reaction parameter increase. The coefficient of local skin friction is enhanced with an increase in the values of both the unsteadiness parameter and the micropolar parameter. Increasing values of the unsteadiness parameter and micropolar parameter result in an increment in the local couple stress. An increment in the values of the unsteadiness parameter and thermal radiation parameter results in an increment in the rate of heat transfer. As the values of the Schmidt number and unsteadiness parameter increase, the Sherwood number decreases.
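
As a sketch of the shooting method named above, the snippet below solves a classical simplified boundary-layer problem (the Blasius equation f''' + 0.5 f f'' = 0 with f(0)=f'(0)=0 and f'(∞)=1), not the paper's full coupled micropolar system, by integrating with a guessed initial curvature and root-finding on the far-field condition.

```python
# Shooting method sketch for a similarity boundary-value problem.
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import brentq

def rhs(eta, y):
    # y = [f, f', f'']; Blasius similarity equation as a first-order system.
    return [y[1], y[2], -0.5 * y[0] * y[2]]

def residual(guess):
    # Integrate with a guessed f''(0) and return the far-field mismatch.
    sol = solve_ivp(rhs, [0, 10], [0.0, 0.0, guess], rtol=1e-8)
    return sol.y[1, -1] - 1.0            # enforce f'(inf) = 1

# Bracket and solve for the unknown initial curvature f''(0).
shoot = brentq(residual, 0.1, 1.0)
print(f"f''(0) = {shoot:.5f}")           # classical value is about 0.33206
```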

Keywords: thermal radiation, chemical reaction, viscous dissipation, heat absorption/ generation, similarity transformation

Procedia PDF Downloads 118