Search results for: artificial potential function
14426 Computerized Analysis of Phonological Structure of 10,400 Brazilian Sign Language Signs
Authors: Wanessa G. Oliveira, Fernando C. Capovilla
Abstract:
Capovilla and Raphael's Libras Dictionary documents a corpus of 4,200 Brazilian Sign Language (Libras) signs. Duduchi and Capovilla's software SignTracking permits users to retrieve signs even when they do not know the corresponding gloss and to discover the meaning of all 4,200 signs simply by clicking on graphic menus of the sign characteristics (phonemes). Duduchi and Capovilla have discovered that the ease with which any given sign can be retrieved is an inverse function of the average popularity of its component phonemes. Thus, signs composed of rare (distinct) phonemes are easier to retrieve than are those composed of common phonemes. SignTracking offers a means of computing the average popularity of the phonemes that make up each one of the 4,200 signs. It provides a precise measure of the degree of ease with which signs can be retrieved, and sign meanings can be discovered. Duduchi and Capovilla's logarithmic model proved valid: the degree to which any given sign can be retrieved is an inverse function of the arithmetic mean of the logarithm of the popularity of each component phoneme. Capovilla, Raphael and Mauricio's New Libras Dictionary documents a corpus of 10,400 Libras signs. The present analysis revealed the 'DNA' structure of Libras by mapping the incidence of 501 sign phonemes resulting from the layered distribution of five parameters: 163 handshape phonemes (CherEmes-ManusIculi); 34 finger shape phonemes (DactilEmes-DigitumIculi); 55 hand placement phonemes (ArtrotoToposEmes-ArticulatiLocusIculi); 173 movement dimension phonemes (CinesEmes-MotusIculi) pertaining to direction, frequency, and type; and 76 facial expression phonemes (MascarEmes-PersonalIculi).
Keywords: Brazilian sign language, lexical retrieval, Libras sign, sign phonology
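The logarithmic retrievability model lends itself to a direct computation. Below is a minimal illustrative sketch in Python, assuming hypothetical phoneme popularity counts (in practice these come from SignTracking's phoneme incidence maps); it scores a sign by the inverse of the mean log popularity of its component phonemes.

```python
from math import log

def retrieval_ease(phoneme_popularities):
    """Score a sign's ease of retrieval as the inverse of the mean log popularity
    of its component phonemes (the logarithmic model described above):
    rarer phonemes -> lower mean log popularity -> higher score."""
    mean_log_popularity = sum(log(p) for p in phoneme_popularities) / len(phoneme_popularities)
    return 1.0 / mean_log_popularity

# Hypothetical popularity counts (how many of the 10,400 signs use each phoneme)
rare_sign = [12, 30, 25]        # composed of rare phonemes
common_sign = [950, 1200, 800]  # composed of common phonemes

print(retrieval_ease(rare_sign))    # larger value: easier to retrieve
print(retrieval_ease(common_sign))  # smaller value: harder to retrieve
```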
Procedia PDF Downloads 345
14425 Investigating the Influence of Activation Functions on Image Classification Accuracy via Deep Convolutional Neural Network
Authors: Gulfam Haider, Sana Danish
Abstract:
Convolutional Neural Networks (CNNs) have emerged as powerful tools for image classification, and the choice of optimizers profoundly affects their performance. The study of optimizers and their adaptations remains a topic of significant importance in machine learning research. While numerous studies have explored and advocated for various optimizers, the efficacy of these optimization techniques is still subject to scrutiny. This work aims to address the challenges surrounding the effectiveness of optimizers by conducting a comprehensive analysis and evaluation. The primary focus of this investigation lies in examining the performance of different optimizers when employed in conjunction with the popular activation function, Rectified Linear Unit (ReLU). By incorporating ReLU, known for its favorable properties in prior research, the aim is to bolster the effectiveness of the optimizers under scrutiny. Specifically, we evaluate the adjustment of these optimizers with both the original Softmax activation function and the modified ReLU activation function, carefully assessing their impact on overall performance. To achieve this, a series of experiments are conducted using a well-established benchmark dataset for image classification tasks, namely the Canadian Institute for Advanced Research dataset (CIFAR-10). The selected optimizers for investigation encompass a range of prominent algorithms, including Adam, Root Mean Squared Propagation (RMSprop), Adaptive Learning Rate Method (Adadelta), Adaptive Gradient Algorithm (Adagrad), and Stochastic Gradient Descent (SGD). The performance analysis encompasses a comprehensive evaluation of the classification accuracy, convergence speed, and robustness of the CNN models trained with each optimizer. Through rigorous experimentation and meticulous assessment, we discern the strengths and weaknesses of the different optimization techniques, providing valuable insights into their suitability for image classification tasks. By conducting this in-depth study, we contribute to the existing body of knowledge surrounding optimizers in CNNs, shedding light on their performance characteristics for image classification. The findings gleaned from this research serve to guide researchers and practitioners in making informed decisions when selecting optimizers and activation functions, thus advancing the state-of-the-art in the field of image classification with convolutional neural networks.Keywords: deep neural network, optimizers, RMsprop, ReLU, stochastic gradient descent
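As a rough illustration of the experimental setup described above (not the authors' code), the sketch below trains a small ReLU-activated CNN on CIFAR-10 under each of the five optimizers and reports test accuracy; the layer sizes, epoch count, and batch size are assumptions.

```python
import tensorflow as tf

(x_train, y_train), (x_test, y_test) = tf.keras.datasets.cifar10.load_data()
x_train, x_test = x_train / 255.0, x_test / 255.0

def build_cnn():
    # Small ReLU-activated CNN with a softmax output layer for the 10 classes
    return tf.keras.Sequential([
        tf.keras.layers.Conv2D(32, 3, activation="relu", input_shape=(32, 32, 3)),
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.Conv2D(64, 3, activation="relu"),
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.Flatten(),
        tf.keras.layers.Dense(64, activation="relu"),
        tf.keras.layers.Dense(10, activation="softmax"),
    ])

optimizers = {
    "adam": tf.keras.optimizers.Adam(),
    "rmsprop": tf.keras.optimizers.RMSprop(),
    "adadelta": tf.keras.optimizers.Adadelta(),
    "adagrad": tf.keras.optimizers.Adagrad(),
    "sgd": tf.keras.optimizers.SGD(),
}

for name, opt in optimizers.items():
    model = build_cnn()
    model.compile(optimizer=opt, loss="sparse_categorical_crossentropy", metrics=["accuracy"])
    model.fit(x_train, y_train, epochs=5, batch_size=128, verbose=0)
    _, acc = model.evaluate(x_test, y_test, verbose=0)
    print(f"{name}: test accuracy = {acc:.3f}")
```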
Procedia PDF Downloads 125
14424 Thermodynamic and Magnetic Properties of Heavy Fermion UTE₂ Superconductor
Authors: Habtamu Anagaw Muluneh, Gebregziabher Kahsay, Tamiru Negussie
Abstract:
Theoretical study of the density of state, condensation energy, specific heat, and magnetization in a spin-triplet superconductor are the main goals of this work. Utilizing the retarded double-time temperature-dependent Green's function formalism and building a model Hamiltonian for the system at hand, we were able to derive the expressions for the parameters mentioned above. The phase diagrams are plotted using MATLAB scripts. From the phase diagrams, the density of electrons increases as the excitation energy increases, and the maximum excitation energy is equal to the superconducting gap, but it decreases when the value exceeds the gap and finally becomes the same as the density of the normal state. On the other hand, the condensation energy decreases with the increase in temperature and attains its minimum value at the superconducting transition temperature but increases with the increase in superconducting transition temperature (TC) and finally becomes zero, implying the superconducting energy is equal to the normal state energy. The specific heat increases with the increase in temperature, attaining its maximum value at the TC and then undergoing a jump, showing the presence of a second-order phase transition from the superconducting state to the normal state. Finally, the magnetization of both the itinerant and localized electrons decreases with the increase in temperature and finally becomes zero at TC = 1.6 K and magnetic phase transition temperature T = 2 K, respectively, which results in a magnetic phase transition from a ferromagnetic to a paramagnetic state. Our finding is in good agreement with the previous findings.Keywords: spin triplet superconductivity, Green’s function, condensation energy, density of state, specific heat, magnetization
Procedia PDF Downloads 21
14423 An Artificial Intelligence Framework to Forecast Air Quality
Authors: Richard Ren
Abstract:
Air pollution is a serious danger to international well-being and economies - it will kill an estimated 7 million people every year, costing world economies $2.6 trillion by 2060 due to sick days, healthcare costs, and reduced productivity. In the United States alone, 60,000 premature deaths are caused by poor air quality. For this reason, there is a crucial need to develop effective methods to forecast air quality, which can mitigate air pollution’s detrimental public health effects and associated costs by helping people plan ahead and avoid exposure. The goal of this study is to propose an artificial intelligence framework for predicting future air quality based on timing variables (i.e. season, weekday/weekend), future weather forecasts, as well as past pollutant and air quality measurements. The proposed framework utilizes multiple machine learning algorithms (logistic regression, random forest, neural network) with different specifications and averages the results of the three top-performing models to eliminate inaccuracies, weaknesses, and biases from any one individual model. Over time, the proposed framework uses new data to self-adjust model parameters and increase prediction accuracy. To demonstrate its applicability, a prototype of this framework was created to forecast air quality in Los Angeles, California using datasets from the RP4 weather data repository and EPA pollutant measurement data. The results showed good agreement between the framework’s predictions and real-life observations, with an overall 92% model accuracy. The combined model is able to predict more accurately than any of the individual models, and it is able to reliably forecast season-based variations in air quality levels. Top air quality predictor variables were identified through the measurement of mean decrease in accuracy. This study proposed and demonstrated the efficacy of a comprehensive air quality prediction framework leveraging multiple machine learning algorithms to overcome individual algorithm shortcomings. Future enhancements should focus on expanding and testing a greater variety of modeling techniques within the proposed framework, testing the framework in different locations, and developing a platform to automatically publish future predictions in the form of a web or mobile application. Accurate predictions from this artificial intelligence framework can in turn be used to save and improve lives by allowing individuals to protect their health and allowing governments to implement effective pollution control measures.
Keywords: air quality prediction, air pollution, artificial intelligence, machine learning algorithms
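The ensemble step described here, averaging the outputs of logistic regression, a random forest, and a neural network, can be sketched as follows. This is an illustrative soft-voting example on simulated data, not the study's RP4/EPA pipeline; the features and labels are placeholders.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
# Placeholder predictors: e.g. season, weekday flag, forecast temperature/wind, past PM2.5, past AQI
X = rng.normal(size=(1000, 6))
y = (X[:, 4] + 0.5 * X[:, 2] + rng.normal(scale=0.5, size=1000) > 0).astype(int)  # 1 = unhealthy air

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

ensemble = VotingClassifier(
    estimators=[
        ("logreg", make_pipeline(StandardScaler(), LogisticRegression())),
        ("forest", RandomForestClassifier(n_estimators=200, random_state=0)),
        ("mlp", make_pipeline(StandardScaler(),
                              MLPClassifier(hidden_layer_sizes=(32,), max_iter=1000, random_state=0))),
    ],
    voting="soft",  # average predicted probabilities across the three models
)
ensemble.fit(X_train, y_train)
print("ensemble accuracy:", ensemble.score(X_test, y_test))
```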
Procedia PDF Downloads 127
14422 The Importance of Anthropometric Indices for Assessing the Physical Development and Physical Fitness of Young Athletes
Authors: Akbarova Gulnozakhon
Abstract:
Relevance. Physical exercises can prolong the function of the growth zones of long tubular bones, delay the fusion of the epiphyses and diaphyses of bones and, thus, increase the growth of the body. At the same time, intensive strength exercises can accelerate the process of ossification of bone growth zones and slow down their growth in length. The influence of physical exercises on the process of biological maturation is noted. Gymnastics, which requires intense speed and strength loads, delays puberty. On the other hand, it is indicated that the relatively slow puberty of gymnasts is associated with the selection of girls with a special somatotype in this sport. It was found that the later onset of menstruation in female athletes does not have a negative effect on the maturation process and fertility (the ability to procreate). Observations are made about the normalizing influence of sports on the puberty of girls. The purpose of the study. Our goal is to study the effect of physical activity of varying intensity on the formation of secondary sexual characteristics and the hormonal status of girls in adolescence. Each biological process peculiar to a given organism is not in a stationary state but fluctuates with a certain frequency. According to their duration, there are, for example, circadian cycles and infradian cycles, a typical example of which is the menstrual cycle. Materials and methods, results. Violations of menstrual function in athletes were detected by applying a questionnaire survey containing several sections and sub-sections in which passport data, anthropometric indicators (taking into account anthropometric indices) and information about the menstrual cycle are recorded. Of 135 female athletes aged 13 to 16 years engaged in various sports, menstrual function disorders (primary or secondary amenorrhea, irregular MC) were noted in 86.7% of gymnasts and in 57.1% of swimmers. The general condition also changes during the menstrual cycle. In a large percentage of cases, athletes indicate an increase in irritability in the premenstrual (45%) and menstrual (36%) phases. During these phases, girls note an increase in fatigue of 46.5% and 58%, respectively. In girls, secondary sexual characteristics continue to form during puberty, and the clearest indicator of the onset of puberty is the age of the onset of the first menstruation - menarche. Conclusions. 1. Physical exercise has a positive effect on all major systems of the body and thus promotes health. 2. Along with a beneficial effect on human health, physical exercise, if the requirements of sports are not observed, can be harmful.
Keywords: girls' health, anthropometry, physical development, reproductive health
Procedia PDF Downloads 102
14421 The Potential of Potato and Maize Based Snacks as Fire Accelerants
Authors: E. Duffin, L. Brownlow
Abstract:
Arson is a crime that can pose exceptional problems for forensic specialists. Its destructive nature makes evidence much harder to find, especially when it is used to cover up another crime. There is a consistent potential threat of arsonists seeking new and easier ways to set fires. Existing research in this field primarily focuses on the use of accelerants such as petrol, with less attention to other more accessible and harder-to-detect materials. This includes the growing speculation that potato and maize-based snacks could be used as fire accelerants. It was hypothesized that all 'crisp-type' snacks in foil packaging had the potential to act as accelerants and would burn readily in the various experiments. To test this hypothesis, a series of small lab-based experiments was undertaken, igniting samples of the snacks. Factors such as ingredients, shape, packaging and calorific value were all taken into consideration. The time (in seconds) spent on fire by the individual snacks was recorded. It was found that all of the snacks tested burnt for statistically similar amounts of time (p-value of 0.0157). This was followed by a large mock real-life scenario using burning packets of crisps and car seats to investigate whether these snacks could be viable tools for an arsonist. Here, three full packets of crisps were selected based on variations in burning during the lab experiments. They were each lit with a lighter to initiate burning, then placed onto a car seat to be timed and observed with video cameras. In all three cases, the fire was significant and sustained by the 200-second mark. On the basis of this data, it was concluded that potato and maize-based snacks were viable accelerants of fire. They remain an effective method of starting fires whilst being cheap, accessible, non-suspicious and non-detectable. The results supported the hypothesis that all 'crisp-type' snacks in foil packaging (that had been tested) had the potential to act as accelerants and would burn readily in the various experiments. This study serves to raise awareness and provide a basis for research and prevention of arson involving maize and potato-based snacks as fire accelerants.
Keywords: arson, crisps, fires, food
Procedia PDF Downloads 121
14420 A New Approach to Predicting Physical Biometrics from Behavioural Biometrics
Authors: Raid R. O. Al-Nima, S. S. Dlay, W. L. Woo
Abstract:
A relationship between face and signature biometrics is established in this paper. A new approach is developed to predict faces from signatures by using artificial intelligence. A multilayer perceptron (MLP) neural network is used to generate face details from features extracted from signatures; here, the face is the physical biometric and the signature is the behavioural biometric. The new method establishes a relationship between the two biometrics and regenerates a visible face image from the signature features. Furthermore, the performance efficiency of our new technique is demonstrated in terms of minimum error rates compared to published work.
Keywords: behavioural biometric, face biometric, neural network, physical biometric, signature biometric
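A minimal sketch of this idea (not the authors' implementation) is an MLP regressor that maps signature-derived feature vectors to face-derived feature vectors; the feature dimensions and the simulated data below are placeholders.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(1)
signature_features = rng.normal(size=(500, 20))                 # placeholder stroke/curvature descriptors
face_features = signature_features @ rng.normal(size=(20, 40))  # placeholder stand-in for face details

X_train, X_test, y_train, y_test = train_test_split(signature_features, face_features, random_state=1)

# Multilayer perceptron trained to regress face features from signature features
mlp = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000, random_state=1)
mlp.fit(X_train, y_train)
predicted_faces = mlp.predict(X_test)
print("mean squared error:", np.mean((predicted_faces - y_test) ** 2))
```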
Procedia PDF Downloads 474
14419 A Generalised Propensity Score Analysis to Investigate the Influence of Agricultural Research Systems on Greenhouse Gas Emissions
Authors: Spada Alessia, Fiore Mariantonietta, Lamonaca Emilia, Contò Francesco
Abstract:
The bioeconomy can give us the chance to face new global challenges and can advance the transition from a waste economy to an economy based on renewable resources and sustainable consumption. Air pollution, mainly caused by anthropogenic factors, is a grave issue among these green challenges. The agricultural sector is a great contributor to global greenhouse gas (GHG) emissions due to a lack of efficient management of the resources involved and of research policies. In particular, the livestock sector contributes to emissions of GHGs, deforestation, and nutrient imbalances. More effective agricultural research systems and technologies are crucial in order to improve farm productivity but also to reduce GHG emissions. Using data from FAOSTAT statistics concerning the EU countries, the aim of this research is to evaluate the impact of ASTI R&D (Agricultural Science and Technology Indicators) on GHG emissions for EU countries in 2015 by generalized propensity score procedures, estimating a dose-response function while also considering a set of covariates. Expected results show the existence of an influence of ASTI R&D on GHG emissions across EU countries. The implications are crucial: reducing GHG emissions by means of R&D-based policies and, correlatively, reaching eco-friendly management of the required resources by means of available green practices could play a crucial role in achieving fair intra-generational outcomes.
Keywords: agricultural research systems, dose-response function, generalized propensity score, GHG emissions
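For concreteness, a compact sketch of a generalised propensity score (GPS) dose-response estimation in the Hirano-Imbens style is shown below. The simulated variables (R&D intensity as the continuous treatment, GHG emissions as the outcome, three covariates) are placeholders for the FAOSTAT/ASTI data, and the parametric choices are assumptions rather than the study's specification.

```python
import numpy as np
from scipy.stats import norm
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(2)
n = 200
covariates = rng.normal(size=(n, 3))                        # e.g. GDP, farm area, livestock density
rd = covariates @ [0.5, -0.3, 0.2] + rng.normal(size=n)     # continuous "treatment": R&D intensity
ghg = 5 - 0.8 * rd + covariates @ [0.4, 0.1, -0.2] + rng.normal(size=n)

# 1) model the treatment given covariates and evaluate its conditional density (the GPS)
t_model = LinearRegression().fit(covariates, rd)
sigma = np.std(rd - t_model.predict(covariates))
gps = norm.pdf(rd, loc=t_model.predict(covariates), scale=sigma)

# 2) model the outcome as a flexible function of the treatment level and the GPS
design = np.column_stack([rd, rd**2, gps, gps**2, rd * gps])
y_model = LinearRegression().fit(design, ghg)

# 3) dose-response: average the predicted outcome over the GPS at each treatment level
for t in np.linspace(rd.min(), rd.max(), 5):
    gps_at_t = norm.pdf(t, loc=t_model.predict(covariates), scale=sigma)
    d = np.column_stack([np.full(n, t), np.full(n, t**2), gps_at_t, gps_at_t**2, t * gps_at_t])
    print(f"R&D level {t:+.2f} -> expected GHG {y_model.predict(d).mean():.2f}")
```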
Procedia PDF Downloads 278
14418 Application of MALDI-MS to Differentiate SARS-CoV-2 and Non-SARS-CoV-2 Symptomatic Infections in the Early and Late Phases of the Pandemic
Authors: Dmitriy Babenko, Sergey Yegorov, Ilya Korshukov, Aidana Sultanbekova, Valentina Barkhanskaya, Tatiana Bashirova, Yerzhan Zhunusov, Yevgeniya Li, Viktoriya Parakhina, Svetlana Kolesnichenko, Yeldar Baiken, Aruzhan Pralieva, Zhibek Zhumadilova, Matthew S. Miller, Gonzalo H. Hortelano, Anar Turmuhambetova, Antonella E. Chesca, Irina Kadyrova
Abstract:
Introduction: The rapidly evolving COVID-19 pandemic, along with the re-emergence of pathogens causing acute respiratory infections (ARI), has necessitated the development of novel diagnostic tools to differentiate various causes of ARI. MALDI-MS, due to its wide usage and affordability, has been proposed as a potential instrument for diagnosing SARS-CoV-2 versus non-SARS-CoV-2 ARI. The aim of this study was to investigate the potential of MALDI-MS in conjunction with a machine learning model to accurately distinguish between symptomatic infections caused by SARS-CoV-2 and non-SARS-CoV-2 during both the early and later phases of the pandemic. Furthermore, this study aimed to analyze mass spectrometry (MS) data obtained from nasal swabs of healthy individuals. Methods: We gathered mass spectra from 252 samples, comprising 108 SARS-CoV-2-positive samples obtained in 2020 (Covid 2020), 7 SARS-CoV- 2-positive samples obtained in 2023 (Covid 2023), 71 samples from symptomatic individuals without SARS-CoV-2 (Control non-Covid ARVI), and 66 samples from healthy individuals (Control healthy). All the samples were subjected to RT-PCR testing. For data analysis, we employed the caret R package to train and test seven machine-learning algorithms: C5.0, KNN, NB, RF, SVM-L, SVM-R, and XGBoost. We conducted a training process using a five-fold (outer) nested repeated (five times) ten-fold (inner) cross-validation with a randomized stratified splitting approach. Results: In this study, we utilized the Covid 2020 dataset as a case group and the non-Covid ARVI dataset as a control group to train and test various machine learning (ML) models. Among these models, XGBoost and SVM-R demonstrated the highest performance, with accuracy values of 0.97 [0.93, 0.97] and 0.95 [0.95; 0.97], specificity values of 0.86 [0.71; 0.93] and 0.86 [0.79; 0.87], and sensitivity values of 0.984 [0.984; 1.000] and 1.000 [0.968; 1.000], respectively. When examining the Covid 2023 dataset, the Naive Bayes model achieved the highest classification accuracy of 43%, while XGBoost and SVM-R achieved accuracies of 14%. For the healthy control dataset, the accuracy of the models ranged from 0.27 [0.24; 0.32] for k-nearest neighbors to 0.44 [0.41; 0.45] for the Support Vector Machine with a radial basis function kernel. Conclusion: Therefore, ML models trained on MALDI MS of nasopharyngeal swabs obtained from patients with Covid during the initial phase of the pandemic, as well as symptomatic non-Covid individuals, showed excellent classification performance, which aligns with the results of previous studies. However, when applied to swabs from healthy individuals and a limited sample of patients with Covid in the late phase of the pandemic, ML models exhibited lower classification accuracy.Keywords: SARS-CoV-2, MALDI-TOF MS, ML models, nasopharyngeal swabs, classification
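The study trained its models with the caret R package; the sketch below is an analogous scikit-learn (Python) illustration of the nested scheme described (stratified 5-fold outer cross-validation wrapped around a 10-fold, 5-times-repeated inner tuning loop), using a gradient-boosting classifier and placeholder spectra in place of the MALDI-MS data.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import (GridSearchCV, RepeatedStratifiedKFold,
                                     StratifiedKFold, cross_val_score)

rng = np.random.default_rng(3)
spectra = rng.normal(size=(179, 300))      # placeholder feature matrix (108 Covid 2020 + 71 non-Covid ARVI)
labels = rng.integers(0, 2, size=179)      # placeholder labels: 1 = Covid 2020, 0 = non-Covid ARVI

inner_cv = RepeatedStratifiedKFold(n_splits=10, n_repeats=5, random_state=3)   # inner tuning loop
outer_cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=3)           # outer evaluation loop

tuned_model = GridSearchCV(
    GradientBoostingClassifier(random_state=3),
    param_grid={"n_estimators": [100, 300], "max_depth": [2, 3]},
    cv=inner_cv,
    scoring="accuracy",
)
outer_scores = cross_val_score(tuned_model, spectra, labels, cv=outer_cv, scoring="accuracy")
print("nested-CV accuracy: %.2f +/- %.2f" % (outer_scores.mean(), outer_scores.std()))
```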
Procedia PDF Downloads 108
14417 Production of Biotechnological Chondroitin from Recombinant E. coli K4 Strains on Renewable Substrates
Authors: Donatella Cimini, Sergio D’ambrosio, Saba Sadiq, Chiara Schiraldi
Abstract:
Chondroitin sulfate (CS), as well as modified CS, and unsulfated chondroitin, are largely applied in research today. CS is a linear glycosaminoglycan normally present in cartilage-rich tissues and bones in the form of proteoglycans decorated with sulfate groups in different positions. CS is used as an effective non-pharmacological alternative for the treatment of osteoarthritis, and other potential applications in the biomedical field are being investigated. Some bacteria, such as E. coli K4, produce a polysaccharide that is a precursor of CS (unsulfated chondroitin). This work focused on the construction of integrative E. coli K4 recombinant strains overexpressing genes (kfoA, kfoF, pgm and galU in different combinations) involved in the biosynthesis of the nucleotide sugars necessary for polysaccharide synthesis. Strain growth and polymer production were evaluated using renewable waste materials as substrates in shake flasks and small-scale batch fermentation processes. Results demonstrated the potential to replace pure sugars with cheaper medium components to establish environmentally sustainable and cost-effective production routes for potential industrial development. In fact, although excellent fermentation results have been described so far by employing strains that naturally produce chondroitin-like polysaccharides on semi-defined media, there is still the need to reduce manufacturing costs by providing a cost-effective biotechnological alternative to currently used animal-based extraction procedures.Keywords: E. coli K4, chondroitin, microbial cell factories, glycosaminoglycans, renewable resources
Procedia PDF Downloads 81
14416 First-Principles Study of XNMg3 (X=P, As, Sb, Bi) Antiperovskite Compounds
Authors: Kadda Amara, Mohammed Elkeurti, Mostefa Zemouli, Yassine Benallou
Abstract:
In this work, we present a study of the structural, elastic and electronic properties of the cubic antiperovskites XNMg3 (X = P, As, Sb and Bi) using the full-potential augmented plane wave plus local orbitals (FP-LAPW+lo) method within the Generalized Gradient Approximation based on the PBEsol (Perdew 2008) functional. We determined the lattice parameters, the bulk modulus B and its pressure derivative B'. In addition, the elastic properties such as the elastic constants (C11, C12 and C44), the shear modulus G, the Young's modulus E, the Poisson's ratio ν and the B/G ratio are also given. For the band structure, density of states and charge density, the exchange and correlation effects were treated with the Tran-Blaha modified Becke-Johnson potential to overcome the underestimation of the energy gaps in both the LDA and GGA approximations. The obtained results are compared to available experimental data and to other theoretical calculations.
Keywords: XNMg3 compounds, GGA-PBEsol, TB-mBJ, elastic properties, electronic properties
Procedia PDF Downloads 409
14415 Mild Auditory Perception and Cognitive Impairment in mid-Trimester Pregnancy
Authors: Tahamina Begum, Wan Nor Azlen Wan Mohamad, Faruque Reza, Wan Rosilawati Wan Rosli
Abstract:
To assess auditory perception and cognitive function during pregnancy is necessary as the pregnant women need extra effort for attention mainly for their executive function to maintain their quality of life. This study aimed to investigate neural correlates of cognitive and behavioral processing during mid trimester pregnancy. Event-Related Potentials (ERPs) were studied by using 128-sensor net and PAS or COWA (controlled Oral Word Association), WCST (Wisconsin Card Sorting Test), RAVLTIM (Rey Auditory Verbal and Learning Test: immediate or interference recall, delayed recall (RAVLT DR) and total score (RAVLT TS) were tested for neuropsychology assessment. In total 18 subjects were recruited (n= 9 in each group; control and pregnant group). All participants of the pregnant group were within 16-27 (mid trimester) weeks gestation. Age and education matched control healthy subjects were recruited in the control group. Participants were given a standardized test of auditory cognitive function as auditory oddball paradigm during ERP study. In this paradigm, two different auditory stimuli (standard and target stimuli) were used where subjects counted silently only target stimuli with giving attention by ignoring standard stimuli. Mean differences between target and standard stimuli were compared across groups. N100 (auditory sensory ERP component) and P300 (auditory cognitive ERP component) were recorded at T3, T4, T5, T6, Cz and Pz electrode sites. An equal number of electrodes showed non-significantly shorter amplitude of N100 component (except significantly shorter at T3, P= 0.05) and non-significant longer latencies (except significantly longer latency at T5, P= 0.008) of N100 component in pregnant group comparing control. In case of P300 component, maximum electrode sites showed non-significantly higher amplitudes and equal number of sites showed non-significant shorter latencies in pregnant group comparing control. Neuropsychology results revealed the non-significant higher score of PAS, lower score of WCST, lower score of RAVLTIM and RAVLTDR in pregnant group comparing control. The results of N100 component and RAVLT scores concluded that auditory perception is mildly impaired and P300 component proved very mild cognitive dysfunction with good executive functions in second trimester of pregnancy.Keywords: auditory perception, pregnancy, stimuli, trimester
Procedia PDF Downloads 384
14414 Prevalence of Occupational Asthma Diagnosed by Specific Challenge Test in 5 Different Working Environments in Thailand
Authors: Sawang Saenghirunvattana, Chao Saenghirunvattana, Maria Christina Gonzales, Wilai Srimuk, Chitchamai Siangpro, Kritsana Sutthisri
Abstract:
Introduction: Thailand is one of the fastest-growing countries in Asia. It has shifted from an agricultural to an industrialized economy. Workplaces have moved from farms to factories, offices and streets, where employees are exposed to certain chemicals and pollutants causing occupational diseases, particularly asthma. Work-related diseases are a major concern, and many studies have been published to demonstrate certain professions and the exposures that elevate the risk of asthma. Workers who exhibit coughing, wheezing and difficulty breathing are brought to a health care setting where a pulmonary function test (PFT) is performed and, based on the results, they are then diagnosed with asthma. These patients, known to have occupational asthma, eventually recover when removed from exposure to that environment. Our study focused on performing the PFT, or specific challenge test, to diagnose workers with occupational asthma while they executed the test within their workplace, maintaining the environment and their daily exposure to certain levels of chemicals and pollutants. This has provided us with an understanding and a reliable diagnosis of occupational asthma. Objective: To identify the prevalence of Thai workers who develop asthma caused by exposure to pollutants and chemicals in their working environment by conducting interviews and performing the PFT, or specific challenge test, in their workplaces. Materials and Methods: This study was performed from January-March 2015 in Bangkok, Thailand. The percentage of abnormal symptoms among 940 workers in 5 different areas (plastic, fertilizer and animal food factories, offices and streets) was collected through a questionnaire. The demographic information, occupational history, and state of health were determined using a questionnaire and checklists. The PFT was executed in their workplaces, and the results were measured and evaluated. Results: The pulmonary function test was performed by 940 participants. The specific challenge test was done in plastic, fertilizer and animal food factories, in an office environment and on the streets of Thailand. Of the 100 participants working in the plastic industry, 65% complained of having respiratory symptoms; none of them had an abnormal PFT. Of the 200 participants who worked with fertilizers and were exposed to sulfur dioxide, 20% complained of having symptoms and 8% had an abnormal PFT. Of the 300 subjects working with animal food, 45% complained of respiratory symptoms and 15% had abnormal PFT results. In the office environment, where there is indoor pollution, 7% of the 140 subjects had symptoms and 4% had an abnormal PFT. Of the 200 workers exposed to traffic pollution, 24% reported respiratory symptoms and 12% had an abnormal PFT. Conclusion: We were able to identify and diagnose participants with occupational asthma through their abnormal lung function tests done at their workplaces. The chemical agents and exposures were determined; therefore, for effective management, workers with occupational asthma were advised to avoid further exposure for a better chance of recovery. Further studies identifying the risk factors and causative agents of asthma in workplaces should be developed to encourage interventional strategies and programs that will prevent occupation-related diseases, particularly asthma.
Keywords: occupational asthma, pulmonary function test, specific challenge test, Thailand
Procedia PDF Downloads 304
14413 Survey of the Relationship between Functional Movement Screening Tests and Anthropometric Dimensions in Healthy People, 2018
Authors: Akram Sadat Jafari Roodbandi, Parisa Kahani, Fatollah Rahimi Bafrani, Ali Dehghan, Nava Seyedi, Vafa Feyzi, Zohreh Forozanfar
Abstract:
Introduction: Movement function is considered the ability to produce and maintain balance, stability, and movement throughout the movement chain. A score of 14 or above on the 7 sub-tests of the functional movement screening (FMS) test indicates agility and optimal movement performance. On the other hand, a person's body dimensions are an important factor in physical fitness and optimal movement performance. The aim of this study was to identify the anthropometric dimensions that are effective in increasing motor function. Methods: This study was a descriptive-analytical, cross-sectional study using simple random sampling. The FMS test, 25 anthropometric dimensions, and subcutaneous fat in five body regions were measured in 139 healthy students of Bam University of Medical Sciences. Data analysis was performed using SPSS software with univariate tests and linear regressions at a significance level of 0.05. Results: 139 students were enrolled in the study; 51.1% (71 subjects) were male and the rest were female. The mean and standard deviation of age, weight, height, and arm subcutaneous fat were 21.5 ± 1.45, 64.3 ± 12.6, 168.7 ± 9.8, and 15.3 ± 7, respectively. 17 participants (12.2%) had a score of less than 14, and the rest scored 14 or above. Using regression analysis, it was found that exercise and arm subcutaneous fat are predictive variables associated with obtaining a high score in the FMS test. Conclusion: Exercise and weight loss are effective factors for increasing the movement performance of individuals, and this effect is independent of the size of other physical dimensions.
Keywords: functional movement, screening test, anthropometry, ergonomics
Procedia PDF Downloads 148
14412 Applying (1, T) Ordering Policy in a Multi-Vendor-Single-Buyer Inventory System with Lost Sales and Poisson Demand
Authors: Adel Nikfarjam, Hamed Tayebi, Sadoullah Ebrahimnejad
Abstract:
This paper considers a two-echelon inventory system with a number of warehouses and a single retailer. The retailer replenishes its required items from the warehouses and assembles them into a single final product. We assume that each warehouse supplies only one kind of raw material to the retailer. The demand process of the final product is assumed to be Poisson, and unsatisfied demand for the final product will be lost. The retailer applies a one-for-one-period ordering policy, also known as the (1, T) ordering policy. In this policy, the retailer orders a fixed quantity of each item from each warehouse at fixed time intervals, where the fixed quantity equals the item's utilization in the final product. Since this policy eliminates all demand uncertainty at the upstream echelon, the standard lot sizing model can be applied at all warehouses. In this paper, we calculate the total cost function of the inventory system. Then, based on this function, we present a procedure to obtain the optimal time interval between two consecutive order placements from the retailer to the warehouses, and the optimal order quantities at the warehouses (assuming that there are positive ordering costs at the warehouses). Finally, we present some numerical examples and conduct a numerical sensitivity analysis of the cost parameters.
Keywords: two-echelon supply chain, multi-vendor-single-buyer inventory system, lost sales, Poisson demand, one-for-one-period policy, lot sizing model
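To make the cost-minimisation step concrete, here is a stylised numerical sketch (not the paper's derived cost function): it evaluates an assumed total-cost rate for the (1, T) policy, combining ordering costs, an average cycle-stock holding term, and an expected lost-sales term under Poisson demand, over a grid of T values and picks the minimiser. All parameter values and the inventory approximation are illustrative assumptions.

```python
import numpy as np
from scipy.stats import poisson

demand_rate = 4.0          # Poisson demand rate of the final product (units per period)
order_cost = [50.0, 40.0]  # fixed ordering cost per order, one per warehouse
hold_cost = [0.5, 0.7]     # holding cost per unit per period at the retailer, per item
lost_sale_cost = 30.0      # penalty per lost unit of final-product demand

def total_cost_rate(T):
    q = demand_rate * T                            # per-cycle order quantity of each item
    ordering = sum(k / T for k in order_cost)      # each warehouse is ordered from once per T
    holding = sum(h * q / 2 for h in hold_cost)    # average cycle-stock approximation
    # demand during a cycle exceeding the per-cycle stock is assumed lost
    d = np.arange(0, int(q) + 200)
    expected_lost = np.sum(np.maximum(d - q, 0) * poisson.pmf(d, demand_rate * T))
    return ordering + holding + lost_sale_cost * expected_lost / T

T_grid = np.linspace(0.2, 5.0, 97)
best_T = min(T_grid, key=total_cost_rate)
print(f"approximate optimal T = {best_T:.2f}, cost rate = {total_cost_rate(best_T):.2f}")
```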
Procedia PDF Downloads 312
14411 Intelligent Prediction System for Diagnosis of Heart Attack
Authors: Oluwaponmile David Alao
Abstract:
Due to the increase in the death rate resulting from heart attacks, there is a need to develop a system that can be useful in diagnosing the disease at the medical centre. Such a system will help prevent the misdiagnosis that may occur on the part of medical practitioners or physicians. In this research work, a heart disease dataset obtained from the UCI repository has been used to develop an intelligent diagnostic prediction system. The system is modeled as a feedforward neural network and trained with backpropagation. A recognition rate of 86% is obtained from testing the network.
Keywords: heart disease, artificial neural network, diagnosis, prediction system
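A minimal sketch of this kind of setup (an MLP classifier trained with backpropagation on heart-disease records) is shown below. The arrays are random placeholders standing in for the UCI heart disease data (13 clinical attributes, binary label), and the network size is an assumption.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(4)
X = rng.normal(size=(303, 13))                                  # placeholder for the 13 UCI attributes
y = (X[:, 0] + X[:, 3] + rng.normal(size=303) > 0).astype(int)  # placeholder label: 1 = disease present

X_train, X_test, y_train, y_test = train_test_split(X, y, stratify=y, random_state=4)

model = make_pipeline(
    StandardScaler(),
    MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=4),  # feedforward net, backprop training
)
model.fit(X_train, y_train)
print("recognition rate:", model.score(X_test, y_test))
```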
Procedia PDF Downloads 450
14410 Electrode Engineering for On-Chip Liquid Driving by Using Electrokinetic Effect
Authors: Reza Hadjiaghaie Vafaie, Aysan Madanpasandi, Behrooz Zare Desari, Seyedmohammad Mousavi
Abstract:
Highly laminar flow in microchannels is one of the main challenges in on-chip components like micro total analysis systems and labs-on-a-chip. Electro-osmotic force is highly effective at the chip scale. This research proposes a microfluidic-based micropump for low-ionic-strength solutions. Narrow microchannels are designed to generate an efficient electroosmotic flow near the walls. Microelectrodes are embedded in the lateral sides and actuated by a low electric potential to generate a pumping effect inside the channel. Based on the simulation study, the fluid velocity increases with increasing electric potential amplitude. We achieve a net flow velocity of 100 µm/s by applying ±2 V to the electrode structures. Our proposed low-voltage design is of interest for conventional lab-on-a-chip applications.
Keywords: integration, electrokinetic, on-chip, fluid pumping, microfluidic
Procedia PDF Downloads 295
14409 A Multi-Objective Decision Making Model for Biodiversity Conservation and Planning: Exploring the Concept of Interdependency
Authors: M. Mohan, J. P. Roise, G. P. Catts
Abstract:
Despite living in an era where conservation zones are de-facto the central element in any sustainable wildlife management strategy, we still find ourselves grappling with several pareto-optimal situations regarding resource allocation and area distribution for the same. In this paper, a multi-objective decision making (MODM) model is presented to answer the question of whether or not we can establish mutual relationships between these contradicting objectives. For our study, we considered a Red-cockaded woodpecker (Picoides borealis) habitat conservation scenario in the coastal plain of North Carolina, USA. Red-cockaded woodpecker (RCW) is a non-migratory territorial bird that excavates cavities in living pine trees for roosting and nesting. The RCW groups nest in an aggregation of cavity trees called ‘cluster’ and for our model we use the number of clusters to be established as a measure of evaluating the size of conservation zone required. The case study is formulated as a linear programming problem and the objective function optimises the Red-cockaded woodpecker clusters, carbon retention rate, biofuel, public safety and Net Present Value (NPV) of the forest. We studied the variation of individual objectives with respect to the amount of area available and plotted a two dimensional dynamic graph after establishing interrelations between the objectives. We further explore the concept of interdependency by integrating the MODM model with GIS, and derive a raster file representing carbon distribution from the existing forest dataset. Model results demonstrate the applicability of interdependency from both linear and spatial perspectives, and suggest that this approach holds immense potential for enhancing environmental investment decision making in future.Keywords: conservation, interdependency, multi-objective decision making, red-cockaded woodpecker
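As a toy illustration of how such a weighted, area-constrained trade-off can be posed as a linear program, the sketch below allocates a fixed forest area between RCW cluster habitat and timber management while requiring a minimum NPV. All coefficients are invented for illustration and are not the study's parameters or its full objective set.

```python
from scipy.optimize import linprog

total_area = 1000.0            # hectares available
min_npv = 800.0                # minimum acceptable NPV ($k), an illustrative constraint
# per-hectare contributions: [RCW clusters, carbon retained (t), NPV ($k)]
habitat = [0.02, 3.0, 0.5]     # managed as RCW cluster habitat
timber = [0.00, 1.0, 2.0]      # managed for timber
weights = [50.0, 1.0, 1.0]     # relative importance of the three objectives

# maximise the weighted sum of objectives -> minimise its negative
c = [-sum(w * a for w, a in zip(weights, habitat)),
     -sum(w * a for w, a in zip(weights, timber))]
A_ub = [[1.0, 1.0],                  # total area constraint
        [-habitat[2], -timber[2]]]   # NPV >= min_npv, written as -NPV <= -min_npv
b_ub = [total_area, -min_npv]

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None), (0, None)])
x_habitat, x_timber = res.x
print(f"habitat: {x_habitat:.0f} ha, timber: {x_timber:.0f} ha")
print(f"clusters: {habitat[0] * x_habitat:.1f}, NPV: {habitat[2] * x_habitat + timber[2] * x_timber:.0f}")
```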
Procedia PDF Downloads 337
14408 Comparative Evaluation of EBT3 Film Dosimetry Using Flatbed Scanner, Densitometer and Spectrophotometer Methods and Its Applications in Radiotherapy
Authors: K. Khaerunnisa, D. Ryangga, S. A. Pawiro
Abstract:
Over the past few decades, film dosimetry has become a tool used in various radiotherapy modalities, either for clinical quality assurance (QA) or for dose verification. The response of the film to irradiation is usually expressed in optical density (OD) or net optical density (netOD). Since the film's response to radiation is not linear, the use of film as a dosimeter must go through a calibration process. This study aimed to compare the calibration curve functions of various measurement methods using a flatbed scanner, a point densitometer and a spectrophotometer. For every response function, a radiochromic film calibration curve is generated for each method by performing accuracy, precision and sensitivity analyses. When using the film scanner, netOD is obtained by measuring the change in optical density (OD) of the film before and after irradiation, using ImageJ to extract the pixel value of the film on the red channel of the three (RGB) channels; when using the point densitometer, by calculating the change in OD before and after irradiation; and when using the spectrophotometer, by calculating the change in absorbance before and after irradiation. The results showed that the three calibration methods gave dose readings with a netOD precision below 3% at an uncertainty of 1σ (one sigma). While the sensitivity of all three methods follows the same trend in the film's response to radiation, the magnitude of the sensitivity differs. The accuracy of the three methods is below 3% for doses of 100 cGy and 200 cGy, but above 3% for doses below 100 cGy when using the point densitometer and the spectrophotometer. When the three methods are used for clinical implementation, the results show accuracy and precision below 2% for the scanner and the spectrophotometer, and above 3% when using the point densitometer.
Keywords: calibration methods, EBT3 film dosimetry, flatbed scanner, densitometer, spectrophotometer
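As a short illustration of the scanner-based reading described (red-channel pixel values before and after irradiation converted to a net optical density), here is a minimal Python sketch; the file names and the region of interest are placeholders.

```python
import numpy as np
from PIL import Image

def red_channel_mean(path, roi=(100, 100, 200, 200)):
    """Mean red-channel pixel value inside a rectangular ROI (left, top, right, bottom)."""
    red = np.asarray(Image.open(path).crop(roi).convert("RGB"))[:, :, 0].astype(float)
    return red.mean()

pv_before = red_channel_mean("film_before.tif")   # placeholder scan of the unirradiated film
pv_after = red_channel_mean("film_after.tif")     # placeholder scan of the irradiated film

# OD = -log10(PV / PV_max); the change before vs. after irradiation is the netOD
net_od = np.log10(pv_before / pv_after)
print(f"netOD = {net_od:.4f}")
```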
Procedia PDF Downloads 135
14407 Comparison of Cardiomyogenic Potential of Amniotic Fluid Mesenchymal Stromal Cells Derived from Normal and Isolated Congenital Heart Defective Fetuses
Authors: Manali Jain, Neeta Singh, Raunaq Fatima, Soniya Nityanand, Mandakini Pradhan, Chandra Prakash Chaturvedi
Abstract:
Isolated Congenital Heart Defect (ICHD) is the major cause of neonatal death worldwide among all forms of CHDs. A significant proportion of fetuses with ICHD die in the neonatal period if no treatment is provided. Recently, stem cell therapies have emerged as a potential approach to ameliorate ICHD in children. ICHD is characterized by cardiac structural abnormalities during embryogenesis due to alterations in the cardiomyogenic properties of a pool of cardiac progenitors/ stem cells associated with fetal heart development. The stem cells present in the amniotic fluid (AF) are of fetal origin and may reflect the physiological and pathological changes in the fetus during embryogenesis. Therefore, in the present study, the cardiomyogenic potential of AF-MSCs derived from fetuses with ICHD (ICHD AF-MSCs) has been evaluated and compared with that of AF-MSCs of structurally normal fetuses (normal AF-MSCs). Normal and ICHD AF-MSC were analyzed for the expression of cardiac progenitor markers viz., stage-specific embryonic antigen-1 (SSEA-1), vascular endothelial growth factor 2 (VEGFR-2) and platelet-derived growth factor receptor-alpha (PDGFR-α) by flow cytometry. The immunophenotypic characterization revealed that ICHD AF-MSCs have significantly lower expression of cardiac progenitor markers VEGFR-2 (0.14% ± 0.6 vs.48.80% ± 0.9; p <0.01), SSEA-1 (70.86% ± 2.4 vs. 88.36% ±2.7; p <0.01), and PDGFR-α (3.92% ± 1.8 vs. 47.59% ± 3.09; p <0.01) in comparison to normal AF-MSCs. Upon induction with 5’-azacytidine for 21 days, ICHD AF-MSCs showed a significantly down-regulated expression of cardiac transcription factors such as GATA-4 (0.4 ± 0.1 vs. 6.8 ± 1.2; p<0.01), ISL-1 (2.3± 0.6 vs. 14.3 ± 1.12; p<0.01), NK-x 2-5 (1.1 ± 0.3 vs. 14.1 ±2.8; p<0.01), TBX-5 (0.4 ± 0.07 vs. 4.4 ± 0.3; p<0.001), and TBX-18 (1.3 ± 0.2 vs. 4.19 ± 0.3; p<0.01) when compared with the normal AF-MSCs. Furthermore, immunocytochemical staining revealed that both types of AF-MSCs could differentiate into cardiovascular lineages and express cardiomyogenic, endothelial, and smooth muscle actin markers, viz., cardiac troponin (cTNT), CD31, and alpha-smooth muscle actin (α-SMA). However, normal AF-MSCs showed an enhanced expression of cTNT (p<0.001), CD31 (p<0.01), and α-SMA (p<0.05), compared to ICHD AF-MSCs. Overall, these results suggest that the ICHD-AF-MSCs have a defective cardiomyogenic differentiation potential and that the defects in these stem cells may have a role in the pathogenesis of ICHD.Keywords: amniotic fluid, cardiomyogenic potential, isolated congenital heart defect, mesenchymal stem cells
Procedia PDF Downloads 102
14406 Comparing Xbar Charts: Conventional versus Reweighted Robust Estimation Methods for Univariate Data Sets
Authors: Ece Cigdem Mutlu, Burak Alakent
Abstract:
Maintaining the quality of manufactured products at a desired level depends on the stability of process dispersion and location parameters and detection of perturbations in these parameters as promptly as possible. Shewhart control chart is the most widely used technique in statistical process monitoring to monitor the quality of products and control process mean and variability. In the application of Xbar control charts, sample standard deviation and sample mean are known to be the most efficient conventional estimators in determining process dispersion and location parameters, respectively, based on the assumption of independent and normally distributed datasets. On the other hand, there is no guarantee that the real-world data would be normally distributed. In the cases of estimated process parameters from Phase I data clouded with outliers, efficiency of traditional estimators is significantly reduced, and performance of Xbar charts are undesirably low, e.g. occasional outliers in the rational subgroups in Phase I data set may considerably affect the sample mean and standard deviation, resulting a serious delay in detection of inferior products in Phase II. For more efficient application of control charts, it is required to use robust estimators against contaminations, which may exist in Phase I. In the current study, we present a simple approach to construct robust Xbar control charts using average distance to the median, Qn-estimator of scale, M-estimator of scale with logistic psi-function in the estimation of process dispersion parameter, and Harrell-Davis qth quantile estimator, Hodge-Lehmann estimator and M-estimator of location with Huber psi-function and logistic psi-function in the estimation of process location parameter. Phase I efficiency of proposed estimators and Phase II performance of Xbar charts constructed from these estimators are compared with the conventional mean and standard deviation statistics both under normality and against diffuse-localized and symmetric-asymmetric contaminations using 50,000 Monte Carlo simulations on MATLAB. Consequently, it is found that robust estimators yield parameter estimates with higher efficiency against all types of contaminations, and Xbar charts constructed using robust estimators have higher power in detecting disturbances, compared to conventional methods. Additionally, utilizing individuals charts to screen outlier subgroups and employing different combination of dispersion and location estimators on subgroups and individual observations are found to improve the performance of Xbar charts.Keywords: average run length, M-estimators, quality control, robust estimators
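A simple sketch of the general idea, replacing the conventional grand mean and pooled standard deviation with robust location and scale estimates when building Phase I Xbar limits, is given below. It uses the median of subgroup means and a Qn-type scale (finite-sample correction factors omitted) on simulated Phase I data containing a few outliers; it is an illustration, not the paper's exact estimator combinations.

```python
import itertools
import numpy as np

def qn_scale(x):
    """Rousseeuw-Croux Qn: a scaled order statistic of the pairwise absolute differences."""
    x = np.asarray(x)
    n = len(x)
    diffs = sorted(abs(a - b) for a, b in itertools.combinations(x, 2))
    h = n // 2 + 1
    k = h * (h - 1) // 2
    return 2.2219 * diffs[k - 1]   # asymptotic consistency constant for the normal case

rng = np.random.default_rng(5)
phase1 = rng.normal(10, 1, size=(25, 5))   # 25 rational subgroups of size 5
phase1[3, 0] += 8                          # occasional outliers in the Phase I subgroups
phase1[17, 2] += 6

n = phase1.shape[1]
center = np.median(phase1.mean(axis=1))                 # robust process-location estimate
sigma_hat = np.median([qn_scale(g) for g in phase1])    # robust process-dispersion estimate
ucl = center + 3 * sigma_hat / np.sqrt(n)
lcl = center - 3 * sigma_hat / np.sqrt(n)
print(f"LCL = {lcl:.2f}, CL = {center:.2f}, UCL = {ucl:.2f}")
```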
Procedia PDF Downloads 190
14405 Bioinformatics Approach to Support Genetic Research in Autism in Mali
Authors: M. Kouyate, M. Sangare, S. Samake, S. Keita, H. G. Kim, D. H. Geschwind
Abstract:
Background & Objectives: Human genetic studies can be expensive, even unaffordable, in developing countries, partly due to sequencing costs. Our aim is to pilot the use of bioinformatics tools to guide scientifically valid, locally relevant, and economically sound autism genetic research in Mali. Methods: The following databases, NCBI, HGMD, and LSDB, were used to identify hotspot mutations. Phenotype, transmission pattern, theoretical protein expression in the brain, and the impact of the mutation on the 3D structure of the protein were used to prioritize the selected autism genes. We used the protein database, Modeller, and Clustal W. Results: We found Mef2c (Gly27Ala/Leu38Gln), Pten (Thr131Ile), Prodh (Leu289Met), Nme1 (Ser120Gly), and Dhcr7 (Pro227Thr/Glu224Lys). These mutations were associated with the endonucleases BseRI, NspI, PfrJS2IV, BspGI, BsaBI, and SpoDI, respectively. The Gly27Ala/Leu38Gln mutations impacted the 3D structure of the Mef2c protein. Mef2c protein sequences across species showed a high percentage of similarity, with a highly conserved MADS domain. Discussion: Mef2c, Pten, Prodh, Nme1, and Dhcr7 gene mutation frequencies in the Malian population will be very informative. PCR coupled with restriction enzyme digestion can be used to screen for the targeted gene mutations. Sanger sequencing will be used for confirmation only. This will considerably cut down the sequencing cost of gene-by-gene mutation screening. Knowledge of the 3D structure and the potential impact of the mutations on the Mef2c protein informed us about the protein family and the altered function (e.g., Leu38Gln). Conclusion & Future Work: Bioinformatics will positively impact autism research in Mali. Our approach can be applied to other neuropsychiatric disorders.
Keywords: bioinformatics, endonucleases, autism, Sanger sequencing, point mutations
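A toy sketch of the screening idea follows: check whether a candidate point mutation creates or destroys a restriction-enzyme recognition site, so that PCR followed by digestion can flag carriers without sequencing. The sequences and the recognition site below are hypothetical placeholders, not the actual MEF2C context or the true recognition sequences of BseRI, NspI, or the other enzymes named above.

```python
def has_site(sequence, recognition_site):
    """Return True if the enzyme's recognition site occurs in the amplicon."""
    return recognition_site in sequence.upper()

recognition_site = "GAGGAG"             # placeholder recognition sequence
wild_type = "ATCGGAGGTGTTACCA"          # placeholder amplicon around the variant position
mutant = "ATCGGAGGAGTTACCA"             # same amplicon carrying the hypothetical point mutation

print("wild type cut:", has_site(wild_type, recognition_site))   # False -> fragment not digested
print("mutant cut:   ", has_site(mutant, recognition_site))      # True  -> fragment digested
```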
Procedia PDF Downloads 83
14404 Contribution of Home Gardens to Rural Household Income in Raymond Mhlaba Local Municipality, Eastern Cape Province, South Africa
Abstract:
Home gardens have proved to be significant to rural inhabitants by providing a wide range of useful products such as fruits, vegetables and medicine. There is a need for quantitative information on their benefits and contributions to rural households. The main objective of this study is to investigate the contribution of home gardens to the income of rural households in Raymond Mhlaba Local Municipality (formerly Nkonkobe Local Municipality) of Eastern Cape Province, South Africa. The stratified random sampling method was applied in order to choose a sample of 160 households. The study was conducted among 80 households engaging in home gardens and 80 non-participating households in the study area. Data analysis employed descriptive statistics, with the use of frequency tables and a one-sample t-test to show actual contributions. The overall model shows that social grants make the highest contribution to total household income for both categories, while income generated from home gardens has the second largest share of total household income; this shows that the majority of rural households in the study area rely on social grants as their source of income. However, since most households are net food buyers, it is essential to have policies that are formulated with an understanding that household food security is not only a function of the food that farming households produce for their own consumption but more so a function of total household income. The results produced sufficient evidence that home gardens contribute significantly to the income of rural households.
Keywords: food security, home gardening, household, income
Procedia PDF Downloads 225
14403 Business Education and Passion: The Place of Amore, Consciousness, Discipline, and Commitment as Holonomic Constructs in Pedagogy, A Conceptual Exploration
Authors: Jennifer K. Bowerman, Rhonda L. Reich
Abstract:
The purpose of this paper is to explore the concepts ACDC (Amore, Consciousness, Discipline, and Commitment) which the authors first discovered as a philosophy and framework for recruitment and organizational development in a successful start-up tech company in Brazil. This paper represents an exploration of these concepts as a potential pedagogical foundation for undergraduate business education in the classroom. It explores whether their application has potential to build emotional and practical resilience in the face of constant organizational and societal change. Derived from Holonomy this paper explains the concepts and develops a narrative around how change influences the operation of organizations. Using examples from leading edge organizational theorists, it explains why a different educational approach grounded in ACDC concepts may not only have relevance for the working world, but also for undergraduates about to enter that world. The authors propose that in the global context of constant change, it makes sense to develop an approach to education, particularly business education, beyond cognitive knowledge, models and tools, in such a way that emotional and practical resilience and creative thinking may be developed. Using the classroom as an opportunity to explore these concepts, and aligning personal passion with the necessary discipline and commitment, may provide students with a greater sense of their own worth and potential as they venture into their ever-changing futures.Keywords: ACDC, holonomic thinking, organizational learning, organizational change, business pedagogy
Procedia PDF Downloads 239
14402 Ellagic Acid Enhanced Apoptotic Radiosensitivity via G1 Cell Cycle Arrest and γ-H2AX Foci Formation in HeLa Cells in vitro
Authors: V. R. Ahire, A. Kumar, B. N. Pandey, K. P. Mishra, G. R. Kulkarni
Abstract:
Radiation therapy is a vital and effective strategy used globally in the treatment of cervical cancer. However, radiation efficacy depends principally on the radiosensitivity of the tumor, and not all patients exhibit a significant response to irradiation. A radiosensitive tumor is easier to cure than a radioresistant tumor, which later advances to local recurrence and metastasis. Herbal polyphenols are gaining attention for producing radiosensitization through various signaling pathways. The current work studies the radiosensitizing effect of ellagic acid (EA) on HeLa cells. EA-mediated radiosensitization of HeLa cells was due to the induction of γ-H2AX foci formation, G1-phase cell cycle arrest, loss of reproductive potential, growth inhibition, and a drop in mitochondrial membrane potential, together with changes in protein expression that eventually induced apoptosis. Irradiation of HeLa cells in the presence of EA (10 μM) with 2 and 4 Gy of γ-radiation produced marked tumor cytotoxicity. EA also demonstrated a radioprotective effect on normal cells (NIH3T3) and aided recovery from radiation damage. Our results advocate EA as an effective adjuvant for improving cancer radiotherapy, as it displays striking tumor cytotoxicity alongside reduced radiation-induced damage to normal cells. Keywords: apoptotic radiosensitivity, ellagic acid, mitochondrial potential, cell-cycle arrest
Procedia PDF Downloads 354
14401 The Impact of Artificial Intelligence on Human Rights Development
Authors: Kerols Seif Said Botros
Abstract:
The relationship between development and human rights has been debated for a long time. Various principles, from the right to development to human rights-based approaches to development, have been applied to understand the dynamics between these two concepts. Despite the measures taken, the connection between development and human rights remains vague. Nevertheless, attention to the link between these two notions, and to the need to strengthen human rights, has increased in recent years. It is then examined whether the right to sustainable development is recognized in various human rights instruments, which lends support to the claim cited above. Domestic and international human rights treaties, as well as jurisprudence and the rules establishing human rights institutions, are then cited in support of this view. Keywords: sustainable development, human rights, the right to development, the human rights-based approach to development, environmental rights, economic development, social sustainability, human rights protection, human rights violations, workers’ rights, justice, security
Procedia PDF Downloads 54
14400 3D-Printed Collagen/Chitosan Scaffolds Loaded with Exosomes Derived from Neural Stem Cells Pretreated with Insulin Growth Factor-1 for Neural Regeneration after Traumatic Brain Injury
Authors: Xiao-Yin Liu, Liang-Xue Zhou
Abstract:
Traumatic brain injury (TBI), a form of nerve trauma caused by external force, affects people all over the world and is a global public health problem. Although there are various clinical treatments for brain injury, including surgery, drug therapy, and rehabilitation therapy, their therapeutic effect is very limited. To improve the therapeutic effect for TBI, scaffolds combined with exosomes are a promising but challenging approach to TBI repair. In this study, we examined whether a novel 3D-printed collagen/chitosan scaffold loaded with exosomes derived from neural stem cells (NSCs) pretreated with insulin growth factor-1 (IGF-1) (3D-CC-INExos) could be used to improve TBI repair and functional recovery after TBI. Our results showed that composite scaffolds of collagen, chitosan, and exosomes derived from NSCs pretreated with IGF-1 (INExos) continuously released the exosomes for two weeks. In a rat TBI model, 3D-CC-INExos scaffold transplantation significantly improved motor and cognitive function after TBI, as assessed by the Morris water maze test and modified neurological severity scores. In addition, immunofluorescence staining and transmission electron microscopy showed that the recovery of damaged nerve tissue in the injured area was significantly improved by 3D-CC-INExos implantation. In conclusion, our data suggest that 3D-CC-INExos might provide a potential strategy for the treatment of TBI and lay a solid foundation for clinical translation. Keywords: traumatic brain injury, exosomes, insulin growth factor-1, neural stem cells, collagen, chitosan, 3D printing, neural regeneration, angiogenesis, functional recovery
Procedia PDF Downloads 80
14399 EEG and DC-Potential Level Changes in the Elderly
Authors: Irina Deputat, Anatoly Gribanov, Yuliya Dzhos, Alexandra Nekhoroshkova, Tatyana Yemelianova, Irina Bolshevidtseva, Irina Deryabina, Yana Kereush, Larisa Startseva, Tatyana Bagretsova, Irina Ikonnikova
Abstract:
In the modern world, the number of elderly people is increasing, and preserving the functionality of the organism in old age has become very important. During aging, higher cortical functions such as sensation, perception, attention, memory, and ideation gradually decline. This is expressed in a reduced rate of information processing, a loss of working memory capacity, and a decreased ability to learn and store new information. Promising directions in the study of the neurophysiological parameters of aging are brain imaging techniques: computerized electroencephalography, neuroenergy mapping of the brain, and methods for studying neurodynamic brain processes. The research aim was to study features of brain aging in elderly people using the electroencephalogram (EEG) and the DC-potential level. We examined 130 people aged 55–74 years who had no psychiatric disorders or chronic conditions in a stage of decompensation. EEG was recorded with a 128-channel GES-300 system (USA) while participants sat at rest with their eyes closed for 3 minutes. For a quantitative assessment of the EEG, we used spectral analysis over the delta (0.5–3.5 Hz), theta (3.5–7.0 Hz), alpha-1 (7.0–11.0 Hz), alpha-2 (11.0–13.0 Hz), beta-1 (13.0–16.5 Hz) and beta-2 (16.5–20.0 Hz) ranges, estimating the spectral power in each frequency band. The 12-channel hardware-software diagnostic complex ‘Neuroenergometr-KM’ was used for registration, processing and analysis of the brain's constant (DC) potential level, recorded in monopolar leads. The EEG of elderly people showed higher spectral power in the delta (p < 0.01) and theta (p < 0.05) ranges with aging, especially in frontal areas. The comparative analysis showed that people aged 60–64 years had higher spectral power in the alpha-2 range in the left frontal and central areas (p < 0.05), as well as higher values in the beta-1 range in frontal and parieto-occipital areas (p < 0.05). Study of the distribution of the brain's constant potential level revealed an increase in total energy consumption across the main areas of the brain. The lowest values of the constant potential level were registered in the frontal leads, which may indicate a decrease in energy metabolism in this area and difficulties with executive functions. The comparative analysis of the potential difference across the main leads points to an uneven lateralization of brain functions in elderly people, and the potential difference between the right and left hemispheres indicates a prevalence of left-hemisphere activity. Thus, higher functional activity of the cerebral cortex is characteristic of people in early old age (60–64 years), which points to greater reserve capacity of the central nervous system. By the age of 70, age-related changes appear in cerebral energy exchange and in the level of brain electrogenesis, reflecting a deterioration of the homeostatic mechanisms of self-regulation and of the processing of current perceptual data. Keywords: brain, DC-potential level, EEG, elderly people
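For readers unfamiliar with the band-wise spectral analysis described above, the following Python sketch estimates spectral power per EEG band with Welch's method; the synthetic signal, sampling rate and analysis choices are assumptions for illustration, with only the band edges taken from the abstract.

```python
# A minimal sketch of EEG band-power estimation for a single channel,
# assuming a 250 Hz sampling rate; the signal is synthetic noise for illustration.
import numpy as np
from scipy.signal import welch

fs = 250                                   # sampling rate in Hz (assumed)
rng = np.random.default_rng(0)
eeg = rng.standard_normal(fs * 180)        # 3 minutes of fake eyes-closed EEG

bands = {                                  # frequency ranges quoted in the abstract
    "delta": (0.5, 3.5), "theta": (3.5, 7.0),
    "alpha1": (7.0, 11.0), "alpha2": (11.0, 13.0),
    "beta1": (13.0, 16.5), "beta2": (16.5, 20.0),
}

freqs, psd = welch(eeg, fs=fs, nperseg=fs * 4)   # power spectral density

for name, (lo, hi) in bands.items():
    mask = (freqs >= lo) & (freqs < hi)
    band_power = np.trapz(psd[mask], freqs[mask])  # integrate PSD over the band
    print(f"{name}: {band_power:.4f} (arbitrary units for synthetic data)")
```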
Procedia PDF Downloads 485
14398 Strategies to Promote Safety and Reduce the Vulnerability of Urban Worn-out Textures to the Potential Risk of Earthquake
Authors: Bahareh Montakhabi
Abstract:
Earthquake is known as one of the deadliest natural disasters, with a high potential for damage to life and property. Some of Iran's cities were completely destroyed after major earthquakes, and the people of those regions suffered severe mental, financial and psychological damage. Tehran is one of the cities located on a fault line. According to experts, the one city that could be severely damaged (70% destruction) by an earthquake of only moderate intensity on the Earthquake Engineering Intensity Scale (EEIS) is Tehran, because Tehran is built precisely on the fault. Seismic risk assessment (SRA) of cities at the scale of urban districts and neighborhoods is the first phase of the earthquake crisis management process; it can provide the information required to make optimal use of available resources and facilities in order to reduce the destructive effects and consequences of an earthquake. This study investigates strategies to promote safety and reduce the vulnerability of worn-out urban textures in District 12 of Tehran to the potential risk of earthquake, aiming to prioritize the factors affecting that vulnerability and the ways of reducing it, using an analytical-exploratory method, the analytic hierarchy process (AHP), Expert Choice software and the SWOT technique. The SWOT and AHP analysis of the vulnerability of the worn-out textures of District 12 to internal threats (1.70) and external threats (2.40) indicates weak safety of these textures with respect to internal and external factors and a high likelihood of damage. Keywords: risk management, vulnerability, worn-out textures, earthquake
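As context for the AHP prioritization mentioned above, the sketch below derives priority weights from a small pairwise comparison matrix using the principal-eigenvector method and checks the consistency ratio; the criteria and judgment values are hypothetical and not taken from the study.

```python
# A minimal AHP sketch: priority weights from a 3x3 pairwise comparison matrix.
# The criteria and judgments below are hypothetical examples, not the study's data.
import numpy as np

criteria = ["building age", "access to open space", "population density"]
A = np.array([
    [1.0, 3.0, 5.0],     # building age judged 3x and 5x more important
    [1/3, 1.0, 2.0],
    [1/5, 1/2, 1.0],
])

eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)                # index of the principal eigenvalue
weights = np.abs(eigvecs[:, k].real)
weights /= weights.sum()                   # normalized priority vector

# Consistency ratio (Saaty random index RI = 0.58 for n = 3)
n = A.shape[0]
ci = (eigvals[k].real - n) / (n - 1)
cr = ci / 0.58

for name, w in zip(criteria, weights):
    print(f"{name}: {w:.3f}")
print(f"consistency ratio: {cr:.3f}  (values below 0.1 are usually considered acceptable)")
```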
Procedia PDF Downloads 193
14397 Role of Sequestration of CO2 Due to the Carbonation in Total CO2 Emission Balance in Concrete Life
Authors: P. P. Woyciechowski
Abstract:
Calculation of the carbon footprint of cement concrete is a complex process that must consider the primary life phase (production of components and of the concrete itself, transportation, construction works, and maintenance of concrete structures) and the secondary life phase, including demolition and recycling. Taking the effect of concrete carbonation into account can reduce the calculated carbon footprint of concrete. In this paper, an example CO2 balance is presented for small bridge elements made of Portland cement reinforced concrete. The results include the effect of carbonation of the concrete in the structure and of the concrete rubble after demolition. It was shown that carbonation has an important impact on the balance only when carbonation of the rubble is possible, because only the sequestration potential in the secondary phase of concrete life is of significant value. Keywords: carbon footprint, balance of carbon dioxide in nature, concrete carbonation, the sequestration potential of concrete
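To make the kind of balance described above tangible, the following Python sketch tallies primary-phase emissions against carbonation uptake in the structure and in crushed rubble; every figure in it is a placeholder assumption, not a value from the paper.

```python
# A minimal sketch of a whole-life CO2 balance for a concrete element.
# All numbers are placeholder assumptions for illustration, not data from the paper.

# Primary-phase emissions (kg CO2 per m3 of concrete)
emissions = {
    "cement_and_aggregates": 300.0,
    "transport": 20.0,
    "construction_and_maintenance": 15.0,
}

# Carbonation uptake (kg CO2 per m3), split by life phase
uptake_structure = 25.0   # slow carbonation of the intact structure
uptake_rubble = 60.0      # faster carbonation of crushed rubble after demolition

def net_balance(include_rubble: bool) -> float:
    """Net CO2 balance: total emissions minus carbonation uptake."""
    total_emitted = sum(emissions.values())
    total_uptake = uptake_structure + (uptake_rubble if include_rubble else 0.0)
    return total_emitted - total_uptake

print(f"net CO2 without rubble carbonation: {net_balance(False):.1f} kg/m3")
print(f"net CO2 with rubble carbonation:    {net_balance(True):.1f} kg/m3")
```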
Procedia PDF Downloads 229