Search results for: dinitrosyl iron complex
389 Unveiling Drought Dynamics in the Cuneo District, Italy: A Machine Learning-Enhanced Hydrological Modelling Approach
Authors: Mohammadamin Hashemi, Mohammadreza Kashizadeh
Abstract:
Droughts pose a significant threat to sustainable water resource management, agriculture, and socioeconomic sectors, particularly in the context of climate change. This study investigates drought simulation using rainfall-runoff modelling in the Cuneo district, Italy, over the past 60 years. The study leverages the TUW model, a lumped conceptual rainfall-runoff model with a semi-distributed operation capability. Similar in structure to the widely used Hydrologiska Byråns Vattenbalansavdelning (HBV) model, the TUW model operates on daily timesteps for input and output data specific to each catchment. It incorporates essential routines for snow accumulation and melting, soil moisture storage, and streamflow generation. Discharge data from multiple catchments within the Cuneo district form the basis for thorough model calibration employing the Kling-Gupta Efficiency (KGE) metric. A crucial metric for reliable drought analysis is one that can accurately represent low-flow events during drought periods; this ensures that the model provides a realistic picture of water availability during these critical times. Subsequent validation of monthly discharge simulations thoroughly evaluates overall model performance. Beyond model development, the investigation delves into drought analysis using the robust Standardized Runoff Index (SRI), which allows for precise characterization of drought occurrences within the study area. A meticulous comparison of observed and simulated discharge data is conducted, with particular focus on the low-flow events that characterize droughts. Additionally, the study explores the complex interplay between land characteristics (e.g., soil type, vegetation cover) and climate variables (e.g., precipitation, temperature) that influence the severity and duration of hydrological droughts. The study's findings demonstrate successful calibration of the TUW model across most catchments, achieving commendable model efficiency.
Comparative analysis between simulated and observed discharge data reveals significant agreement, especially during critical low-flow periods. This agreement is further supported by the Pareto coefficient, a statistical measure of goodness-of-fit. The drought analysis provides critical insights into the duration, intensity, and severity of drought events within the Cuneo district. This newfound understanding of spatial and temporal drought dynamics offers valuable information for water resource management strategies and drought mitigation efforts. This research deepens our understanding of drought dynamics in the Cuneo region. Future research directions include refining hydrological modelling techniques and exploring future drought projections under various climate change scenarios.
Keywords: hydrologic extremes, hydrological drought, hydrological modelling, machine learning, rainfall-runoff modelling
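The KGE calibration metric mentioned above combines correlation, variability bias, and mean bias into a single score. A minimal sketch is given below; the abstract does not state which KGE variant was used, so the standard 2009 formulation with population statistics is assumed, and the discharge values are invented for illustration:

```python
import math

def kge(sim, obs):
    """Kling-Gupta Efficiency: 1 - sqrt((r-1)^2 + (alpha-1)^2 + (beta-1)^2)."""
    n = len(sim)
    mu_s, mu_o = sum(sim) / n, sum(obs) / n
    sd_s = math.sqrt(sum((x - mu_s) ** 2 for x in sim) / n)
    sd_o = math.sqrt(sum((x - mu_o) ** 2 for x in obs) / n)
    r = sum((s - mu_s) * (o - mu_o) for s, o in zip(sim, obs)) / (n * sd_s * sd_o)
    alpha = sd_s / sd_o   # variability ratio
    beta = mu_s / mu_o    # bias ratio
    return 1 - math.sqrt((r - 1) ** 2 + (alpha - 1) ** 2 + (beta - 1) ** 2)

# Hypothetical daily discharge values (m^3/s), not data from the study
obs = [2.1, 3.4, 1.8, 5.0, 4.2, 2.9]
sim = [2.0, 3.1, 2.0, 4.6, 4.5, 3.0]
print(round(kge(sim, obs), 3))
```

A score of 1 indicates a perfect match; values well below 1 flag bias or missing variability, which matters for the low-flow periods the study focuses on.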
Procedia PDF Downloads 41
388 Comparative Analysis of Smart City Development: Assessing the Resilience and Technological Advancement in Singapore and Bucharest
Authors: Sînziana Iancu
Abstract:
In an era marked by rapid urbanization and technological advancement, the concept of smart cities has emerged as a pivotal solution to address the complex challenges faced by urban centres. As cities strive to enhance the quality of life for their residents, the development of smart cities has gained prominence. This study embarks on a comparative analysis of two distinct smart city models, Singapore and Bucharest, to assess their resilience and technological advancements. The significance of this study lies in its potential to provide valuable insights into the strategies, strengths, and areas of improvement in smart city development, ultimately contributing to the advancement of urban planning and sustainability. Methodologies: This comparative study employs a multifaceted approach to comprehensively analyse the smart city development in Singapore and Bucharest: * Comparative Analysis: A systematic comparison of the two cities is conducted, focusing on key smart city indicators, including digital infrastructure, integrated public services, urban planning and sustainability, transportation and mobility, environmental monitoring, safety and security, innovation and economic resilience, and community engagement; * Case Studies: In-depth case studies are conducted to delve into specific smart city projects and initiatives in both cities, providing real-world examples of their successes and challenges; * Data Analysis: Official reports, statistical data, and relevant publications are analysed to gather quantitative insights into various aspects of smart city development. 
Major Findings: Through a comprehensive analysis of Singapore and Bucharest's smart city development, the study yields the following major findings: * Singapore excels in digital infrastructure, integrated public services, safety, and innovation, showcasing a high level of resilience across these domains; * Bucharest is in the early stages of smart city development, with notable potential for growth in digital infrastructure and community engagement; * Both cities exhibit a commitment to sustainable urban planning and environmental monitoring, with room for improvement in integrating these aspects into everyday life; * Transportation and mobility solutions are a priority for both cities, with Singapore having a more advanced system, while Bucharest is actively working on improving its transportation infrastructure; * Community engagement, while important, requires further attention in both cities to enhance the inclusivity of smart city initiatives. Conclusion: In conclusion, this study serves as a valuable resource for urban planners, policymakers, and stakeholders in understanding the nuances of smart city development and resilience. While Singapore stands as a beacon of success in various smart city indicators, Bucharest demonstrates potential and a willingness to adapt and grow in this domain. As cities worldwide embark on their smart city journeys, the lessons learned from Singapore and Bucharest provide invaluable insights into the path toward urban sustainability and resilience in the digital age.
Keywords: Bucharest, resilience, Singapore, smart city
Procedia PDF Downloads 69
387 Nurturing Scientific Minds: Enhancing Scientific Thinking in Children (Ages 5-9) through Experiential Learning in Kids Science Labs (STEM)
Authors: Aliya K. Salahova
Abstract:
Scientific thinking, characterized by purposeful knowledge-seeking and the harmonization of theory and facts, holds a crucial role in preparing young minds for an increasingly complex and technologically advanced world. This abstract presents a research study aimed at fostering scientific thinking in early childhood, focusing on children aged 5 to 9 years, through experiential learning in Kids Science Labs (STEM). The study utilized a longitudinal exploration design, spanning 240 weeks from September 2018 to April 2023, to evaluate the effectiveness of the Kids Science Labs program in developing scientific thinking skills. Participants in the research comprised 72 children drawn from local schools and community organizations. Through a formative psychology-pedagogical experiment, the experimental group engaged in weekly STEM activities carefully designed to stimulate scientific thinking, while the control group participated in daily art classes for comparison. To assess the scientific thinking abilities of the participants, a registration table with evaluation criteria was developed. This table included indicators such as depth of questioning, resource utilization in research, logical reasoning in hypotheses, procedural accuracy in experiments, and reflection on research processes. The data analysis revealed dynamic fluctuations in the number of children at different levels of scientific thinking proficiency. While the development was not uniform across all participants, a main leading factor emerged, indicating that the Kids Science Labs program and formative experiment exerted a positive impact on enhancing scientific thinking skills in children within this age range. The study's findings support the hypothesis that systematic implementation of STEM activities effectively promotes and nurtures scientific thinking in children aged 5-9 years. 
Enriching education with a specially planned STEM program, tailoring scientific activities to children's psychological development, and implementing well-planned diagnostic and corrective measures emerged as essential pedagogical conditions for enhancing scientific thinking abilities in this age group. The results highlight the significant and positive impact of the systematic-activity approach in developing scientific thinking, leading to notable progress and growth in children's scientific thinking abilities over time. These findings have promising implications for educators and researchers, emphasizing the importance of incorporating STEM activities into educational curricula to foster scientific thinking from an early age. This study contributes valuable insights to the field of science education and underscores the potential of STEM-based interventions in shaping the future scientific minds of young children.
Keywords: scientific thinking, education, STEM, intervention, psychology, pedagogy, collaborative learning, longitudinal study
Procedia PDF Downloads 61
386 The Concept of Path in Original Buddhism and the Concept of Psychotherapeutic Improvement
Authors: Beth Jacobs
Abstract:
The landmark movement of Western clinical psychology in the 20th century was the development of psychotherapy. The landmark movement of clinical psychology in the 21st century will be the absorption of meditation practices from Buddhist psychology. While millions of people explore meditation and related philosophy, very few people are exposed to the materials of original Buddhism on this topic, especially to the Theravadan Abhidharma. The Abhidharma is an intricate system of lists and matrixes that were used to understand and remember Buddha’s teaching. The Abhidharma delineates the first psychological system of Buddhism, how the mind works in the universe of reality and why meditation training strengthens and purifies the experience of life. Its lists outline the psychology of mental constructions, perception, emotion and cosmological causation. While the Abhidharma is technical, elaborate and complex, its essential purpose relates to the central purpose of clinical psychology: to relieve human suffering. Like Western depth psychology, the methodology rests on understanding underlying processes of consciousness and perception. What clinical psychologists might describe as therapeutic improvement, the Abhidharma delineates as a specific pathway of purified actions of consciousness. This paper discusses the concept of 'path' as presented in aspects of the Theravadan Abhidharma and relates this to current clinical psychological views of therapy outcomes and gains. The core path in Buddhism is the Eight-Fold Path, which is the fourth noble truth and the launching of activity toward liberation. The path is not composed of eight ordinal steps; it’s eight-fold and is described as opening the way, not funneling choices. The specific path in the Abhidharma is described in many steps of development of consciousness activities. The path is not something a human moves on, but something that moments of consciousness develop within. 
'Cittas' are extensively described in the Abhidharma as the atomic-level unit of a raw action of consciousness touching upon an object in a field, and there are 121 types of cittas categorized. The cittas are embedded in the mental factors, which could be described as the psychological packaging elements of our experiences of consciousness. Based on these constellations of infinitesimal, linked occurrences of consciousness, cittas are categorized by dimensions of purification. A path is a chain of cittas developing through causes and conditions. There are no selves, no pronouns in the Abhidharma. Instead of me walking a path, this is about a person working with conditions to cultivate a stream of consciousness that is pure, immediate, direct and generous. The same effort, in very different terms, informs the work of most psychotherapies. Depth psychology seeks to release the bound, unconscious elements of mental process into the clarity of realization. Cognitive and behavioral psychologies work on breaking down automatic thought valuations and actions, changing schemas and interpersonal dynamics. Understanding how the original Buddhist concept of positive human development relates to the clinical psychological concept of therapy weaves together two brilliant systems of thought on the development of human well-being.
Keywords: Abhidharma, Buddhist path, clinical psychology, psychotherapeutic outcome
Procedia PDF Downloads 213
385 Reading and Writing Memories in Artificial and Human Reasoning
Authors: Ian O'Loughlin
Abstract:
Memory networks aim to integrate some of the recent successes in machine learning with a dynamic memory base that can be updated and deployed in artificial reasoning tasks. These models involve training networks to identify, update, and operate over stored elements in a large memory array in order, for example, to ably perform question and answer tasks parsing real-world and simulated discourses. This family of approaches still faces numerous challenges: the performance of these network models in simulated domains remains considerably better than in open, real-world domains, wide-context cues remain elusive in parsing words and sentences, and even moderately complex sentence structures remain problematic. This innovation, employing an array of stored and updatable ‘memory’ elements over which the system operates as it parses text input and develops responses to questions, is a compelling one for at least two reasons: first, it addresses one of the difficulties that standard machine learning techniques face, by providing a way to store a large bank of facts, offering a way forward for the kinds of long-term reasoning that, for example, recurrent neural networks trained on a corpus have difficulty performing. Second, the addition of a stored long-term memory component in artificial reasoning seems psychologically plausible; human reasoning appears replete with invocations of long-term memory, and the stored but dynamic elements in the arrays of memory networks are deeply reminiscent of the way that human memory is readily and often characterized. However, this apparent psychological plausibility is belied by a recent turn in the study of human memory in cognitive science. In recent years, the very notion that there is a stored element which enables remembering, however dynamic or reconstructive it may be, has come under deep suspicion. 
In the wake of constructive memory studies, amnesia and impairment studies, and studies of implicit memory—as well as following considerations from the cognitive neuroscience of memory and conceptual analyses from the philosophy of mind and cognitive science—researchers are now rejecting storage and retrieval, even in principle, and instead seeking and developing models of human memory wherein plasticity and dynamics are the rule rather than the exception. In these models, storage is entirely avoided by modeling memory using a recurrent neural network designed to fit a preconceived energy function that attains zero values only for desired memory patterns, so that these patterns are the sole stable equilibrium points in the attractor network. So although the array of long-term memory elements in memory networks seems psychologically appropriate for reasoning systems, it may actually be incurring difficulties that are theoretically analogous to those that older, storage-based models of human memory have demonstrated. The kind of emergent stability found in the attractor network models more closely fits our best understanding of human long-term memory than do the memory network arrays, despite appearances to the contrary.
Keywords: artificial reasoning, human memory, machine learning, neural networks
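The attractor-network idea described above, in which memory patterns are not stored records but the only stable equilibria of the network dynamics, can be illustrated with a minimal Hopfield-style sketch. This is a generic textbook construction under Hebbian learning, not the specific energy-function model the abstract refers to:

```python
import numpy as np

# Two bipolar (+1/-1) patterns to be made stable equilibria of the dynamics
patterns = np.array([[1, -1, 1, -1, 1, -1, 1, -1],
                     [1,  1, 1,  1, -1, -1, -1, -1]])

# Hebbian outer-product weights; zero diagonal so no unit drives itself
W = (patterns.T @ patterns).astype(float)
np.fill_diagonal(W, 0)

def recall(state, steps=10):
    """Iterate the synchronous sign dynamics until (in practice) a fixed point."""
    s = state.copy()
    for _ in range(steps):
        s = np.where(W @ s >= 0, 1, -1)
    return s

# A corrupted cue settles into the nearest stored attractor:
cue = patterns[0].copy()
cue[0] = -cue[0]          # flip one unit
print(recall(cue))
```

Nothing in `W` is a retrievable record of either pattern; remembering here is the relaxation of the dynamics onto an equilibrium, which is the contrast with the explicit memory arrays of memory networks.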
Procedia PDF Downloads 271
384 Integrations of the Instructional System Design for Students Learning Achievement Motives and Science Attitudes with STEM Educational Model on Stoichiometry Issue in Chemistry Classes with Different Genders
Authors: Tiptunya Duangsri, Panwilai Chomchid, Natchanok Jansawang
Abstract:
This research study investigated the educational decisions that must be made about which content should be passed on to future generations as obligatory for all members of a chemistry class, including students preparing themselves for specialized roles. Descriptions of the instructional design are provided, and recent criticisms are discussed. The study outlines an integrative framework for the description of information, in which the instructional design model gives structure to the negotiation of conscious understanding. The aims of this study were to describe the instructional design model and to compare, between genders, the effects of the STEM educational model on students' learning achievement motives, science attitudes, and logical thinking abilities. A sample of 18 students at the 11th grade level in Mahawichanukul School was selected with the cluster random sampling technique. The chemistry learning environment was administered with the STEM education method. Five instructional lesson plans were developed, and the 30-item Logical Thinking Test (LTT), comprising five scales (Inference, Recognition of Assumptions, Deduction, Interpretation, and Evaluation), was used. Students' attitudes toward chemistry were assessed with the Test of Chemistry-Related Attitudes (TOCRA). Content validity was checked with the Index of Item-Objective Congruence (IOC) by five expert educators; the E1/E2 process efficiency values of 84.05/81.42 exceeded the 80/80 standard criterion. Comparisons of students' learning achievement motives with the STEM educational model on the stoichiometry issue differed significantly between genders at the .05 level.
Associations between students' learning achievement motives and their posttest outcomes were also examined: the predictive efficiency (R2) values indicate that 69% and 70% of the variance in logical thinking abilities was accounted for in the male and female student groups, respectively. Likewise, the R2 values indicate that 73% and 74% of the variance in science attitudes toward chemistry was accounted for in the male and female student groups. Students' perceptions of their chemistry classroom learning environment and their science attitudes toward chemistry, measured with the MCI and TOCRA, were statistically significantly associated: the R2 values indicate that 72% and 74% of the variance in chemistry classroom climate was accounted for in the male and female student groups, respectively. These findings suggest that supporting chemistry and science teachers from science, technology, engineering and mathematics (STEM) in addressing complex teaching and learning issues related to instructional design (to develop, teach, and assess) is an important strategy, with a focus on the STEM education instructional method.
Keywords: development, the instructional design model, students' learning achievement motives, science attitudes with STEM educational model, stoichiometry issue, chemistry classes, genders
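The predictive efficiency (R2) values reported above are, for a single predictor, the squared Pearson correlation between predictor and outcome scores. A small sketch with hypothetical score data (the actual study data are not available here):

```python
def r_squared(x, y):
    """Predictive efficiency R^2 as the squared Pearson correlation."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy ** 2 / (sxx * syy)

# Hypothetical achievement-motive scores vs. logical-thinking posttest scores
motives = [12, 15, 9, 18, 14, 11, 16, 13]
logical = [21, 25, 17, 29, 24, 20, 27, 22]
print(round(r_squared(motives, logical), 2))
```

An R2 of 0.69 would mean 69% of the outcome variance is accounted for by the predictor, which is how the percentages in the abstract are read.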
Procedia PDF Downloads 274
383 Correlation Analysis between Sensory Processing Sensitivity (SPS), Meares-Irlen Syndrome (MIS) and Dyslexia
Authors: Kaaryn M. Cater
Abstract:
Students with sensory processing sensitivity (SPS), Meares-Irlen Syndrome (MIS) and dyslexia can become overwhelmed and struggle to thrive in traditional tertiary learning environments. An estimated 50% of tertiary students who disclose learning related issues are dyslexic. This study explores the relationship between SPS, MIS and dyslexia. Baseline measures will be analysed to establish any correlation between these three minority methods of information processing. SPS is an innate sensitivity trait found in 15-20% of the population and has been identified in over 100 species of animals. Humans with SPS are referred to as Highly Sensitive People (HSP), and the measure of HSP is a 27-item self-test known as the Highly Sensitive Person Scale (HSPS). A 2016 study conducted by the author established baseline data for HSP students in a tertiary institution in New Zealand. The results of the study showed that all participating HSP students believed the knowledge of SPS to be life-changing and useful in managing life and study; in addition, they believed that all tutors and incoming students should be given information on SPS. MIS is a visual processing and perception disorder that is found in approximately 10% of the population and has a variety of symptoms including visual fatigue, headaches and nausea. One way to ease some of these symptoms is through the use of colored lenses or overlays. Dyslexia is a complex phonologically based information processing variation present in approximately 10% of the population. An estimated 50% of dyslexics are thought to have MIS. The study exploring possible correlations between these minority forms of information processing is due to begin in February 2017. An invitation will be extended to all first year students enrolled in degree programmes across all faculties and schools within the institution. An estimated 900 students will be eligible to participate in the study.
Participants will be asked to complete a battery of on-line questionnaires including the Highly Sensitive Person Scale, the International Dyslexia Association adult self-assessment and the adapted Irlen indicator. All three scales have been used extensively in the literature and have been validated among many populations. All participants whose score on any (or some) of the three questionnaires suggests a minority method of information processing will receive an invitation to meet with a learning advisor and be given access to counselling services if they choose. Meeting with a learning advisor is not mandatory, and some participants may choose not to receive help. Data will be collected using the QuestionPro platform, and baseline data will be analysed using correlation and regression analysis to identify relationships and predictors between SPS, MIS and dyslexia. This study forms part of a larger three-year longitudinal study, and participants will be required to complete questionnaires at annual intervals in subsequent years of the study until completion of (or withdrawal from) their degree. At these data collection points, participants will be questioned on any additional support received relating to their minority method(s) of information processing. Data from this study will be available by April 2017.
Keywords: dyslexia, highly sensitive person (HSP), Meares-Irlen Syndrome (MIS), minority forms of information processing, sensory processing sensitivity (SPS)
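The screening step described above, in which each participant's three questionnaire scores are mapped to candidate processing styles, might look like the following sketch. The cutoff values are illustrative placeholders, not the published thresholds of the HSPS, the IDA self-assessment, or the adapted Irlen indicator:

```python
# Illustrative cutoffs only -- not the instruments' official thresholds.
HSP_CUTOFF = 14       # items endorsed on the 27-item HSPS
DYSLEXIA_CUTOFF = 20  # IDA adult self-assessment score
IRLEN_CUTOFF = 3      # adapted Irlen indicator score

def flag_participant(hsp_score, dyslexia_score, irlen_score):
    """Return the minority processing styles suggested by the three screens."""
    flags = []
    if hsp_score >= HSP_CUTOFF:
        flags.append("SPS")
    if dyslexia_score >= DYSLEXIA_CUTOFF:
        flags.append("dyslexia")
    if irlen_score >= IRLEN_CUTOFF:
        flags.append("MIS")
    return flags

print(flag_participant(16, 22, 1))
```

Participants with a non-empty flag list would be the ones invited to meet a learning advisor; the flags also define the binary variables whose co-occurrence the correlation analysis would examine.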
Procedia PDF Downloads 244
382 Polar Bears in Antarctica: An Analysis of Treaty Barriers
Authors: Madison Hall
Abstract:
The Assisted Colonization of polar bears to Antarctica requires a careful analysis of treaties to understand existing legal barriers to Ursus maritimus transport and movement. An absence of land-based migration routes prevents polar bears from accessing southern polar regions on their own. This lack of access is compounded by current treaties which limit human intervention and assistance to ford these physical and legal barriers. In a time of massive planetary extinctions, Assisted Colonization posits that certain endangered species may be prime candidates for relocation to hospitable environments to which they have never previously had access. By analyzing existing treaties, this paper will examine how polar bears are limited in movement by humankind's legal barriers. International treaties may be considered codified reflections of anthropocentric values and of the best knowledge and understanding of an identified problem at a set point in time, as understood through the human lens. Even as human social values and scientific insights evolve, so too must the treaties which specify legal frameworks and structures impacting keystone species and related biomes. Due to costs and other myriad difficulties, only a very select number of species will be given this opportunity. While some species move into new regions and are then deemed invasive, Assisted Colonization considers that some assistance may be mandated due to the nature of humankind's role in climate change. This moral question and ethical imperative, set against the backdrop of escalating climate impacts, drive the question forward: what is the potential for successfully relocating a select handful of charismatic and ecologically important life forms? Is it possible to reimagine a different, but balanced, Antarctic ecosystem? Listed as a threatened species under the U.S.
Endangered Species Act, a result of the ongoing loss of critical habitat by melting sea ice, polar bears have limited options for long term survival in the wild. Our current regime for safeguarding animals facing extinction frequently utilizes zoos and their breeding programs to keep alive the genetic diversity of the species until some future time when reintroduction, somewhere, may be attempted. By exploring the potential for polar bears to be relocated to Antarctica, we must analyze the complex ethical, legal, political, financial, and biological realms which are the backdrop to framing all questions in this arena. Can we do it? Should we do it? By utilizing an environmental ethics perspective, we propose that the Ecological Commons of the Arctic and Antarctic should not be viewed solely through the lens of human resource management needs. From this perspective, polar bears do not need our permission; they need our assistance. Antarctica therefore represents a second, if imperfect, chance to buy time for polar bears, in a world where polar regimes, not yet fully understood, are themselves quickly changing as a result of climate change.
Keywords: polar bear, climate change, environmental ethics, Arctic, Antarctica, assisted colonization, treaty
Procedia PDF Downloads 421
381 Computer-Based Identification of Possible Molecular Targets for Induction of Drug Resistance Reversion in Multidrug-Resistant Mycobacterium tuberculosis
Authors: Oleg Reva, Ilya Korotetskiy, Marina Lankina, Murat Kulmanov, Aleksandr Ilin
Abstract:
Molecular docking approaches are widely used for the design of new antibiotics and the modeling of antibacterial activities of numerous ligands which bind specifically to active centers of indispensable enzymes and/or key signaling proteins of pathogens. Widespread drug resistance among pathogenic microorganisms calls for the development of new antibiotics specifically targeting important metabolic and information pathways. A generally recognized problem is that almost all molecular targets have been identified already, and it is getting more and more difficult to design innovative antibacterial compounds to combat drug resistance. A promising way to overcome the drug resistance problem is the induction of reversion of drug resistance by supplementary medicines to improve the efficacy of conventional antibiotics. In contrast to well-established computer-based drug design, modeling of drug resistance reversion is still in its infancy. In this work, we propose an approach to the identification of compensatory genetic variants reducing the fitness cost associated with the acquisition of drug resistance by pathogenic bacteria. The approach was based on an analysis of the population genetics of Mycobacterium tuberculosis and on results of experimental modeling of the drug resistance reversion induced by a new anti-tuberculosis drug, FS-1. The latter drug is an iodine-containing nanomolecular complex that passed clinical trials and was approved as a new medicine against MDR-TB in Kazakhstan. Isolates of M. tuberculosis obtained at different stages of the clinical trials, and also from laboratory animals infected with an MDR-TB strain, were characterized by antibiotic resistance, and their genomes were sequenced with the paired-end Illumina HiSeq 2000 technology.
A steady increase in sensitivity to conventional anti-tuberculosis antibiotics in a series of isolates treated with FS-1 was registered, despite the fact that the canonical drug resistance mutations identified in the genomes of these isolates remained intact. It was hypothesized that the drug resistance phenotype in M. tuberculosis requires an adjustment of the activities of many genes to compensate for the fitness cost of the drug resistance mutations. FS-1 caused an aggravation of the fitness cost and removal of the drug-resistant variants of M. tuberculosis from the population. This process caused a significant increase in genetic heterogeneity of the M. tuberculosis population that was not observed in the positive and negative controls (infected laboratory animals left untreated or treated solely with the antibiotics). A large-scale search for linkage disequilibrium associations between the drug resistance mutations and genetic variants in other genomic loci allowed identification of target proteins which could be influenced by supplementary drugs to increase the fitness cost of the drug resistance and deprive the drug-resistant bacterial variants of their competitiveness in the population. The approach will be used to improve the efficacy of FS-1 and also for computer-based design of new drugs to combat drug-resistant infections.
Keywords: complete genome sequencing, computational modeling, drug resistance reversion, Mycobacterium tuberculosis
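The linkage disequilibrium search mentioned above can be sketched, in simplified form, as computing the r2 association between a resistance mutation and a candidate compensatory variant across sequenced isolates. The haplotype data below are invented for illustration, and real analyses use many loci and phased genomes:

```python
def ld_r2(haplotypes):
    """r^2 linkage disequilibrium between two biallelic loci.

    haplotypes: list of (allele1, allele2) pairs with alleles coded 0/1.
    """
    n = len(haplotypes)
    pA = sum(a for a, _ in haplotypes) / n                       # freq of allele 1 at locus 1
    pB = sum(b for _, b in haplotypes) / n                       # freq of allele 1 at locus 2
    pAB = sum(1 for a, b in haplotypes if a == 1 and b == 1) / n # joint freq
    D = pAB - pA * pB                                            # disequilibrium coefficient
    return D * D / (pA * (1 - pA) * pB * (1 - pB))

# Hypothetical presence/absence of a resistance mutation (locus 1) and a
# candidate compensatory variant (locus 2) across eight isolates
isolates = [(1, 1), (1, 1), (1, 1), (0, 0), (0, 0), (0, 0), (1, 0), (0, 1)]
print(round(ld_r2(isolates), 3))
```

Loci whose variants travel together with the resistance mutations (high r2) are the candidates whose protein products a supplementary drug might target to raise the fitness cost of resistance.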
Procedia PDF Downloads 263
380 Multiscale Modelization of Multilayered Bi-Dimensional Soils
Authors: I. Hosni, L. Bennaceur Farah, N. Saber, R. Bennaceur
Abstract:
Soil moisture content is a key variable in many environmental sciences. Even though it represents a small proportion of the liquid freshwater on Earth, it modulates interactions between the land surface and the atmosphere, thereby influencing climate and weather. Accurate modeling of the above processes depends on the ability to provide a proper spatial characterization of soil moisture. The measurement of soil moisture content allows assessment of soil water resources in the fields of hydrology and agronomy. The second parameter in interaction with the radar signal is the geometric structure of the soil. Most traditional electromagnetic models consider natural surfaces as single-scale, zero-mean, stationary Gaussian random processes. Roughness behavior is characterized by statistical parameters like the Root Mean Square (RMS) height and the correlation length. The main problem is that the agreement between experimental measurements and theoretical values is usually poor due to the large variability of the correlation function, and as a consequence, backscattering models have often failed to predict backscattering correctly. In this study, surfaces are considered as band-limited fractal random processes corresponding to a superposition of a finite number of one-dimensional Gaussian processes, each one having its own spatial scale. Multiscale roughness is characterized by two parameters: the first one is proportional to the RMS height, and the other one is related to the fractal dimension. Soil moisture is related to the complex dielectric constant. This multiscale description has been adapted to two-dimensional profiles using the bi-dimensional wavelet transform and the Mallat algorithm to describe natural surfaces more correctly. We characterize the soil surfaces and sub-surfaces by a three-layer geo-electrical model.
The upper layer is described by its dielectric constant, its thickness, a multiscale bi-dimensional surface roughness model based on the wavelet transform and the Mallat algorithm, and volume scattering parameters. The lower layer is divided into three fictive layers separated by assumed plane interfaces. These three layers are modeled as an effective medium characterized by an apparent effective dielectric constant that takes into account the presence of air pockets in the soil. We adopted the 2D multiscale three-layer small perturbation model, including first air pockets in the soil sub-structure and then a vegetation canopy in the soil surface structure, to simulate the radar backscattering. A sensitivity analysis of the dependence of the backscattering coefficient on multiscale roughness and soil moisture was performed. We then proposed changing the dielectric constant of the multilayer medium so that it takes into account the different moisture values of each layer in the soil. A sensitivity analysis of the backscattering coefficient, including the air pockets in the volume structure, with respect to the multiscale roughness parameters and the apparent dielectric constant was carried out. Finally, we studied the behavior of the radar backscattering coefficient for a soil with a vegetation layer in its surface structure. Keywords: multiscale, bidimensional, wavelets, backscattering, multilayer, SPM, air pockets
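As a rough illustration of the multiscale surface description used in this abstract, the sketch below performs one level of a 2D Haar wavelet decomposition, the simplest instance of the Mallat algorithm; the `surface` height map and the Haar filter choice are illustrative assumptions, not the authors' data or filters.

```python
# One level of a 2D Haar wavelet decomposition (Mallat algorithm, simplest filter).
# It splits a height map into an approximation (LL) and detail (LH, HL, HH) sub-bands;
# repeating the step on LL yields the multiscale description of surface roughness.

def haar_2d(grid):
    """One Mallat decomposition level; grid must have even dimensions."""
    rows, cols = len(grid), len(grid[0])
    ll = [[0.0] * (cols // 2) for _ in range(rows // 2)]
    lh = [[0.0] * (cols // 2) for _ in range(rows // 2)]
    hl = [[0.0] * (cols // 2) for _ in range(rows // 2)]
    hh = [[0.0] * (cols // 2) for _ in range(rows // 2)]
    for i in range(0, rows, 2):
        for j in range(0, cols, 2):
            a, b = grid[i][j], grid[i][j + 1]
            c, d = grid[i + 1][j], grid[i + 1][j + 1]
            ll[i // 2][j // 2] = (a + b + c + d) / 4.0  # local mean (coarse scale)
            lh[i // 2][j // 2] = (a + b - c - d) / 4.0  # vertical detail
            hl[i // 2][j // 2] = (a - b + c - d) / 4.0  # horizontal detail
            hh[i // 2][j // 2] = (a - b - c + d) / 4.0  # diagonal detail
    return ll, lh, hl, hh

# Hypothetical 4x4 height map (arbitrary units)
surface = [[1.0, 2.0, 0.0, 1.0],
           [3.0, 4.0, 1.0, 2.0],
           [0.0, 1.0, 5.0, 4.0],
           [2.0, 1.0, 3.0, 2.0]]

ll, lh, hl, hh = haar_2d(surface)
print(ll)  # coarse-scale approximation of the surface
```

In practice a library implementation (e.g. a multilevel 2D DWT) would be used; the point here is only the scale-by-scale split that underlies the multiscale roughness parameters.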
Procedia PDF Downloads 125379 Self-Organizing Maps for Exploration of Partially Observed Data and Imputation of Missing Values in the Context of the Manufacture of Aircraft Engines
Authors: Sara Rejeb, Catherine Duveau, Tabea Rebafka
Abstract:
To monitor the production process of turbofan aircraft engines, multiple measurements of various geometrical parameters are systematically recorded on manufactured parts. Engine parts are subject to extremely high standards as they can impact the performance of the engine. Therefore, it is essential to analyze these databases to better understand the influence of the different parameters on the engine's performance. Self-organizing maps are unsupervised neural networks which achieve two tasks simultaneously: they visualize high-dimensional data by projection onto a 2-dimensional map and provide clustering of the data. This technique has become very popular for data exploration since it provides easily interpretable results and a meaningful global view of the data. As such, self-organizing maps are usually applied to aircraft engine condition monitoring. As databases in this field are huge and complex, they naturally contain multiple missing entries for various reasons. The classical Kohonen algorithm to compute self-organizing maps is conceived for complete data only. A naive approach to deal with partially observed data consists in deleting items or variables with missing entries. However, this requires a sufficient number of complete individuals to be fairly representative of the population; otherwise, deletion leads to a considerable loss of information. Moreover, deletion can also induce bias in the analysis results. Alternatively, one can first apply a common imputation method to create a complete dataset and then apply the Kohonen algorithm. However, the choice of the imputation method may have a strong impact on the resulting self-organizing map. Our approach is to address simultaneously the two problems of computing a self-organizing map and imputing missing values, as these tasks are not independent. In this work, we propose an extension of self-organizing maps for partially observed data, referred to as missSOM. 
First, we introduce a criterion to be optimized that aims at simultaneously defining the best self-organizing map and the best imputations for the missing entries. As such, missSOM is also an imputation method for missing values. To minimize the criterion, we propose an iterative algorithm that alternates the learning of a self-organizing map and the imputation of missing values. Moreover, we develop an accelerated version of the algorithm by entwining the iterations of the Kohonen algorithm with the updates of the imputed values. This method is efficiently implemented in R and will soon be released on CRAN. Compared to the standard Kohonen algorithm, it does not come with any additional cost in terms of computing time. Numerical experiments illustrate that missSOM performs well in terms of both clustering and imputation compared to the state of the art. In particular, it turns out that missSOM is robust to the missingness mechanism, in contrast to many imputation methods that are appropriate for only a single mechanism. This is an important property of missSOM as, in practice, the missingness mechanism is often unknown. An application to measurements on one type of part is also provided and shows the practical interest of missSOM. Keywords: imputation method of missing data, partially observed data, robustness to missingness mechanism, self-organizing maps
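The alternating scheme described in this abstract can be sketched as follows. This is a deliberately simplified, stdlib-only illustration of the idea (distances computed on observed entries only, missing entries imputed from the winning prototype), not the missSOM implementation released on CRAN; the data, unit count, and learning rate are illustrative assumptions.

```python
import random

# Toy alternating scheme in the spirit of missSOM: (1) assign each item to its
# best-matching prototype using only observed coordinates, (2) update that
# prototype, (3) finally impute missing entries from the winning prototype.
# None marks a missing value.

def masked_dist(x, w):
    # Squared distance over observed coordinates only
    return sum((xi - wi) ** 2 for xi, wi in zip(x, w) if xi is not None)

def fit_impute(data, n_units=2, iters=50, lr=0.3, seed=0):
    rng = random.Random(seed)
    dim = len(data[0])
    protos = [[rng.random() for _ in range(dim)] for _ in range(n_units)]
    for _ in range(iters):
        for x in data:
            k = min(range(n_units), key=lambda u: masked_dist(x, protos[u]))
            for j in range(dim):
                if x[j] is not None:          # Kohonen-style prototype update
                    protos[k][j] += lr * (x[j] - protos[k][j])
    imputed = []
    for x in data:
        k = min(range(n_units), key=lambda u: masked_dist(x, protos[u]))
        imputed.append([protos[k][j] if x[j] is None else x[j] for j in range(dim)])
    return protos, imputed

# Two clear clusters near (0, 0) and (5, 5); one item has a missing second entry.
data = [[0.1, 0.2], [0.0, 0.1], [5.0, 5.1], [4.9, None]]
protos, completed = fit_impute(data)
print(completed[-1])  # missing entry filled in from the matching prototype
```

The incomplete item is assigned by its observed coordinate alone, so its missing entry inherits the value learned from the complete items of the same cluster, which is the intuition behind treating map learning and imputation as one joint problem.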
Procedia PDF Downloads 151378 Globalisation and Diplomacy: How Can Small States Improve the Practice of Diplomacy to Secure Their Foreign Policy Objectives?
Authors: H. M. Ross-McAlpine
Abstract:
Much of what is written on diplomacy, globalization, and the global economy addresses the changing nature of relationships between major powers. While the most dramatic and influential changes have resulted from these developing relationships, the world is not, on deeper inspection, governed neatly by major powers. Due to advances in technology, the shifting balance of power, and a changing geopolitical order, small states have the ability to exercise greater influence than ever before. Increasingly interdependent and ever more complex, our world is too delicate to be handled by a mighty few. The pressure of global change requires small states to adapt their diplomatic practices and diversify their strategic alliances and relationships. The nature and practice of diplomacy must be re-evaluated in light of the pressures resulting from globalization. This research examines how small states can best secure their foreign policy objectives. Small state theory is used as a foundation for exploring the case study of New Zealand. The research draws on secondary sources to evaluate the existing theory in relation to modern practices of diplomacy. As New Zealand lacks the economic and military power required to play an active, influential role in international affairs, what strategies does it use to exert influence? Furthermore, New Zealand lies in a remote corner of the Pacific and is geographically isolated from its nearest neighbors; how does this affect its security and trade priorities? The findings note a significant shift since the 1970s in New Zealand’s diplomatic relations. This shift is arguably a direct result of globalization, regionalism, and a growing independence from the traditional bilateral relationships. The need to source predictable trade, investment, and technology is an essential driving force for New Zealand’s diplomatic relations.
A lack of hard power aligns New Zealand’s prosperity with a secure, rules-based international system that increases the likelihood of a stable and secure global order. New Zealand’s diplomacy and prosperity have been intrinsically reliant on its reputation. A vital component of New Zealand’s diplomacy is preserving a reputation for integrity and global responsibility. It is the use of this soft power that facilitates the influence New Zealand enjoys on the world stage. To weave a comprehensive network of successful diplomatic relationships, New Zealand must maintain a reputation of international credibility. Globalization has substantially influenced the practice of diplomacy for New Zealand. The current world order places economic and military might in the hands of a few, requiring smaller states to use other means to secure their interests. There are clear strategies evident in New Zealand’s diplomatic practice that draw attention to how other smaller states might best secure their foreign policy objectives. While these findings are limited, as with all case study research, there is value in applying them to other small states struggling to secure their interests in the wake of rapid globalization. Keywords: diplomacy, foreign policy, globalisation, small state
Procedia PDF Downloads 396377 Attention Treatment for People With Aphasia: Language-Specific vs. Domain-General Neurofeedback
Authors: Yael Neumann
Abstract:
Attention deficits are common in people with aphasia (PWA). Two treatment approaches address these deficits: domain-general methods like Play Attention, which focus on cognitive functioning, and domain-specific methods like Language-Specific Attention Treatment (L-SAT), which use linguistically based tasks. Research indicates that L-SAT can improve both attentional deficits and functional language skills, while Play Attention has shown success in enhancing attentional capabilities among school-aged children with attention issues compared to standard cognitive training. This study employed a randomized controlled cross-over single-subject design to evaluate the effectiveness of these two attention treatments over 25 weeks. Four PWA participated, undergoing a battery of eight standardized tests measuring language and cognitive skills. The treatments were counterbalanced. Play Attention used EEG sensors to detect brainwaves, enabling participants to manipulate items in a computer game while learning to suppress theta activity and increase beta activity. An algorithm tracked changes in the theta-to-beta ratio, allowing points to be earned during the games. L-SAT, on the other hand, involved hierarchical language tasks that increased in complexity, requiring greater attention from participants. Results showed that for language tests, Participant 1 (moderate aphasia) aligned with existing literature, showing L-SAT was more effective than Play Attention. However, Participants 2 (very severe) and 3 and 4 (mild) did not conform to this pattern; both treatments yielded similar outcomes. This may be due to the extremes of aphasia severity: the very severe participant faced significant overall deficits, making both approaches equally challenging, while the mild participant performed well initially, leaving limited room for improvement. In attention tests, Participants 1 and 4 exhibited results consistent with prior research, indicating Play Attention was superior to L-SAT. 
Participant 2, however, showed no significant improvement with either program, although L-SAT had a slight edge on the Visual Elevator task, measuring switching and mental flexibility. This advantage was not sustained at the one-month follow-up, likely due to the participant’s struggles with complex attention tasks. Participant 3's results similarly did not align with prior studies, revealing no difference between the two treatments, possibly due to the challenging nature of the attention measures used. Regarding participation and ecological tests, all participants showed similar mild improvements with both treatments. This limited progress could stem from the short study duration, with only five weeks allocated for each treatment, which may not have been enough time to achieve meaningful changes affecting life participation. In conclusion, the performance of participants appeared influenced by their level of aphasia severity. The moderate PWA’s results were most aligned with existing literature, indicating better attention improvement from the domain-general approach (Play Attention) and better language improvement from the domain-specific approach (L-SAT). Keywords: attention, language, cognitive rehabilitation, neurofeedback
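The neurofeedback loop described in this abstract rewards decreases in the theta-to-beta power ratio. A minimal sketch of that computation, using a naive DFT on a synthetic one-second signal; the sampling rate, band edges, and test signal are illustrative assumptions, not the Play Attention internals.

```python
import math

# Naive DFT band-power estimate and theta/beta ratio, as used conceptually in
# theta-beta neurofeedback: the trainee is rewarded when the ratio drops
# (less theta, more beta).

def band_power(x, fs, lo, hi):
    n = len(x)
    power = 0.0
    for k in range(1, n // 2):            # skip DC, positive frequencies only
        freq = k * fs / n
        if lo <= freq <= hi:
            re = sum(x[t] * math.cos(2 * math.pi * k * t / n) for t in range(n))
            im = sum(-x[t] * math.sin(2 * math.pi * k * t / n) for t in range(n))
            power += (re * re + im * im) / n
    return power

fs = 128                                   # assumed sampling rate (Hz)
# One second of synthetic "EEG": strong 6 Hz (theta) plus weaker 20 Hz (beta)
sig = [2.0 * math.sin(2 * math.pi * 6 * i / fs)
       + 0.5 * math.sin(2 * math.pi * 20 * i / fs) for i in range(fs)]

theta = band_power(sig, fs, 4, 8)          # theta band: 4-8 Hz
beta = band_power(sig, fs, 13, 30)         # beta band: 13-30 Hz
print(theta / beta)                        # high ratio -> trainee should reduce it
```

A real system would window the signal and update the ratio continuously, feeding it back into the game as described above.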
Procedia PDF Downloads 17376 The Effect of Artificial Intelligence on Mobile Phones and Communication Systems
Authors: Ibram Khalafalla Roshdy Shokry
Abstract:
This paper presents a carrier sense multiple access (CSMA) communication model based on an SoC design methodology. Such a model can be used to support the modelling of complex wireless communication systems; the use of such a communication model is therefore an important method in the construction of high-performance communication. SystemC has been selected because it offers a homogeneous design flow for complex designs (i.e., SoC and IP-based design). We use a swarm system to validate the designed CSMA model and to show the advantages of incorporating communication early in the design process. The wireless communication is created via the modelling of the CSMA protocol, which can be used to achieve communication among all the agents and to coordinate access to the shared medium (channel). Equipping vehicles with wireless communication capabilities is expected to be the key to the evolution towards next-generation intelligent transportation systems (ITS). The IEEE community has been continuously working on the development of a wireless vehicular communication protocol for the enhancement of Wireless Access in Vehicular Environments (WAVE). Vehicular communication systems, known as V2X, support vehicle-to-vehicle (V2V) and vehicle-to-infrastructure (V2I) communications. The efficiency of such communication systems depends on several factors, among which the surrounding environment and mobility are prominent. As a result, this study focuses on the evaluation of the actual performance of vehicular communication, with particular attention to the effects of the real environment and of mobility on V2X communication. It begins by determining the actual maximum range that such communication can support and then evaluates V2I and V2V performance. The Arada LocoMate OBU transmission device was used to test and evaluate the effect of the transmission range in V2X communication.
The evaluation of V2I and V2V communication takes into account the real effects of low and high mobility on transmission. Multiagent systems have received significant attention in numerous fields, including robotics, autonomous vehicles, and distributed computing, where multiple agents cooperate and communicate to achieve complex tasks. Efficient communication among agents is a critical aspect of these systems, because it directly influences their overall performance and scalability. This work explores essential communication factors and conducts a comparative assessment of the diverse protocols utilized in multiagent systems. The emphasis lies in scrutinizing the strengths, weaknesses, and applicability of these protocols across diverse scenarios. The study also sheds light on emerging trends in communication protocols for multiagent systems, including the incorporation of machine learning techniques and the adoption of blockchain-based solutions to ensure secure communication. These developments offer valuable insights into the evolving landscape of multiagent systems and their communication protocols. Keywords: communication, multi-agent systems, protocols, consensus, SystemC, modelling, simulation, CSMA
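The medium-access idea behind the CSMA model above can be sketched with a toy slotted simulation; the agent count, persistence probability, and slot loop are illustrative assumptions, not the SystemC model itself.

```python
import random

# Toy p-persistent CSMA over a shared channel: in each slot, every agent with a
# pending frame transmits with probability p. The slot succeeds only when exactly
# one agent transmits; two or more transmitters collide.

def simulate(n_agents=5, p=0.3, slots=10_000, seed=1):
    rng = random.Random(seed)
    success = collision = 0
    for _ in range(slots):
        transmitters = sum(1 for _ in range(n_agents) if rng.random() < p)
        if transmitters == 1:
            success += 1
        elif transmitters > 1:
            collision += 1
    return success / slots, collision / slots

throughput, collision_rate = simulate()
print(throughput, collision_rate)
```

With five agents and p = 0.3, the analytic success probability per slot is 5 · 0.3 · 0.7^4 ≈ 0.36, which the simulation should approach; it is this shared-medium coordination that the SystemC CSMA model captures at a much finer level of detail.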
Procedia PDF Downloads 25375 Exploring the Impact of Mobility-Related Treatments (Drug and Non-Pharmacological) on Independence and Wellbeing in Parkinson’s Disease - A Qualitative Synthesis
Authors: Cameron Wilson, Megan Hanrahan, Katie Brittain, Riona McArdle, Alison Keogh, Lynn Rochester
Abstract:
Background: The loss of mobility and functional independence is a significant marker in the progression of neurodegenerative diseases such as Parkinson’s Disease (PD). Pharmacological, surgical, and therapeutic treatments are available that can help in the management and amelioration of PD symptoms; however, these only delay more severe symptoms. Accordingly, ensuring that people with PD can maintain independence and healthy wellbeing is essential in establishing an effective treatment option for those affected. Existing literature reviews have examined experiences of engaging with PD treatment options and the impact of PD on independence and wellbeing. However, the literature fails to explore the influence of treatment options on independence and wellbeing and therefore misses what people value in their treatment. This review is the first to synthesise the impact of mobility-related treatments on independence and wellbeing in people with PD and their carers, offering recommendations for clinical practice and providing a conceptual framework (in development) for future research and practice. Objectives: To explore the impact of mobility-related treatment (both pharmacological and non-pharmacological) on the independence and wellbeing of people with PD and their carers. To propose a conceptual framework for patients, carers and clinicians which captures the qualities people with PD value as part of their treatment. Methods: We performed a critical interpretive synthesis of qualitative evidence, searching six databases for reports that explored the impact of mobility-related treatments (both drug and non-pharmacological) on independence and wellbeing in Parkinson’s Disease. The types of treatments included medication (Levodopa and Amantadine), dance classes, Deep-Brain Stimulation, aquatic therapies, physical rehabilitation, balance training and foetal transplantation.
Data were extracted, and quality was assessed using an adapted version of the NICE Quality Appraisal Tool (Appendix H) before being synthesised according to the critical interpretive synthesis framework and the meta-ethnography process. Results: From 2301 records, 28 were eligible. Experiences and the impact of the treatment pathway on independence and wellbeing were similar across all types of treatment and are described by five inter-related themes: (i) desire to maintain independence, (ii) treatment as a social experience during and after, (iii) medication to strengthen emotional health, (iv) recognising physical capacity, and (v) emphasising the personal journey of Parkinson’s treatments. Conclusion: There is a complex and inter-related experience and effect of PD treatments common across all types of treatment. The proposed conceptual framework (in development) provides patients, carers, and clinicians with recommendations to personalise the delivery of PD treatment, thereby potentially improving adherence and effectiveness. This work is vital to disseminate as PD treatment transitions from subjective, clinically captured assessments to a more personalised process supplemented by wearable technology. Keywords: parkinson's disease, medication, treatment, dance, review, healthcare, delivery, levodopa, social, emotional, psychological, personalised healthcare
Procedia PDF Downloads 89374 Development of Alternative Fuels Technologies for Transportation
Authors: Szymon Kuczynski, Krystian Liszka, Mariusz Laciak, Andrii Oliinyk, Adam Szurlej
Abstract:
Currently, in automotive transport, almost exclusively hydrocarbon-based fuels are used to power vehicles. As the consumption of hydrocarbon fuels increases, quality parameters are being tightened to keep the environment clean. At the same time, efforts are being undertaken to develop alternative fuels. The reasons for seeking alternatives to petrol and diesel are: to increase vehicle efficiency, to reduce the environmental impact, to cut greenhouse gas emissions, and to save the limited oil resources. Significant progress has been made on the development of alternative fuels such as methanol, ethanol, natural gas (CNG/LNG), LPG, dimethyl ether (DME) and biodiesel. In addition, the biggest vehicle manufacturers are working on fuel cell vehicles and their introduction to the market. Alcohols such as methanol and ethanol make excellent fuels for spark-ignition engines. Their advantages are a high antiknock value, which determines their application as an additive (10%) to unleaded petrol, and the relative purity of the exhaust gases produced. Ethanol is produced by distillation of plant products, whose diversion from use as food can be irrational. Ethanol production can also be costly for the entire economy of a country, because it requires large, complex distillation plants, large amounts of biomass and, finally, a significant amount of fuel to sustain the process. At the same time, the fermentation of plants releases large quantities of carbon dioxide into the atmosphere. Natural gas cannot be directly converted into liquid fuels, although such arrangements have been proposed in the literature; going through a stage of intermediates is still inevitable. The most popular route is conversion to methanol, which can be processed further to dimethyl ether (DME) or olefins (ethylene and propylene) for the petrochemical sector. Methanol production uses natural gas as a raw material but requires expensive and advanced production processes.
In relation to pollutant emissions, the optimal vehicle fuel is LPG, which is used in many countries as an engine fuel. Production of LPG is inextricably linked with the production and processing of oil and gas, of which it represents a small percentage. Its potential as an alternative to traditional fuels is therefore proportionately reduced. Biogas may also be an excellent engine fuel; however, it is subject to the same limitations as ethanol, since the same production process and raw materials are used. The most essential fuel in the campaign to protect the environment against pollution is natural gas. Natural gas as a fuel may be either compressed (CNG) or liquefied (LNG). Natural gas can also be used for hydrogen production by steam reforming. Hydrogen can be used as a basic starting material for the chemical industry, an important raw material in refinery processes, as well as a fuel for vehicle transportation. Natural gas used as CNG represents an excellent compromise, given the availability of a technology that is proven and relatively cheap to use in many areas of the automotive industry. Natural gas can also be seen as an important bridge to other alternative sources of energy derived from fuels that are harmless to the environment. For these reasons, CNG as a fuel stimulates considerable interest worldwide. Keywords: alternative fuels, CNG (Compressed Natural Gas), LNG (Liquefied Natural Gas), NGVs (Natural Gas Vehicles)
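As a back-of-the-envelope check on the steam reforming route mentioned above, the sketch below computes the ideal hydrogen yield per kilogram of methane from the overall stoichiometry CH4 + 2 H2O → CO2 + 4 H2 (reforming plus water-gas shift); the molar masses are standard values, and perfect conversion is an idealization, not a claim from the abstract.

```python
# Ideal H2 yield from steam methane reforming followed by the water-gas shift:
#   CH4 + H2O -> CO  + 3 H2   (reforming)
#   CO  + H2O -> CO2 +   H2   (shift)
# Overall:  CH4 + 2 H2O -> CO2 + 4 H2

M_CH4 = 16.04   # g/mol, molar mass of methane
M_H2 = 2.016    # g/mol, molar mass of hydrogen

def h2_yield_per_kg_methane():
    moles_ch4 = 1000.0 / M_CH4          # mol CH4 in 1 kg
    moles_h2 = 4 * moles_ch4            # 4 mol H2 per mol CH4 (ideal)
    return moles_h2 * M_H2 / 1000.0     # kg H2

print(h2_yield_per_kg_methane())  # ~0.50 kg H2 per kg CH4 at perfect conversion
```

Real reformers fall short of this bound because conversion is incomplete and part of the feed gas is burned to sustain the endothermic reaction.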
Procedia PDF Downloads 181373 Hardware Implementation for the Contact Force Reconstruction in Tactile Sensor Arrays
Authors: María-Luisa Pinto-Salamanca, Wilson-Javier Pérez-Holguín
Abstract:
Reconstruction of contact forces is a fundamental technique for analyzing the properties of a touched object and is essential for regulating the grip force in slip control loops. It is based on processing the distribution, intensity, and direction of the forces captured by the sensors. Currently, efficient hardware alternatives are being used more frequently in different fields of application, allowing the implementation of computationally complex algorithms, as is the case with tactile signal processing. The use of hardware for smart tactile sensing systems is a research area that promises to improve the processing time and portability requirements of applications such as artificial skin and robotics, among others. The literature review shows that hardware implementations are present today in almost all stages of smart tactile sensing systems except the force reconstruction process, a stage in which they have been less applied. This work presents a hardware implementation of a model-driven method reported in the literature for the contact force reconstruction of flat and rigid tactile sensor arrays from normal stress data. Based on the analysis of a software implementation of such a model, this implementation proposes the parallelization of tasks that facilitate the execution of matrix operations and of a two-dimensional optimization function to obtain a force vector for each taxel in the array. This work seeks to take advantage of the parallel hardware characteristics of Field Programmable Gate Arrays (FPGAs) and the possibility of applying appropriate techniques for algorithm parallelization, using as a guide the rules of generalization, efficiency, and scalability in the tactile decoding process and considering low latency, low power consumption, and real-time execution as the main design parameters.
The results show a maximum estimation error of 32% in the tangential forces and 22% in the normal forces with respect to simulation by the Finite Element Modeling (FEM) technique of Hertzian and non-Hertzian contact events, over sensor arrays of 10 × 10 taxels of different sizes. The hardware implementation was carried out on an MPSoC XCZU9EG-2FFVB1156 platform from Xilinx® that allows the reconstruction of force vectors following a scalable approach, from the information captured by tactile sensor arrays composed of up to 48 × 48 taxels that use various transduction technologies. The proposed implementation demonstrates a reduction in estimation time of x/180 compared to software implementations. Despite the relatively high values of the estimation errors, the information provided by this implementation on the tangential and normal tractions and the triaxial reconstruction of forces allows the tactile properties of the touched object to be adequately reconstructed; these are similar to those obtained in the software implementation and in the two FEM simulations taken as reference. Although the errors could be reduced further, the proposed implementation is useful for decoding contact forces in portable tactile sensing systems, thus helping to expand electronic skin applications in robotic and biomedical contexts. Keywords: contact forces reconstruction, forces estimation, tactile sensor array, hardware implementation
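The per-taxel optimization step described in this abstract can be illustrated with a toy version: each taxel independently fits a two-component force from its stress readings under an assumed linear response model. The response matrix, readings, and gradient-descent solver below are illustrative assumptions; the actual model and its FPGA parallelization are far more elaborate.

```python
# Toy per-taxel 2D optimization: recover (tangential, normal) force components
# from two stress readings, assuming readings = R @ force for a known 2x2
# response matrix R. On an FPGA, every taxel would run this loop in parallel.

R = [[1.0, 0.2],
     [0.1, 0.8]]  # assumed linear taxel response matrix

def fit_force(readings, steps=2000, lr=0.1):
    ft, fn = 0.0, 0.0                     # initial force guess
    for _ in range(steps):
        # residual of the linear model
        r0 = R[0][0] * ft + R[0][1] * fn - readings[0]
        r1 = R[1][0] * ft + R[1][1] * fn - readings[1]
        # gradient of 0.5 * (r0^2 + r1^2) with respect to (ft, fn)
        g_ft = r0 * R[0][0] + r1 * R[1][0]
        g_fn = r0 * R[0][1] + r1 * R[1][1]
        ft -= lr * g_ft
        fn -= lr * g_fn
    return ft, fn

# Readings generated by a true force (ft, fn) = (2.0, 3.0), i.e. R @ (2, 3)
readings = [1.0 * 2.0 + 0.2 * 3.0, 0.1 * 2.0 + 0.8 * 3.0]   # [2.6, 2.6]
ft, fn = fit_force(readings)
print(round(ft, 3), round(fn, 3))  # ≈ 2.0 3.0
```

Because each taxel's fit is independent, the whole array maps naturally onto parallel hardware, which is the property the FPGA implementation exploits.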
Procedia PDF Downloads 195372 Computational and Experimental Study of the Mechanics of Heart Tube Formation in the Chick Embryo
Authors: Hadi S. Hosseini, Larry A. Taber
Abstract:
In the embryo, the heart is initially a simple tubular structure that undergoes complex morphological changes as it transforms into a four-chambered pump. This work focuses on the mechanisms that create the heart tube (HT). The early embryo is composed of three relatively flat primary germ layers called endoderm, mesoderm, and ectoderm. Precardiac cells located within bilateral regions of the mesoderm called heart fields (HFs) fold and fuse along the embryonic midline to create the HT. The right and left halves of this plate fold symmetrically to bring their upper edges into contact along the midline, where they fuse. In a region near the fusion line, these layers then separate to generate the primitive HT and foregut, which then extend vertically. The anterior intestinal portal (AIP) is the opening at the caudal end of the foregut, which descends as the HT lengthens. The biomechanical mechanisms that drive this folding are poorly understood. Our central hypothesis is that folding is caused by differences in growth between the endoderm and mesoderm, while subsequent extension is driven by contraction along the AIP. The feasibility of this hypothesis is examined using experiments with chick embryos and finite-element modeling (FEM). Fertilized white Leghorn chicken eggs were incubated for approximately 22-33 hours until the appropriate Hamburger and Hamilton stage (HH5 to HH9) was reached. To inhibit contraction, embryos were cultured in media containing blebbistatin (a myosin II inhibitor) for 18 h. Three-dimensional models were created using ABAQUS (D. S. Simulia). The initial geometry consists of a flat plate comprising two layers representing the mesoderm and endoderm. Tissue was considered a nonlinear elastic material, with growth and contraction (negative growth) simulated using a theory in which the total deformation gradient is given by F = F* · G, where G is the growth tensor and F* is the elastic deformation gradient tensor.
In embryos exposed to blebbistatin, initial folding and AIP descension occurred normally. However, after the HFs partially fused to create the upper part of the HT, fusion and AIP descension stopped, and the HT failed to grow longer. These results suggest that cytoskeletal contraction is required only for the later stages of HT formation. In the model, a larger biaxial growth rate in the mesoderm compared to the endoderm causes the bilayered plate to bend ventrally as the upper edge moves toward the midline, where it 'fuses' with the other half. This folding creates the upper section of the HT, as well as the foregut pocket bordered by the AIP. After this phase completes by stage HH7, contraction along the arch-shaped AIP pulls the lower edge of the plate downward, stretching the two layers. The results given by the model are in reasonable agreement with experimental data for the shape of the HT, as well as the patterns of stress and strain. In conclusion, the results of our study support our hypothesis for the creation of the heart tube. Keywords: heart tube formation, FEM, chick embryo, biomechanics
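The multiplicative decomposition used in the model can be illustrated numerically: given a total deformation gradient F and a growth tensor G, the elastic part follows as F* = F · G⁻¹, and stresses in the growth theory depend only on F*. The 2×2 matrices below are illustrative values, not the chick-embryo fields from the ABAQUS model.

```python
# Multiplicative growth decomposition F = F* . G  =>  F* = F . inv(G).
# Only the elastic part F* generates stress; G describes stress-free growth.

def mat_mul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def inv_2x2(m):
    det = m[0][0] * m[1][1] - m[0][1] * m[1][0]
    return [[m[1][1] / det, -m[0][1] / det],
            [-m[1][0] / det, m[0][0] / det]]

G = [[1.2, 0.0],
     [0.0, 1.1]]          # assumed anisotropic growth (20% and 10%)
F = [[1.32, 0.0],
     [0.0, 1.1]]          # assumed total deformation gradient

F_elastic = mat_mul(F, inv_2x2(G))
print(F_elastic)  # elastic stretch left after "removing" growth
```

Here the second direction grows exactly as much as it deforms, so its elastic stretch is 1 (stress-free), while the first direction retains a 10% elastic stretch; this is how differential growth between layers generates the bending stresses in the model.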
Procedia PDF Downloads 296371 A Protocol Study of Accessibility: Physician’s Perspective Regarding Disability and Continuum of Care
Authors: Sidra Jawed
Abstract:
The accessibility constructs and the body privilege discourse have been a major problem when dealing with health inequities and inaccessibility. The inherent problem in this arbitrary view of disability is the assumption that disability could never be a productive way of living. For the past thirty years, disability activists have been working to differentiate ‘impairment’ from ‘disability’ and probing for more understanding of the limitations imposed by society; this notion is ultimately known as the Social Model of Disability. Vulnerable populations such as the disability community remain marginalized and are seen relentlessly fighting to highlight the importance of social factors. These constitute not only physical architectural barriers and the famous blue symbol of access to healthcare but also invisible, intangible barriers such as attitudes and behaviours. Conventionally, the idea of ‘disability’ has been laden with prejudiced perception amalgamated with biased attitudes. Equity in the contemporary setup necessitates the restructuring of organizational structures. Apparently simple, the complex interplay of disability and the contemporary healthcare setup often ends up negotiating vital components of basic healthcare needs. The role of society is indispensable when it comes to people with disability (PWD): everything from access to healthcare to timely interventions is strongly related to the setup in place and the attitude of healthcare providers. It is vital to understand the association between assumptions and the quality of healthcare PWD receive in our global healthcare setup. Most of the time, the crucial physician-patient relationship with PWD is governed by the negative assumptions of the physicians. This multifaceted, troubled patient-physician relationship has been neglected in the past. To compound this, insufficient work has been done to explore physicians’ perspectives on disability and the access to healthcare that PWD currently have.
This research project is directed towards physicians’ perspectives on the intersection of health and access to healthcare for PWD. The principal aim of the study is to explore the perception of disability among family medicine physicians, highlighting the underpinnings of the medical perspective in the healthcare institution. In the quest to remove barriers, the first step must be to identify the barriers and formulate a plan for future policies, involving all the stakeholders. Semi-structured interviews will explore themes such as accessibility, medical training, the constructs of the social and medical models of disability, time limitations, and financial constraints. The main research interest is to identify the obstacles to inclusion and the marginalization that extends from basic living necessities to wide health inequity in present society. Physicians’ points of view are largely missing from the research landscape and from the current forum of knowledge. This research will provide policy makers with a starting point and comprehensive background knowledge that can be a stepping stone for future research and further the knowledge translation process to strengthen healthcare. Additionally, it will facilitate the process of knowledge translation between the medical and disability communities, which is much needed. Keywords: disability, physicians, social model, accessibility
Procedia PDF Downloads 222
370 Design and Integration of an Energy Harvesting Vibration Absorber for Rotating System
Authors: F. Infante, W. Kaal, S. Perfetto, S. Herold
Abstract:
In the last decade, the demand for wireless sensors and low-power electric devices for condition monitoring of mechanical structures has increased strongly. Networks of wireless sensors can potentially be applied in a huge variety of applications. Due to the reduction of both the size and the power consumption of electric components and the increasing complexity of mechanical systems, interest in creating dense sensor-node networks has become very salient. Nevertheless, with the development of large sensor networks with numerous nodes, the critical problem of powering them is drawing more and more attention. Batteries are not a valid alternative, considering their lifetime, their size, and the effort involved in replacing them. Among the possible durable power sources usable in mechanical components, vibrations represent a suitable source for the amount of power required to feed a wireless sensor network. For this purpose, energy harvesting from structural vibrations has received much attention in the past few years. Suitable vibrations can be found in numerous mechanical environments, including moving automotive structures and household applications, but also civil engineering structures like buildings and bridges. Similarly, the dynamic vibration absorber (DVA) is one of the most used devices to mitigate unwanted vibration of structures. This device transfers the primary structural vibration to an auxiliary system, so that the related energy is effectively localized in the secondary, less sensitive structure. The additional benefit of harvesting part of that energy can then be obtained by implementing dedicated components. This paper describes the design process of an energy harvesting tuned vibration absorber (EHTVA) for rotating systems using piezoelectric elements, in which the energy of the vibration is converted into electricity rather than dissipated.
The proposed device is designed to mitigate torsional vibrations, as a conventional rotational TVA does, while harvesting energy as a power source for immediate use or storage. The resulting rotational multi-degree-of-freedom (MDOF) system is initially reduced to an equivalent single-degree-of-freedom (SDOF) system. Den Hartog’s theory is used to evaluate the optimal mechanical parameters of the initial DVA for the SDOF system thus defined. The performance of the TVA is assessed operationally, and the vibration reduction at the original resonance frequency is measured. The design is then modified for the integration of active piezoelectric patches without detuning the TVA. In order to estimate the real power generated, a complex storage circuit is implemented: a DC-DC step-down converter is connected to the device through a rectifier to deliver a fixed output voltage. With a large capacitor introduced, the energy stored is measured at different frequencies. Finally, the electromechanical prototype is tested and validated, achieving the reduction and harvesting functions simultaneously.
Keywords: energy harvesting, piezoelectricity, torsional vibration, vibration absorber
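For an undamped primary system, Den Hartog's classical tuning rules give the absorber's optimal frequency ratio and damping ratio directly from the mass ratio mu: f_opt = 1/(1 + mu) and zeta_opt = sqrt(3*mu / (8*(1 + mu)^3)). A minimal sketch of this design step (the 5% mass ratio is an illustrative value, not one taken from the study):

```python
import math

def den_hartog_optimal(mass_ratio):
    """Classical Den Hartog tuning for a DVA on an undamped primary SDOF system.

    mass_ratio: absorber mass / primary mass (mu).
    Returns (frequency_ratio, damping_ratio) of the absorber.
    """
    mu = mass_ratio
    freq_ratio = 1.0 / (1.0 + mu)                                  # f_opt = 1 / (1 + mu)
    damping_ratio = math.sqrt(3.0 * mu / (8.0 * (1.0 + mu) ** 3))  # zeta_opt
    return freq_ratio, damping_ratio

# Example with a hypothetical 5% mass ratio
f_opt, zeta_opt = den_hartog_optimal(0.05)
print(f"frequency ratio = {f_opt:.4f}, damping ratio = {zeta_opt:.4f}")
```

With these two ratios and the chosen absorber mass, the absorber stiffness and damping constants follow directly from the primary system's natural frequency.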
Procedia PDF Downloads 147
369 Microbiological and Physicochemical Evaluation of Traditional Greek Kopanisti Cheese Produced by Different Starter Cultures
Authors: M. Kazou, A. Gavriil, O. Kalagkatsi, T. Paschos, E. Tsakalidou
Abstract:
Kopanisti cheese is a Greek soft Protected Designation of Origin (PDO) cheese made of raw cow, sheep, or goat milk, or mixtures of them, with organoleptic characteristics similar to those of Roquefort cheese. Traditional manufacturing of Kopanisti cheese is limited to small-scale dairies and proceeds without the addition of starter cultures; instead, an amount of over-mature Kopanisti cheese, called Mana Kopanisti, is used to initiate ripening. Therefore, the selection of proper starter cultures and an understanding of the contribution of the various microbial groups to its overall quality are crucial for the production of a high-quality final product with standardized organoleptic and physicochemical characteristics. Taking the above into account, the aim of the present study was to investigate the Kopanisti cheese microbiota and its role in cheese quality. For this purpose, four different types of Kopanisti were produced in triplicate, all with pasteurized cow milk, with the addition of (A) the typical mesophilic species Lactococcus lactis and Lactobacillus paracasei, used as starters in the production of soft spread cheeses, (B) strains of Lactobacillus acidipiscis and Lactobacillus rennini previously isolated from Kopanisti and Mana Kopanisti, (C) all the species from (A) and (B) as inoculum, and finally (D) the species from (A) plus Mana Kopanisti. Physicochemical and microbiological analyses were performed on milk and cheese samples during ripening. Enumeration was performed for the major groups of lactic acid bacteria (LAB), total mesophilic bacteria, and yeasts, as well as for hygiene-indicator microorganisms. Bacterial isolates from all the LAB groups apart from enterococci, alongside yeast isolates, were initially grouped using repetitive sequence-based polymerase chain reaction (rep-PCR) and then identified at the species level by sequencing of the 16S rRNA gene and the internal transcribed spacer (ITS) DNA region, respectively.
Sensory evaluation was also performed on the final cheese samples at the end of the ripening period (35 days). Based on the results of the classical microbiological analysis, the average counts of the total mesophilic bacteria and LAB (apart from enterococci) ranged between 7 and 10 log colony-forming units (CFU) g⁻¹, psychrotrophic bacteria and yeast extract glucose chloramphenicol (YGC) isolates between 4 and 8 log CFU g⁻¹, while coliforms and enterococci reached up to 2 log CFU g⁻¹ throughout ripening in cheese samples A, C, and D. In contrast, in cheese sample B, the average counts of the total mesophilic bacteria, LAB (apart from enterococci), psychrotrophic bacteria, and YGC isolates ranged between 0 and 10 log CFU g⁻¹, with coliforms and enterococci again up to 2 log CFU g⁻¹. Although the microbial counts were not markedly different among samples, identification of the bacterial and yeast isolates revealed the complex microbial community structure present in each cheese sample. Differences in the physicochemical characteristics among the cheese samples were also observed, with pH ranging from 4.3 to 5.3 and moisture from 49.6 to 58.0% in the final cheese products. Interestingly, the sensory evaluation also revealed differences among the samples, with cheese sample B ranking first on total score. Overall, the combination of these analyses highlighted the impact of different starter cultures on the Kopanisti microbiota as well as on the physicochemical and sensory characteristics of the final product.
Keywords: Kopanisti cheese, microbiota, classical microbiological analysis, physicochemical analysis
Procedia PDF Downloads 135
368 From Shelf to Shell - The Corporate Form in the Era of Over-Regulation
Authors: Chrysthia Papacleovoulou
Abstract:
The era of de-regulation, offshore and tax-haven jurisdictions, and shelf companies has come to an end, even as the usage of complex corporate structures involving trust instruments, special purpose vehicles, holding-subsidiary arrangements in offshore haven jurisdictions, and the exploitation of tax treaties is soaring. States that raced to introduce corporate-friendly legislation, tax incentives, and creative international trust law in order to attract greater FDI are now faced with regulatory challenges and are forced to revisit the corporate form and its tax treatment. The fiduciary services industry, which dominated over the last three decades, is now striving to keep up with the new regulatory framework resulting from a number of European and international legislative measures. This article considers the challenges to the company and the corporate form posed by the legislative measures on tax planning and tax avoidance: CRS reporting, FATCA, CFC rules, the OECD’s BEPS, the EU Commission's new transparency rules for intermediaries (which extend to the tax advisors, accountants, banks, and lawyers who design and promote tax planning schemes for their clients), new EU rules to block artificial tax arrangements and new transparency requirements for financial accounts, tax rulings and multinationals’ activities (DAC 6), the G20's decision on a global 15% minimum corporate tax, and banking regulation. As a result, states find themselves in a race of over-regulation and compliance. These legislative measures amount to a global upside-down tax harmonisation. Through the adoption of the OECD’s BEPS, states agreed to international collaboration to end tax avoidance and reform international taxation rules.
Whilst the idea was to ensure that multinationals would pay their fair share of tax everywhere they operate, an indirect result of the aforementioned regulatory measures was to target private clients: individuals who, over the past three decades, used the international tax system and jurisdictions such as the Marshall Islands, the Cayman Islands, the British Virgin Islands, Bermuda, the Seychelles, St. Vincent, Jersey, Guernsey, Liechtenstein, Monaco, Cyprus, and Malta, to name but a few, to engage in legitimate tax planning and tax avoidance. Companies can no longer maintain bank accounts without satisfying a real-substance test. States override the incorporation doctrine and apply a real-seat or real-substance test in taxing companies and their activities, even targeting the beneficial owners personally with tax liability. Tax authorities in civil law jurisdictions lift the corporate veil through public UBO and trust registries. As a result, the corporate form and the doctrine of limited liability are challenged at their core. Lastly, this article identifies the development of new instruments, such as funds and private placement insurance policies, and the trend of digital-nomad workers. The baffling question is whether industry and states can meet somewhere in the middle and exit this over-regulation frenzy.
Keywords: company, regulation, tax, corporate structure, trust vehicles, real seat
Procedia PDF Downloads 139
367 Species Profiling of Scarab Beetles with the Help of Light Trap in Western Himalayan Region of Uttarakhand
Authors: Ajay Kumar Pandey
Abstract:
White grubs (Coleoptera: Scarabaeidae), locally known as Kurmula, Pagra, or Chinchu, are a major destructive pest in the western Himalayan region of the Uttarakhand state of India. Various crops, such as cereals (upland paddy, wheat, and barley), vegetables (capsicum, cabbage, tomato, cauliflower, carrot, etc.), and some pulses (like pigeon pea, green gram, and black gram), are grown with limited availability of primary resources. Among the various limitations to the successful cultivation of these crops, white grubs have proved to be a major constraint for all crops grown in the hilly areas. The losses incurred due to white grubs are huge in commercial crops like sugarcane, groundnut, potato, maize, and upland rice. Moreover, they have proved a major constraint on potato production in the mid and higher hills of India. Adults emerge in May-June following the onset of the monsoon and thereafter defoliate apple, apricot, plum, and walnut at night, while 2nd- and 3rd-instar grubs feed on the live roots of cultivated as well as non-cultivated crops from August to January. A survey was conducted in the hilly areas (Pauri and Tehri districts) as well as the plains (Haridwar district) of Uttarakhand state. Beetles were collected from various locations from August to September over five consecutive years, with the help of a light trap and directly from host plants. Grubs were also collected by excavating one-square-metre areas at different locations and were reared in the laboratory to obtain adults. During the collection, diseased or dead cadavers were also collected, brought to the laboratory, and their causal organisms identified. A total of 25 white grub species were identified, of which Holotrichia longipennis, Anomala dimidiata, Holotrichia lineatopennis, Maladera insanabilis, and Brahmina sp. form a pest complex in different areas of Uttarakhand, where they cause severe damage to various crops.
During the survey, it was observed that white grub beetles vary in their preference of host plant, and even in their choice of the fruit versus the leaves of a host plant. A white grub species identified as Lepidiota mansueta Burmeister was found causing severe damage to the sugarcane crop grown in the major sugarcane-growing belt of Haridwar district. The study also revealed that Bacillus cereus, Beauveria bassiana, Metarhizium anisopliae, Steinernema, and Heterorhabditis are the major disease-causing agents of the immature stages of white grubs under the rain-fed conditions of Uttarakhand, causing 15.55 to 21.63 percent natural mortality of grubs, with an average of 18.91 percent. Among these microorganisms, B. cereus was found to be significantly more efficient (7.03 percent mortality) than the entomopathogenic fungi (3.80 percent mortality) and nematodes (3.20 percent mortality).
Keywords: Lepidiota, profiling, Uttarakhand, white grub
Procedia PDF Downloads 220
366 A Comparative Study on the Influencing Factors of Urban Residential Land Prices Among Regions
Authors: Guo Bingkun
Abstract:
With the rapid development of China's social economy and the continuous improvement of the urbanization level, people's living standards have undergone tremendous changes, and more and more people are gathering in cities. Demand for urban housing has been greatly released over the past decade. The demand for housing, and for the construction land required for urban development, has put huge pressure on urban operations, and land prices have risen rapidly in the short term. Moreover, a comparison across China reveals great differences in urban socioeconomics and land prices among the eastern, central, and western regions. Judging from overall market development, after more than ten years of housing market reform, the quality of housing and the efficiency of land use in Chinese cities have greatly improved. However, the contradiction between the land demand of urban socio-economic development and land supply, especially for urban residential land, has not been effectively alleviated. Since land is closely linked to all aspects of society, changes in land prices are affected by many complex factors. This paper therefore studies the factors that may affect urban residential land prices, compares them among eastern, central, and western cities, and identifies the main factors that determine the level of urban residential land prices. It provides guidance for urban managers in formulating land policies and easing the imbalance between land supply and demand, offers distinct ideas for improving urban planning, and promotes the improvement of urban management. The research focuses on residential land prices. Generally, the indicators for measuring land prices include benchmark land prices, land price level values, and parcel land prices.
However, considering the requirements of data continuity and representativeness, this paper uses residential land price level values to reflect the status of urban residential land prices. First, based on existing research at home and abroad, the paper considers both land supply and land demand and, building on basic theoretical analysis, identifies factors that may affect urban residential land prices, such as urban expansion, taxation, land reserves, population, and land benefits, selecting corresponding representative indicators for each. Second, using conventional econometric analysis methods, we established a model of the factors affecting urban residential land prices, quantitatively analysed the relationship between the influencing factors and residential land prices and its strength, and compared the similarities and differences in these effects among the eastern, central, and western regions. The results show that the main factors affecting China's urban residential land prices are urban expansion, land use efficiency, taxation, population size, and residents' consumption. The main reasons for the differences in residential land prices among the eastern, central, and western regions are differences in urban expansion patterns, industrial structures, urban carrying capacity, and real estate development investment.
Keywords: urban housing, urban planning, housing prices, comparative study
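The abstract does not spell out the econometric model; as an illustration of the general approach (a linear model of land price on a few of the listed factors, estimated by ordinary least squares), the sketch below fits such a model on synthetic data. All variable names and coefficients here are made up for demonstration and are not results from the study:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200

# Hypothetical explanatory variables, named after factors listed in the text
urban_expansion = rng.uniform(0, 10, n)
land_use_efficiency = rng.uniform(0, 5, n)
population = rng.uniform(1, 20, n)

# Synthetic land price generated from known coefficients (illustration only)
price = 50.0 + 3.0 * urban_expansion + 2.0 * land_use_efficiency + 1.5 * population

# Ordinary least squares: solve X @ beta ~= price for beta
X = np.column_stack([np.ones(n), urban_expansion, land_use_efficiency, population])
beta, *_ = np.linalg.lstsq(X, price, rcond=None)
print(beta)  # on this noiseless data, recovers roughly [50, 3.0, 2.0, 1.5]
```

A regional comparison like the one in the paper would fit the same specification separately on eastern, central, and western city samples and compare the estimated coefficients.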
Procedia PDF Downloads 49
365 Working Without a Safety Net: Exploring Struggles and Dilemmas Faced by Greek Orthodox Married Clergy Through a Mental Health Lens, in the Australian Context
Authors: Catherine Constantinidis (Nee Tsacalos)
Abstract:
This paper presents one aspect of a larger qualitative Masters study exploring the roles of married Greek Orthodox clergy, the Priest and the Presbytera, under the wing of the Greek Orthodox Archdiocese of Australia. This groundbreaking research necessitated the creation of primary data within a phenomenological paradigm, drawing from the lived experiences of Priests and Presbyteres in contemporary society. As a Social Worker, a bilingual (Greek/English) Mental Health practitioner, and a Presbytera, the questions the author constantly raises and ponders are: Who do the Priest and Presbytera turn to when they experience difficulties or problems? Where do they go for support? What is in place for their emotional and psychological health and well-being? Who cares for the spiritual carer? Who is there to catch our falling clergy and their wives? What is their 'safety net'? Identified phenomena of angst, stress, frustration, and confusion experienced by the Priest and, by extension, the Presbytera within their positions, coupled with basic assumptions, perceptions, and expectations about their roles, the role of the organisation (the Church), and their role as spouse, often caused confusion and, in some cases, conflict. Unpacking this complex and multi-dimensional relationship highlighted not only a roller coaster of emotions, potentially affecting their physical and mental health, but also the impact on the interwoven relationships of marriage and ministry. The author considers these phenomena in the light of bilingual cultural and religious organisational practice frameworks, specifically the Greek Orthodox Church, whilst filtering the findings through a mental health lens. One could argue that there is an expectation that clergy (and by default their wives) take on the responsibility to be kind, nurturing, and supportive to others; however, when it comes to taking care of themselves, they are not nearly as kind.
This research examines a recurrent theme throughout the interviews: all participants talked about limited support systems and poor self-care strategies, and about the impact this has on their ministry, their mental, emotional, and physical health, and ultimately on their relationships with self and others. The struggle all participants encountered at some point in their ministry was physical, spiritual, and psychological burnout. The overall aim of the researcher is to provide a voice for the Priest and the Presbytera, painting a clearer picture of these roles and facilitating an awareness of the struggles and dilemmas faced in their ministry. It is hoped that the identified gaps in self-care strategies and support systems will provide solid foundations for building a culturally sensitive, empathetic, and effective support-system framework incorporating the spiritual and psychological well-being of the Priest and Presbytera: a ‘safety net’. A supplementary aim is to inform and guide ministry practice frameworks for clergy, spouses, the church hierarchy, and religious organisations on local and global platforms, incorporating some form of self-care system.
Keywords: care for the carer, mental health, Priest, Presbytera, religion, support system
Procedia PDF Downloads 392
364 Seismic Response Control of Multi-Span Bridge Using Magnetorheological Dampers
Authors: B. Neethu, Diptesh Das
Abstract:
The present study investigates the performance of a semi-active controller using magnetorheological (MR) dampers for seismic response reduction of a multi-span bridge. The application of structural control to structures under earthquake excitation involves numerous challenges, such as the proper formulation and selection of the control strategy, mathematical modeling of the system, uncertainty in system parameters, and noisy measurements. These problems need to be tackled in order to design and develop controllers that perform efficiently in such complex systems. One control algorithm that can accommodate uncertainty and imprecision better than most, owing to its inherent robustness and its ability to cope with parameter uncertainties, is the sliding mode algorithm. It is adopted in the present study due to its inherent stability and its distinguished robustness to system parameter variation and external disturbances. In general, a semi-active control scheme using an MR damper requires two nested controllers: (i) an overall system controller, which derives the control force required to be applied to the structure, and (ii) an MR damper voltage controller, which determines the voltage that must be supplied to the damper in order to generate the desired control force. In the present study, a sliding mode algorithm is used to determine the desired optimal force, and the voltage controller commands the damper to produce that force. The clipped-optimal algorithm is used to find the command voltage supplied to the MR damper, which is regulated by a semi-active control law based on the sliding mode algorithm. The main objective of the study is to propose a robust semi-active control scheme that can effectively control the responses of the bridge under real earthquake ground motions.
A lumped mass model of the bridge is developed, and time history analysis is carried out by solving the governing equations of motion in state space form. The effectiveness of the MR dampers is studied through analytical simulations in which the bridge is subjected to real earthquake records. In this regard, it may be noted that the performance of controllers depends, to a great extent, on the characteristics of the input ground motions. Therefore, to study the robustness of the controller, its performance has been investigated for fourteen different earthquake ground motion records, chosen so that all possible characteristic variations are accommodated. Of these fourteen earthquakes, seven are near-field and seven are far-field; they are further divided by frequency content into low-frequency, medium-frequency, and high-frequency earthquakes. The responses of the controlled bridge are compared with those of the corresponding uncontrolled bridge (i.e., the bridge without any control devices). The results of the numerical study show that the sliding-mode-based semi-active control strategy can substantially reduce the seismic responses of the bridge, showing stable and robust performance for all the earthquakes.
Keywords: bridge, semi-active control, sliding mode control, MR damper
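The clipped-optimal voltage law described in the abstract is commonly written as v = V_max * H((f_c - f) * f), where f_c is the force demanded by the overall controller (here, the sliding mode algorithm), f is the measured damper force, and H is the Heaviside step function: full voltage is applied only when the damper force is smaller in magnitude than the desired force and acts in the same direction. A minimal sketch (function and variable names are our own, not from the paper):

```python
def clipped_optimal_voltage(f_desired, f_measured, v_max):
    """Clipped-optimal voltage law for an MR damper: v = v_max * H((f_c - f) * f).

    f_desired: force demanded by the system controller (e.g. sliding mode), N
    f_measured: force currently produced by the damper, N
    v_max: maximum command voltage of the damper, V
    """
    # Increase voltage only if the measured force must grow toward the desired
    # force AND both act in the same direction; otherwise command zero voltage.
    return v_max if (f_desired - f_measured) * f_measured > 0.0 else 0.0

# Desired force exceeds measured force in the same direction -> full voltage
print(clipped_optimal_voltage(100.0, 50.0, 10.0))   # 10.0
# Desired and measured forces in opposite directions -> zero voltage
print(clipped_optimal_voltage(-100.0, 50.0, 10.0))  # 0.0
```

In a simulation loop, this function would be called at each time step with the sliding mode controller's output and the damper force predicted by an MR damper model (e.g. a Bouc-Wen type model).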
Procedia PDF Downloads 124
363 Rumen Epithelium Development of Bovine Fetuses and Newborn Calves
Authors: Juliana Shimara Pires Ferrão, Letícia Palmeira Pinto, Francisco Palma Rennó, Francisco Javier Hernandez Blazquez
Abstract:
The ruminant stomach is a complex, multi-chambered organ. Although the true stomach (abomasum) is fully differentiated and functional at birth, the same does not occur with the rumen chamber: at birth, rumen papillae are small or nonexistent, and they only fully develop after weaning, during calf growth. Papillae development and ruminal epithelium specialization during fetal growth and at birth must be two interdependent processes that prepare the rumen to adapt to adult ruminant feeding. The microscopic study of the rumen epithelium in these early phases of life is important to understand how this structure prepares the rumen for the subsequent weaning process and its functional activation. Samples of ruminal mucosa of bovine fetuses (110 and 150 days old) and newborn calves were collected (dorsal and ventral portions) and processed for light and electron microscopy and immunohistochemistry. The basal cell layer of the stratified squamous epithelium present in the different ruminal portions of the fetuses was thicker than in the same portions of newborn calves. The superficial and intermediate epithelial layers of the 150-day-old fetuses were thicker than those found at the other two ages studied. At this age (150 days), dermal papillae begin to invade the intermediate epithelial layer, which gradually disappears in newborn calves. At birth, the ruminal papillae project from the epithelial surface, probably by regression of the epithelial cells (transitory cells) surrounding the dermal papillae. The PCNA cell proliferation index (%) was calculated for all epithelial samples. The 150-day-old fetuses showed increased cell proliferation in the basal cell layer (dorsal portion: 84.2%; ventral portion: 89.8%) compared with the other ages studied. Newborn calves showed an intermediate index (dorsal portion: 65.1%; ventral portion: 48.9%), whereas the 110-day-old fetuses had the lowest proliferation index (dorsal portion: 57.2%; ventral portion: 20.6%).
Regarding the transitory epithelium, the 110-day-old fetuses showed the lowest proliferation index (dorsal portion: 44.6%; ventral portion: 20.1%), the 150-day-old fetuses an intermediate index (dorsal portion: 57.5%; ventral portion: 71.1%), and newborn calves a higher index (dorsal portion: 75.1%; ventral portion: 19.6%). Under TEM, the 110- and 150-day-old fetuses presented a thicker, poorly organized basal cell layer, with large nuclei and dense cytoplasm. In newborn calves, the basal cell layer was more organized, with fewer layers, but typically similar in both regions of the rumen. In the transitory epithelium, the fetuses displayed larger cells than those found in newborn calves, with cytoplasm less electron-dense than that of the basal cells. The ruminal dorsal portion has an overall higher cell proliferation rate than the ventral portion; thus, we can infer that the dorsal portion may have higher cell activity than the ventral portion during ruminal development. Moreover, the basal cell layer is thicker in the 110- and 150-day-old fetuses than in the newborn calves. The much-reduced transitory epithelium at birth may have a structural support function for the developing dermal papillae: when it regresses or is sheared off, the papillae are “carved out” from the surrounding epithelial layer.
Keywords: bovine, calf, epithelium, fetus, hematoxylin-eosin, immunohistochemistry, TEM, rumen
Procedia PDF Downloads 387
362 Improving School Design through Diverse Stakeholder Participation in the Programming Phase
Authors: Doris C. C. K. Kowaltowski, Marcella S. Deliberador
Abstract:
The architectural design process, in general, is becoming more complex as new technical, social, environmental, and economic requirements are imposed. For school buildings, this scenario also holds. The quality of a school building depends on known design criteria and professional knowledge, as well as on feedback from building performance assessments. To attain high-performance school buildings, the design process should involve a multidisciplinary team, working through an integrated process, to ensure that the various specialists contribute to design solutions at an early stage. The participation of stakeholders is especially important at the programming phase, when the search for the most appropriate design solutions is underway. A multidisciplinary team should comprise specialists in education, design professionals, and consultants in fields such as environmental comfort and psychology, sustainability, and safety and security, as well as administrators, public officials, and neighbourhood representatives. Users, or potential users (teachers, parents, students, school officials, and staff), should be involved. User expectations must be guided, however, toward a proper understanding of how design responds to needs, to avoid disappointment. In this context, appropriate tools should be introduced to organize such diverse participants and ensure a rich, focused response to needs and a productive outcome of programming sessions. In this paper, the different stakeholders in a school design process are discussed in relation to their specific contributions, and a tool in the form of a card game is described that structures the design debates and ensures a comprehensive decision-making process. The game is based on design patterns for school architecture as found in the literature and is adapted to a specific reality: state-run public schools in São Paulo, Brazil.
In this state, school buildings are managed by a foundation called Fundação para o Desenvolvimento da Educação (FDE). FDE supervises new designs and is responsible for the maintenance of approximately 5,000 schools. The design process in this context was characterised, resulting in a recommendation to improve the programming phase. Card games can create a common environment to which all participants can relate and can therefore contribute to briefing debates on an equal footing. The cards of the game described here represent essential school design themes found in the literature. The tool was tested with stakeholder groups and with architecture students. In both situations, the game proved an efficient tool to stimulate school design discussions and to aid in the elaboration of a rich, focused, and thoughtful architectural program for a given demand. The game organizes the debates, and all participants are shown to contribute spontaneously, each in their own field of expertise, to the decision-making process. Although the game was based on a specific local school design process, it shows potential for other contexts, because its content draws on known facts, needs, and concepts of school design, which are global. A structured briefing phase with diverse stakeholder participation can enrich the design process and consequently improve the quality of school buildings.
Keywords: architectural program, design process, school building design, stakeholder
Procedia PDF Downloads 405
361 In vitro Evaluation of Immunogenic Properties of Oral Application of Rabies Virus Surface Glycoprotein Antigen Conjugated to Beta-Glucan Nanoparticles in a Mouse Model
Authors: Narges Bahmanyar, Masoud Ghorbani
Abstract:
Rabies is caused by several species of the genus Lyssavirus in the family Rhabdoviridae. The disease is a deadly encephalitis transmitted from warm-blooded animals to humans, and domestic and wild carnivores play the most crucial role in its transmission. The prevalence of rabies in poor areas of developing countries constantly poses a global threat to public health. According to the World Health Organization, approximately 60,000 people die from rabies yearly; of these deaths, 60% occur in the Middle East. Although rabies encephalitis remains incurable to date, awareness of the disease and the use of vaccines are the best ways to combat it. Even though effective vaccines are available, vaccine production and management to combat rabies involve high costs, and the increasing prevalence and discovery of new strains of rabies virus create a need for vaccines that are safe, effective, and as inexpensive as possible. One of the approaches considered to achieve this is the manufacture of recombinant rabies vaccines. Currently, rabies vaccines for livestock are available only as inactivated or live attenuated vaccines, and the inactivation process requires careful consideration. The rabies virus contains a negative-sense single-stranded RNA genome that encodes the five major structural genes (N, P, M, G, L) from 3′ to 5′. Rabies virus glycoprotein G, the major antigen, can elicit virus-neutralizing antibodies. The N antigen is another candidate for developing recombinant vaccines; however, because it lies within the ribonucleoprotein (RNP) complex of the virus, the possibility of genetic diversity across different geographical locations is very high. Glycoprotein G is structurally and antigenically more conserved than the other genes: conservation at the nucleotide-sequence level is about 90%, and at the amino acid level about 96%.
Recombinant (subunit) vaccines contain fragments of the pathogen's protein or polysaccharide that have been carefully studied to determine which molecules elicit the strongest and most effective immune response. These vaccines minimize the risk of side effects by limiting the immune system's exposure to the pathogen. They are relatively inexpensive, easy to produce, and more stable than vaccines containing whole viruses or bacteria. Their drawback is that the pathogenic subunits may elicit a weak immune response or may be degraded before they reach the immune cells, a problem that nanoparticles suitable for use as adjuvants can overcome. Among these, biodegradable nanoparticles with functionalized surfaces are good candidates as vaccine adjuvants. In this study, we intend to use beta-glucan nanoparticles as the adjuvant. The surface glycoprotein of the rabies virus (G) is responsible for recognizing and binding the virus to the target cell; it is the major protein in the structure of the virus and induces an antibody response in the host. We therefore intend to use rabies virus surface glycoprotein conjugated to beta-glucan nanoparticles to produce a vaccine.
Keywords: rabies, vaccines, beta-glucan, nanoparticles, adjuvant, recombinant protein
Procedia PDF Downloads 173
360 Estimation of State of Charge, State of Health and Power Status for the Li-Ion Battery On-Board Vehicle
Authors: S. Sabatino, V. Calderaro, V. Galdi, G. Graber, L. Ippolito
Abstract:
Climate change is a rapidly growing global threat caused mainly by increased emissions of carbon dioxide (CO₂) into the atmosphere. These emissions come from multiple sources, including industry, power generation, and the transport sector. The need to tackle climate change and reduce CO₂ emissions is indisputable. A crucial solution for decarbonizing the transport sector is the adoption of electric vehicles (EVs). These vehicles use lithium-ion (Li-Ion) batteries as an energy source, making them highly efficient and low in direct emissions. However, Li-Ion batteries are not without problems, including the risk of overheating and performance degradation. To ensure their safety and longevity, it is essential to use a battery management system (BMS). The BMS constantly monitors battery status and adjusts temperature and cell balance, ensuring optimal performance and preventing dangerous situations. Based on this monitoring, it can also manage the battery optimally to extend its life. Among the parameters monitored by the BMS, the main ones are State of Charge (SoC), State of Health (SoH), and State of Power (SoP). These parameters can be estimated in two ways: offline, using benchtop batteries tested in the laboratory, or online, using batteries installed in moving vehicles. Online estimation is the preferred approach because it captures real-time data from batteries operating in real-life situations, such as everyday EV use. Actual battery usage conditions are highly variable: moving vehicles are exposed to a wide range of factors, including temperature variations, different driving styles, and complex charge/discharge cycles. This variability is difficult to replicate in a controlled laboratory environment and can greatly affect performance and battery life. Online estimation captures this variety of conditions, providing a more accurate assessment of battery behavior in real-world situations.
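As a rough illustration of two of the quantities a BMS tracks (not the hybrid estimators proposed in this study), SoC can be followed by coulomb counting (integrating the measured current over time) and SoH can be expressed as the ratio of measured to rated capacity. The cell parameters below are hypothetical.

```python
def soc_coulomb_counting(soc0, current_a, dt_s, capacity_ah):
    """Track SoC by integrating current samples (positive = discharge)."""
    soc = soc0
    trace = []
    for i in current_a:
        # Convert amp-seconds to amp-hours, normalize by rated capacity.
        soc -= (i * dt_s / 3600.0) / capacity_ah
        trace.append(soc)
    return trace

def soh_from_capacity(measured_ah, rated_ah):
    """SoH as the fraction of rated capacity the cell can still deliver."""
    return measured_ah / rated_ah

# Hypothetical cell: 2 Ah rated, discharged at 1 A for one hour, sampled every minute.
trace = soc_coulomb_counting(1.0, [1.0] * 60, 60.0, 2.0)
print(f"SoC after 1 h: {trace[-1]:.2f}")   # half the capacity consumed
print(f"SoH example: {soh_from_capacity(1.8, 2.0):.2f}")
```

Pure coulomb counting drifts with sensor bias, which is one reason practical BMS implementations combine it with model-based or data-driven correction, as the abstract's hybrid approach does.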
In this article, a hybrid approach based on a neural network and a statistical method is proposed for real-time estimation of the SoC, SoH, and SoP parameters of interest. These parameters are estimated from the analysis of a one-day driving profile of an electric vehicle, assumed to be divided into the following four phases: (i) partial discharge (SoC 100% - SoC 50%), (ii) partial charge (SoC 50% - SoC 80%), (iii) deep discharge (SoC 80% - SoC 30%), and (iv) full charge (SoC 30% - SoC 100%). The neural network predicts the values of ohmic resistance and incremental capacity, while the statistical method is used to estimate the parameters of interest. This reduces the complexity of the model and improves its prediction accuracy. The effectiveness of the proposed model is evaluated by analyzing its performance in terms of root mean square error (RMSE) and mean absolute percentage error (MAPE) and comparing it with a reference method from the literature.
Keywords: electric vehicle, Li-Ion battery, BMS, state-of-charge, state-of-health, state-of-power, artificial neural networks
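The two evaluation metrics named above can be sketched in a few lines of Python. The observed and estimated SoC values below are hypothetical placeholders at the four phase endpoints, not results from the study.

```python
import math

def rmse(obs, sim):
    """Root mean square error between observed and simulated values."""
    return math.sqrt(sum((o - s) ** 2 for o, s in zip(obs, sim)) / len(obs))

def mape(obs, sim):
    """Mean absolute percentage error; observed values must be nonzero."""
    return 100.0 * sum(abs(o - s) / abs(o) for o, s in zip(obs, sim)) / len(obs)

# Hypothetical observed vs. estimated SoC at four checkpoints.
observed  = [1.00, 0.50, 0.80, 0.30]
estimated = [0.98, 0.52, 0.78, 0.31]
print(f"RMSE: {rmse(observed, estimated):.4f}")
print(f"MAPE: {mape(observed, estimated):.2f}%")
```

Note that MAPE is undefined when an observed value is zero, so fully discharged reference points would need special handling.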
Procedia PDF Downloads 67