Search results for: three angle complex rotation
1948 A Methodology to Integrate Data in the Company Based on the Semantic Standard in the Context of Industry 4.0
Authors: Chang Qin, Daham Mustafa, Abderrahmane Khiat, Pierre Bienert, Paulo Zanini
Abstract:
Nowadays, companies face many challenges in the process of digital transformation, which can be a complex and costly undertaking. Digital transformation involves the collection and analysis of large amounts of data, which creates challenges around data management and governance. Companies are also challenged to integrate data from multiple systems and technologies. Despite these pains, companies still pursue digitalization because, by embracing advanced technologies, they can improve efficiency, quality, decision-making, and customer experience while also creating new business models and revenue streams. This paper focuses on the issue of data being stored in data silos with different schemas and structures. Conventional approaches to addressing this issue rely on data warehousing, data integration tools, data standardization, and business intelligence tools. However, these approaches primarily focus on the grammar and structure of the data and neglect the importance of semantic modeling and semantic standardization, which are essential for achieving data interoperability. In this work, the challenge of data silos in Industry 4.0 is addressed by developing a semantic modeling approach compliant with Asset Administration Shell (AAS) models, an efficient standard for communication in Industry 4.0. The paper highlights how our approach facilitates the data mapping process and semantic lifting according to existing industry standards such as ECLASS and other industrial dictionaries. It also incorporates Asset Administration Shell technology to model and map the company's data and utilizes a knowledge graph for data storage and exploration.
Keywords: data interoperability in industry 4.0, digital integration, industrial dictionary, semantic modeling
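A minimal sketch of the semantic-lifting idea described above, assuming rdflib; the namespaces and property names are illustrative placeholders, not actual ECLASS or AAS identifiers. One value from a siloed table is expressed as triples against a dictionary property and stored in a queryable knowledge graph.

```python
# Illustrative semantic lifting of one siloed data point into a knowledge graph.
# Namespaces and property names are placeholders, not real ECLASS/AAS IRIs.
from rdflib import Graph, Namespace, Literal

PLANT = Namespace("http://example.org/plant/")
DICT = Namespace("http://example.org/dictionary/")

g = Graph()
# legacy record: sensor "S42" reports a temperature of 23.5
g.add((PLANT["asset/S42"], DICT["property/temperature"], Literal(23.5)))
g.add((PLANT["asset/S42"], DICT["property/assetClass"], Literal("TemperatureSensor")))

print(g.serialize(format="turtle"))   # the graph can now be queried across former silos
```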
Procedia PDF Downloads 94
1947 Graph Neural Network-Based Classification for Disease Prediction in Health Care Heterogeneous Data Structures of Electronic Health Record
Authors: Raghavi C. Janaswamy
Abstract:
In the healthcare sector, heterogeneous data elements such as patients, diagnoses, symptoms, conditions, observation text from physician notes, and prescriptions form the essentials of the Electronic Health Record (EHR). The data, in the form of clear text and images, is stored or processed in a relational format in most systems. However, the intrinsic structure restrictions and complex joins of relational databases limit their widespread utility. In this regard, the design and development of realistic mappings and deep connections as real-time objects offer unparalleled advantages. Herein, a graph neural network-based classification of EHR data has been developed. Patient conditions have been predicted as a node classification task using graph-based open-source EHR data from the Synthea database, stored in TigerGraph. The Synthea dataset is leveraged because it closely represents real-world data and is voluminous. The graph model is built from the heterogeneous EHR data using Python modules: pyTigerGraph to retrieve nodes and edges from the TigerGraph database, PyTorch to tensorize the nodes and edges, and PyTorch Geometric (PyG) to train the Graph Neural Network (GNN). Self-supervised learning techniques with autoencoders are adopted to generate the node embeddings, which are then used to perform the node classification. The model predicts patient conditions ranging from common to rare situations. The outcome is expected to open up opportunities for data querying toward better predictions and accuracy.
Keywords: electronic health record, graph neural network, heterogeneous data, prediction
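As a rough illustration of the node-classification step (a minimal sketch, not the authors' pipeline; it assumes the graph has already been exported from TigerGraph into a torch_geometric.data.Data object with node features, labels, and a training mask):

```python
# Minimal PyG node-classification sketch (illustrative, not the authors' pipeline).
# Assumes a torch_geometric.data.Data object `data` with data.x, data.edge_index,
# data.y and a boolean data.train_mask, e.g. built from a pyTigerGraph export.
import torch
import torch.nn.functional as F
from torch_geometric.nn import GCNConv

class ConditionClassifier(torch.nn.Module):
    def __init__(self, num_features, hidden, num_classes):
        super().__init__()
        self.conv1 = GCNConv(num_features, hidden)
        self.conv2 = GCNConv(hidden, num_classes)

    def forward(self, x, edge_index):
        x = F.relu(self.conv1(x, edge_index))             # message-passing layer 1
        x = F.dropout(x, p=0.5, training=self.training)
        return self.conv2(x, edge_index)                  # per-node class logits

def train(data, num_classes, epochs=200):
    model = ConditionClassifier(data.num_node_features, 64, num_classes)
    optimizer = torch.optim.Adam(model.parameters(), lr=0.01, weight_decay=5e-4)
    for _ in range(epochs):
        model.train()
        optimizer.zero_grad()
        out = model(data.x, data.edge_index)
        loss = F.cross_entropy(out[data.train_mask], data.y[data.train_mask])
        loss.backward()
        optimizer.step()
    return model
```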
Procedia PDF Downloads 87
1946 An Ensemble System of Classifiers for Computer-Aided Volcano Monitoring
Authors: Flavio Cannavo
Abstract:
Continuous evaluation of the status of potentially hazardous volcanoes plays a key role for civil protection purposes. Monitoring volcanic activity, especially energetic paroxysms that usually come with tephra emissions, is crucial not only because of the exposure of the local population but also for airline traffic. Presently, real-time surveillance of most volcanoes worldwide is essentially delegated to one or more human experts in volcanology, who interpret data coming from different kinds of monitoring networks. Unfavorably, the high nonlinearity of the complex and coupled volcanic dynamics leads to a large variety of different volcanic behaviors. Moreover, continuously measured parameters (e.g., seismic, deformation, infrasonic, and geochemical signals) are often not able to fully explain the ongoing phenomenon, thus making fast volcano state assessment a very puzzling task for the personnel on duty in the control rooms. With the aim of aiding the personnel on duty in volcano surveillance, here we introduce a system based on an ensemble of data-driven classifiers that automatically infers the ongoing volcano status from all the available kinds of measurements. The system consists of a heterogeneous set of independent classifiers, each one built with its own data and algorithm. Each classifier gives an output about the volcanic status. The ensemble technique weights the individual classifier outputs and combines all the classifications into a single status that maximizes the performance. We tested the model on the Mt. Etna (Italy) case study by considering a long record of multivariate data from 2011 to 2015 and cross-validated it. Results indicate that the proposed model is effective and of great value for decision-making purposes.
Keywords: Bayesian networks, expert system, mount Etna, volcano monitoring
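A minimal sketch of the weighted-ensemble idea (an illustration, not the authors' implementation; the monitoring streams, member algorithms, and weights below are placeholders):

```python
# Illustrative weighted ensemble: each classifier is trained on its own monitoring
# stream and the class probabilities are combined with performance-based weights.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.naive_bayes import GaussianNB

def fit_members(streams, y):
    """streams: dict name -> feature matrix (seismic, deformation, ...); y: status labels."""
    members = {
        "seismic": RandomForestClassifier(n_estimators=200, random_state=0),
        "deformation": LogisticRegression(max_iter=1000),
        "geochemical": GaussianNB(),
    }
    for name, clf in members.items():
        clf.fit(streams[name], y)
    return members

def ensemble_status(members, streams, weights):
    """Weighted average of per-classifier probabilities; weights could be chosen
    (e.g. by cross-validated accuracy) to maximize ensemble performance."""
    probs = [weights[name] * members[name].predict_proba(streams[name])
             for name in members]
    combined = np.sum(probs, axis=0) / sum(weights.values())
    return combined.argmax(axis=1)   # inferred volcano status per time step
```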
Procedia PDF Downloads 247
1945 Consumer Preferences towards Sorbets: A Questionnaire Study
Authors: Kinga Topolska, Agnieszka Filipiak-Florkiewicz, Adam Florkiewicz, Daria Chechelska, Iwona Cieślik, Ewa Cieślik
Abstract:
Food choice is a complex human behaviour that is influenced by many interrelating factors. It is important to understand what consumers really want to eat. Nowadays, a growing popularity of frozen desserts is observed. Among them, sorbets are of special interest. They are made primarily of fruit juice or fruit purée, water, and sugar. A questionnaire study was done to evaluate consumer preferences towards sorbets. A hundred respondents were included in the study. The respondents answered questions concerning, inter alia, their favourite taste of sorbet, additional ingredients (pieces of fruit, nuts, etc.), the reason for choosing the product, and their opinion about potentially purchasing, or not, a new product. Women, more frequently than men, indicated strawberry sorbet or a product based on citrus fruits as their favourite. In turn, 14% of men chose the apple taste. Pieces of chocolate were chosen by most respondents. Men, more often than women, regarded raisins, alcohol, and nuts as the most desirable additional ingredients of sorbets. Candied fruits and spices were indicated more frequently by women. Most respondents indicated taste as the major reason for buying sorbet. In turn, for 20% of women the most important determinant was care for their figure. It was observed that more than half of the women regarded sorbets as healthier than traditional ice cream. Answering the question 'If you had the opportunity to try a new sorbet, containing an ingredient with proven healthy properties, would you buy it?', significantly more men than women answered 'yes, because I like novelty'. Meanwhile, for 14% of respondents (independently of gender) it would be only a publicity stunt. Knowing what consumers desire when selecting a product is very important information for designing and offering them a new one. Sorbets could be an interesting alternative to ice cream.
Keywords: consumer, preferences, sorbets, questionnaire study
Procedia PDF Downloads 286
1944 Binding Mechanism of Synthesized 5β-Dihydrocortisol and 5β-Dihydrocortisol Acetate with Human Serum Albumin to Understand Their Role in Breast Cancer
Authors: Monika Kallubai, Shreya Dubey, Rajagopal Subramanyam
Abstract:
This study concerns the biological interactions of synthesized 5β-dihydrocortisol (Dhc) and 5β-dihydrocortisol acetate (DhcA) molecules with the carrier protein Human Serum Albumin (HSA). The cytotoxicity study was performed on a breast cancer cell line (MCF-7) and a normal human embryonic kidney cell line (HEK293); the IC50 values for MCF-7 cells were 28 and 25 µM, respectively, whereas no toxicity in terms of cell viability was observed with the HEK293 cell line. Further experiments showed that Dhc and DhcA induced 35.6% and 37.7% early apoptotic cells and 2.5% and 2.9% late apoptotic cells, respectively. Morphological observation of cell death through the TUNEL assay revealed that Dhc and DhcA induced apoptosis in MCF-7 cells. The quenching in the HSA–Dhc and HSA–DhcA complexes was identified as static, the binding constants (K) were 4.7±0.03×10⁴ M⁻¹ and 3.9±0.05×10⁴ M⁻¹, and the binding free energies were found to be -6.4 and -6.16 kcal/mol, respectively. The displacement studies confirmed that lidocaine (1.4±0.05×10⁴ M⁻¹) displaced Dhc and phenylbutazone (1.5±0.05×10⁴ M⁻¹) displaced DhcA, which indicates that domain I and domain II are the binding sites for Dhc and DhcA, respectively. Further, CD results revealed that the secondary structure of HSA was altered in the presence of Dhc and DhcA. Furthermore, atomic force microscopy and transmission electron microscopy showed that dimensions such as the height and molecular size of the HSA–Dhc and HSA–DhcA complexes were larger compared to HSA alone. Detailed analysis through molecular dynamics simulations also supported the greater stability of the HSA–Dhc and HSA–DhcA complexes, and root-mean-square fluctuation identified the binding site of Dhc as domain IB and that of DhcA as domain IIA. This information is valuable for the further development of steroid derivatives with improved pharmacological significance as novel anti-cancer drugs.
Keywords: apoptosis, dihydrocortisol, fluorescence quenching, protein conformations
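As a quick consistency check (not part of the paper), the reported binding free energies follow from the binding constants via dG = -RT ln K, assuming T of roughly 298 K:

```python
# Relating the reported binding constants to binding free energies, dG = -RT ln K.
# T = 298 K is an assumption; the paper does not state the temperature here.
import math

R = 1.987e-3   # kcal mol^-1 K^-1
T = 298.0      # K (assumed)

for name, K in [("HSA-Dhc", 4.7e4), ("HSA-DhcA", 3.9e4)]:
    dG = -R * T * math.log(K)
    print(f"{name}: K = {K:.1e} M^-1 -> dG ~ {dG:.2f} kcal/mol")
# Output is close to the reported -6.4 and -6.16 kcal/mol values.
```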
Procedia PDF Downloads 132
1943 Improvement of Artemisinin Production by P. indica in Hairy Root Cultures of A. annua L.
Authors: Seema Ahlawat, Parul Saxena, Malik Zainul Abdin
Abstract:
Malaria is a major health problem in many developing countries. The parasite responsible for the vast majority of fatal malaria infections is Plasmodium falciparum. Unfortunately, most Plasmodium strains, including P. falciparum, have become resistant to most antimalarials, including chloroquine, mefloquine, etc. To combat this problem, the WHO has recommended the use of artemisinin and its derivatives in artemisinin-based combination therapy (ACT). Due to this use, the global demand for artemisinin is increasing continuously. However, the relatively low yield of artemisinin in A. annua L. plants and the unavailability of economically viable synthetic protocols are the major bottlenecks for its commercial production and clinical use; chemical synthesis of artemisinin is very complex and uneconomical. A hairy root system, using the Agrobacterium rhizogenes LBA 9402 strain to enhance the production of artemisinin in A. annua L., was developed in our laboratory. The transgenic nature of the hairy root lines and the copy number of the transgene (rol B) were confirmed using PCR and Southern blot analyses, respectively. The effect of different concentrations of Piriformospora indica on artemisinin production in hairy root cultures was evaluated. 3% P. indica resulted in a 1.97-fold increase in artemisinin production in comparison to control cultures. The effect of P. indica on artemisinin production was positively correlated with the regulatory genes of the MVA, MEP, and artemisinin biosynthetic pathways, viz. hmgr, ads, cyp71av1, aldh1, dxs, dxr, and dbr2, in hairy root cultures of A. annua L. Mass-scale cultivation of A. annua L. hairy roots by plant tissue culture technology may be an alternative route for the production of artemisinin. A comprehensive investigation of the hairy root system of A. annua L. would help in developing a viable process for the production of artemisinin. The efficiency of the scaling-up systems still needs optimization before industrial exploitation becomes viable.
Keywords: A. annua L., artemisinin, hairy root cultures, malaria
Procedia PDF Downloads 416
1942 Application of RayMan Model in Quantifying the Impacts of the Built Environment and Surface Properties on Surrounding Temperature
Authors: Maryam Karimi, Rouzbeh Nazari
Abstract:
Introduction: Understanding thermal distribution in the micro-urban climate has become necessary for urban planners and designers due to the impact of complex micro-scale features of the Urban Heat Island (UHI) on the built environment and public health. Hence, understanding the interrelation between urban components and the thermal pattern can assist planners in properly adding vegetation to the built environment, which can minimize the UHI impact. To characterize the need for urban green infrastructure (UGI) through better urban planning, this study proposes the use of the RayMan model to measure the impact of air quality and increased temperature based on urban morphology in selected metropolitan cities. This project measures the impact of the built environment for urban and regional planning using human biometeorological evaluations (mean radiant temperature, Tmrt). Methods: We utilized the RayMan model to estimate Tmrt in an urban environment, incorporating the location and height of buildings and trees, as a supplemental tool in urban planning and street design. The estimated Tmrt values are compared with existing surface and air temperature data to find the actual temperature felt by pedestrians. Results: Our current results suggest a strong relationship between the sky-view factor (SVF) and increased surface temperature in megacities based on current urban morphology. Conclusion: This study will help quantify the impacts of the built environment and surface properties on surrounding temperature, identify priority urban neighborhoods by analyzing Tmrt and air quality data at the pedestrian level, and characterize the need for urban green infrastructure and its cooling potential.
Keywords: built environment, urban planning, urban cooling, extreme heat
Procedia PDF Downloads 124
1941 Investigating Elements of Identity of Traditional Neighborhoods in Isfahan and Using These Elements in the Design of Modern Neighborhoods
Authors: Saman Keshavarzi
Abstract:
The process of planning, designing, and building neighborhoods is a complex and multidimensional part of urban planning. Understanding the elements that give a neighborhood a sense of identity can lead to successful city planning and result in a cohesive and functional community where people feel a sense of belonging. These factors are important in ensuring that the needs of the urban population are met so that people live in a safe, pleasant, and healthy society. This research paper aims to identify the elements of identity of traditional neighborhoods in Isfahan and analyzes ways of using these elements in the design of modern neighborhoods to increase social interaction between communities and the cultural reunification of people. The neighborhood of Jolfa in Isfahan has a unique socio-cultural identity, as it dates back to the Safavid Dynasty of the 16th century and most of its inhabitants are Christian Armenians, a religious minority. The elements of the identity of Jolfa were analyzed through the following research methods: field observations, distribution of questionnaires, and qualitative analysis. The basic methodology used to further understand the Jolfa neighborhood and deconstruct the identity image that residents associate with their respective neighborhoods was qualitative research, carried out through questionnaires that respondents filled out in response to a series of research questions. From these qualitative data, the major finding was that traditional neighborhoods with elements of identity embedded in them have closer-knit communities whose residents have strong societal ties. This area of study in urban planning is vital to ensuring that new neighborhoods are built with concepts of social cohesion, community, and inclusion in mind, as these are what lead to strong, connected, and prosperous societies.
Keywords: development, housing, identity, neighborhood, policy, urbanization
Procedia PDF Downloads 174
1940 Stabilization of Pb, Cr, Cd, Cu and Zn in Solid Waste and Sludge Pyrolysis by Modified Vermiculite
Authors: Yuxuan Yang, Zhaoping Zhong
Abstract:
Municipal solid waste and sludge are important sources of waste energy, and their proper disposal is of great importance. Pyrolysis can fully decompose solid wastes and sludge, and the pyrolysis products (charcoal, oil, and gas) have important recovery value. Due to the complex composition of solid wastes and sludge, the pyrolysis process at high temperatures is prone to heavy metal emissions, which are harmful to humans and the environment and reduce the safety of the pyrolysis products. In this paper, heavy metal emissions during pyrolysis of municipal sewage sludge, paper mill sludge, municipal domestic waste, and aged refuse at 450-650°C were investigated, and the emissions and hazards of heavy metals (Pb, Cr, Cd, Cu, and Zn) were effectively reduced by adding modified vermiculite as an additive. The vermiculite was modified by intercalation with cetyltrimethylammonium bromide, which more than doubled the original layer spacing of the vermiculite. Afterward, the intercalated vermiculite was made into vermiculite flakes by exfoliation. After that, the expansion rate of the vermiculite flakes was increased by Mg2+ modification and thermal activation. The expanded vermiculite flakes were then acidified to improve the textural characteristics of the vermiculite. The modified vermiculite was analysed by XRD, FT-IR, BET, and SEM to clarify the modification effect. The incorporation of modified vermiculite resulted in more than 80% retention of all heavy metals at 450°C. Cr, Cu, and Zn were better retained than Pb and Cd. The incorporation of modified vermiculite effectively reduced the risk posed by the heavy metals, and the risks were low for Pb, Cr, Cu, and Zn. The toxicity of all heavy metals was greatly reduced by the incorporation of modified vermiculite, and the speciation of the heavy metals shifted from the exchangeable and acid-soluble (F1) and reducible (F2) fractions to the oxidizable (F3) and residual (F4) fractions. In addition, the increase in temperature favored the stabilization of heavy metal forms. This study provides new insight into the cleaner use of energy and the safe management of solid waste.
Keywords: heavy metal, pyrolysis, vermiculite, solid waste
Procedia PDF Downloads 70
1939 Reliability Qualification Test Plan Derivation Method for Weibull Distributed Products
Authors: Ping Jiang, Yunyan Xing, Dian Zhang, Bo Guo
Abstract:
The reliability qualification test (RQT) is widely used in product development to qualify whether a product meets predetermined reliability requirements, which are mainly described in terms of reliability indices, for example, MTBF (Mean Time Between Failures). In engineering practice, RQT plans are usually taken from standards such as MIL-STD-781 or GJB899A-2009. However, these conventional RQT plans are not preferred, as they often require long test times or carry high risks for both producer and consumer, because the methods in the standards only use the test data of the product itself. The standards also usually assume that the product is exponentially distributed, which is not suitable for complex products other than electronics. It is therefore desirable to develop an RQT plan derivation method that safely shortens the test time while keeping the two risks under control. To this end, an RQT plan derivation method is developed for products whose lifetime follows a Weibull distribution. The merit of the method is that expert judgment is taken into account. This is implemented by applying the Bayesian method, which translates the expert judgment into prior information on product reliability. The producer's risk and the consumer's risk are then calculated accordingly. The procedures to derive RQT plans are also proposed in this paper. As extra information and expert judgment are added to the derivation, the derived test plans have the potential to shorten the required test time and to keep both risks satisfactorily low, compared with conventional test plans. A case study is provided to show that, when expert judgment is used in deriving product test plans, the proposed method is capable of finding ideal test plans that not only reduce the two risks but also shorten the required test time.
Keywords: expert judgment, reliability qualification test, test plan derivation, producer’s risk, consumer’s risk
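A minimal sketch of how the two risks can be evaluated for a candidate plan, assuming a zero-failure (success-run) acceptance rule and Weibull lifetimes; the acceptance rule, all parameter values, and the discretized prior below are illustrative assumptions, not the paper's derivation:

```python
# Assumed zero-failure Weibull test plan: n units tested for time T, accept if no
# failures occur. All parameter values below are hypothetical.
import math

def accept_prob(n, T, eta, beta):
    """P(zero failures among n units tested for T hours | Weibull(shape=beta, scale=eta))."""
    return math.exp(-n * (T / eta) ** beta)

def risks(n, T, beta, eta_good, eta_bad):
    producer = 1.0 - accept_prob(n, T, eta_good, beta)   # rejecting a conforming product
    consumer = accept_prob(n, T, eta_bad, beta)          # accepting a non-conforming product
    return producer, consumer

def accept_prob_with_prior(n, T, beta, etas, weights):
    """Expert judgment folded in as a discretized prior over the scale parameter."""
    return sum(w * accept_prob(n, T, e, beta) for e, w in zip(etas, weights))

print(risks(n=10, T=150.0, beta=1.5, eta_good=2000.0, eta_bad=800.0))
print(accept_prob_with_prior(n=10, T=150.0, beta=1.5,
                             etas=[1200.0, 1800.0, 2400.0], weights=[0.2, 0.5, 0.3]))
```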
Procedia PDF Downloads 141
1938 Relevance of Reliability Approaches to Predict Mould Growth in Biobased Building Materials
Authors: Lucile Soudani, Hervé Illy, Rémi Bouchié
Abstract:
Mould growth in living environments has been widely reported for decades all over the world. A higher level of moisture in housing can lead to building degradation, chemical emissions from construction materials, and enhanced mould growth within the envelope elements or on internal surfaces. Moreover, a significant number of studies have highlighted the link between mould presence and the prevalence of respiratory diseases. In recent years, the proportion of bio-based materials used in construction has been increasing, as they are seen as an effective lever to reduce the environmental impact of the building sector. However, bio-based materials are also hygroscopic materials: when in contact with the moist air of the surrounding environment, their porous structures enable a better capture of water molecules, thus providing a more suitable substrate for mould growth. Many studies have been conducted to develop reliable models to predict mould appearance, growth, and decay over many building materials and external exposures. Some of them require information about temperature and/or relative humidity, exposure times, material sensitivities, etc. Nevertheless, several studies have highlighted a large disparity between predictions and actual mould growth, in experimental settings as well as in occupied buildings. The difficulty of accounting for the influence of all parameters appears to be the most challenging issue. As many complex phenomena take place simultaneously, a preliminary study has been carried out to evaluate the feasibility of adopting a reliability approach rather than a deterministic approach. Both epistemic and random uncertainties were identified, specifically for the prediction of mould appearance and growth. Several studies published in the literature, from the agri-food and automotive sectors, were selected and analysed, as the methodologies they deploy appeared promising.
Keywords: bio-based materials, mould growth, numerical prediction, reliability approach
Procedia PDF Downloads 48
1937 Practical Ways to Acquire the Arabic Language through Electronic Means
Authors: Hondozi Jahja
Abstract:
There is an obvious need to learn the Arabic language and to teach it to speakers of other languages through new curricula. The idea is to bridge the gap between theory and practice. To that end, we have sought to offer some means of help to master the Arabic language, in addition to our efforts to apply these means, enriching the culture of the student and developing his vocabulary. There is no doubt that taking care of the practical aspect of grammar was our constant goal, and this particular aspect is what builds the student's positive values, refines his taste, and develops his language. In addressing these issues, we have adopted a school-based approach based primarily on the active and positive participation of the student. The theoretical linguistic issues are, in our opinion, not a primary goal; the goal is for students to use them through speaking and applying them. Among the objectives of this research is to establish the basic language skills of the students using new means that help the student to acquire these skills and apply them in various subjects of interest for his progress and development. Unfortunately, some of our students consider grammar 'difficult', 'complex', and 'heavy' in itself. This is one of the obstacles that stand in the way of their desired results, and as a consequence they end up talking, or mumbling, about the difficulties they face in applying those rules. Therefore, some of our students finish their university studies and are unable to express what they feel using the language correctly. For this purpose, we have sought in this research to follow a new integrated approach: to study the grammar of the language through modern means that consolidate the principle of functional language, in which rules serve to control the tongue and to form linguistic expressions properly. This research is the result of practical experience as a teacher of Arabic for non-native speakers at the 'Hassan Pristina' University in Pristina, the capital of Kosovo, and at the Qatar Training Center since its establishment in 2012.
Keywords: Arabic, applied methods, acquire, learning
Procedia PDF Downloads 160
1936 [Keynote Talk]: Caught in the Tractorbeam of Larger Influences: The Filtration of Innovation in Education Technology Design
Authors: Justin D. Olmanson, Fitsum Abebe, Valerie Jones, Eric Kyle, Xianquan Liu, Katherine Robbins, Guieswende Rouamba
Abstract:
The history of education technology, and of designing, adapting, and adopting technologies for use in educational spaces, is nuanced, complex, and dynamic. Yet, despite a range of continually emerging technologies, the design and development process often yields results that appear quite similar in terms of affordances and interactions. Through this study we (1) verify the extent to which designs have been constrained, (2) consider what might account for it, and (3) offer a way forward in terms of how we might identify and strategically sidestep these influences, thereby increasing the diversity of our designs with a given technology or within a particular learning domain. We begin our inquiry from the perspective that a host of co-influencing elements, fields, and meta-narratives converge on the education technology design process to exert a tangible, often homogenizing effect on the resultant designs. We identify several elements that influence design in often implicit or unquestioned ways (e.g., curriculum, learning theory, economics, learning context, pedagogy), describe our methodology for identifying the elemental positionality embedded in a design, direct our analysis to a particular subset of technologies in the field of literacy, and unpack our findings. Our early analysis suggests that the majority of education technologies designed for or used in US public schools are heavily influenced by a handful of mainstream theories and meta-narratives. These findings have implications for how we approach the education technology design process, which we use to suggest alternative methods for designing and developing with emerging technologies. Our analytical process and reconceptualized design process hold the potential to diversify the ways emerging and established technologies get incorporated into our designs.
Keywords: curriculum, design, innovation, meta narratives
Procedia PDF Downloads 512
1935 Predicting Ecological Impacts of Sea-Level Change on Coastal Conservation Areas in India
Authors: Mohammad Zafar-ul Islam, Shaily Menon, Xingong Li, A. Townsend Peterson
Abstract:
In addition to the mounting empirical data on the direct implications of climate change for natural and human systems, evidence is increasing for other, indirect climate change phenomena such as sea-level rise. Rising sea levels and the associated marine intrusion into terrestrial environments are predicted to be among the most serious eventual consequences of climate change. The many complex and interacting factors affecting sea levels create considerable uncertainty in sea-level rise projections: conservative estimates are on the order of 0.5-1.0 m globally, while other estimates are much higher, approaching 6 m. Marine intrusion associated with 1-6 m of sea-level rise will severely impact species and habitats in coastal ecosystems. Examining the areas most vulnerable to such impacts may allow the design of appropriate adaptation and mitigation strategies. We present an overview of the potential effects of 1 m and 6 m of sea-level rise on coastal conservation areas in the Indian Subcontinent. In particular, we examine the projected magnitude of areal losses in relevant biogeographic zones, ecoregions, protected areas (PAs), and Important Bird Areas (IBAs). In addition, we provide a more detailed and quantitative analysis of the likely effects of marine intrusion on 22 coastal PAs and IBAs that provide critical habitat for birds in the form of breeding areas, migratory stopover sites, and overwintering habitats. Several coastal PAs and IBAs are predicted to experience losses of more than 50% to marine intrusion. We explore the consequences of such inundation levels on the species and habitat in these areas.
Keywords: sea-level change, coastal inundation, marine intrusion, biogeographic zones, ecoregions, protected areas, important bird areas, adaptation, mitigation
Procedia PDF Downloads 258
1934 Comparison of the Results of a Parkinson’s Holter Monitor with Patient Diaries, in Real Conditions of Use: A Sub-Analysis of the MoMoPa-EC Clinical Trial
Authors: Alejandro Rodríguez-Molinero, Carlos Pérez-López, Jorge Hernández-Vara, Àngels Bayes-Rusiñol, Juan Carlos Martínez-Castrillo, David A. Pérez-Martínez
Abstract:
Background: Monitoring motor symptoms in Parkinson's patients is often a complex and time-consuming task for clinicians, as Hauser's diaries are often poorly completed by patients. Recently, new automatic devices (the Parkinson's holter STAT-ON®) capable of monitoring patients' motor fluctuations have been developed. The MoMoPa-EC clinical trial (NCT04176302) investigates which of the two methods produces better clinical results. In this sub-analysis, the concordance between both methods is analyzed. Methods: The MoMoPa-EC clinical trial will include 164 patients with moderate-to-severe Parkinson's disease and at least two hours a day of Off time. At the time of patient recruitment, all of them completed a seven-day motor fluctuation diary at home (Hauser's diary) while wearing the Parkinson's holter. This sub-analysis includes 71 patients with complete data for the purpose of this comparison. The intraclass correlation coefficient between the patient diary entries and the Parkinson's holter data was calculated in terms of time On, time Off, and time with dyskinesias. Results: The intraclass correlation coefficient of the two methods was 0.57 (95% CI: 0.3-0.74) for daily time in Off (%), 0.48 (95% CI: 0.14-0.68) for daily time in On (%), and 0.37 (95% CI: -0.04-0.62) for daily time with dyskinesias (%). Conclusions: The two methods show moderate agreement with each other. We will have to wait for the results of the MoMoPa-EC project to estimate which of them has the greatest clinical benefits. Acknowledgment: This work is supported by AbbVie S.L.U, the Instituto de Salud Carlos III [DTS17/00195], and the European Fund for Regional Development, 'A way to make Europe'.
Keywords: Parkinson, sensor, motor fluctuations, dyskinesia
Procedia PDF Downloads 234
1933 Distribution System Modelling: A Holistic Approach for Harmonic Studies
Authors: Stanislav Babaev, Vladimir Cuk, Sjef Cobben, Jan Desmet
Abstract:
The procedures for performing harmonic studies on medium-voltage distribution feeders have been relatively mature topics since the early 1980s. The efforts of electric power engineers and researchers were mainly focused on handling large harmonic non-linear loads connected sparsely at several buses of medium-voltage feeders. In order to assess the impact of these loads on the voltage quality of the distribution system, specific modeling and simulation strategies were proposed. These methodologies could deliver reasonable estimation accuracy given the requirements of minimal computational effort and reduced complexity. To uphold these requirements, certain analysis assumptions were made, which became de facto standards for harmonic analysis guidelines. Among others, typical assumptions include balanced study conditions and a negligible impact of the frequency-impedance characteristics of various power system components. In the latter, skin and proximity effects are usually omitted, and resistance and reactance values are modeled based on theoretical equations. Furthermore, the simplifications of the modelling routine have led to the commonly accepted practice of neglecting phase-angle diversity effects. This is mainly associated with the developed load models, which only in a handful of cases represent the complete harmonic behavior of a device or account for the harmonic interaction between grid harmonic voltages and harmonic currents. While these modelling practices were proven to be reasonably effective for medium-voltage levels, similar approaches have been adopted for low-voltage distribution systems. Given modern conditions, the massive increase in the use of residential electronic devices, the recent and ongoing boom of electric vehicles, and the large-scale installation of distributed solar power, the harmonics in current low-voltage grids are characterized by a high degree of variability and show sufficient diversity to produce a certain level of cancellation. It is obvious that new modelling algorithms that overcome the previously made assumptions have to be adopted. In this work, a simulation approach aimed at dealing with some of these typical assumptions is proposed. A practical low-voltage feeder is modeled in PowerFactory. In order to demonstrate the importance of the diversity effect and harmonic interaction, previously developed measurement-based models of a photovoltaic inverter and a battery charger are used as loads. A Python-based script that supplies a varying background voltage distortion profile and the associated harmonic current response of the loads is used as the core of the unbalanced simulation. Furthermore, the impact of the uncertainty of the feeder frequency-impedance characteristics on total harmonic distortion levels is shown, along with scenarios involving linear resistive loads, which further alter the impedance of the system. The comparative analysis demonstrates significant differences from cases in which all the assumptions are in place, and the results indicate that new modelling and simulation procedures need to be adopted for low-voltage distribution systems with high penetration of non-linear loads and renewable generation.
Keywords: electric power system, harmonic distortion, power quality, public low-voltage network, harmonic modelling
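To illustrate why neglecting phase-angle diversity matters (a toy example with made-up numbers, not data from the study): summing the 5th-harmonic currents of several loads arithmetically versus as complex phasors shows the cancellation that diversity produces.

```python
# Toy illustration (made-up numbers): aggregating the 5th-harmonic currents of
# several loads arithmetically versus as complex phasors.
import numpy as np

# magnitude (A) and phase angle (deg) of the 5th-harmonic current of each load
magnitudes = np.array([1.0, 0.8, 1.2, 0.9])
angles_deg = np.array([10.0, 95.0, 200.0, 280.0])

arithmetic_sum = magnitudes.sum()                            # diversity neglected
phasors = magnitudes * np.exp(1j * np.deg2rad(angles_deg))
vector_sum = abs(phasors.sum())                              # diversity included

print(f"arithmetic sum = {arithmetic_sum:.2f} A, phasor sum = {vector_sum:.2f} A")
# The phasor sum is substantially smaller, which is why measurement-based load
# models that include harmonic phase angles matter in low-voltage studies.
```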
Procedia PDF Downloads 162
1932 Advancing Urban Sustainability through Data-Driven Machine Learning Solutions
Authors: Nasim Eslamirad, Mahdi Rasoulinezhad, Francesco De Luca, Sadok Ben Yahia, Kimmo Sakari Lylykangas, Francesco Pilla
Abstract:
With ongoing urbanization, cities face increasing environmental challenges that impact human well-being. To tackle these issues, data-driven approaches in urban analysis have gained prominence, leveraging urban data to promote sustainability. Integrating machine learning techniques enables researchers to analyze and predict complex environmental phenomena such as Urban Heat Island (UHI) occurrences in urban areas. This paper demonstrates the implementation of a data-driven approach and interpretable machine learning algorithms, together with interpretability techniques, to conduct comprehensive data analyses for sustainable urban design. The developed framework and algorithms are demonstrated for Tallinn, Estonia, to develop sustainable urban strategies that mitigate urban heat waves. Geospatial data, preprocessed and labeled with UHI levels, are used to train various ML models; Logistic Regression emerged as the best-performing model based on the evaluation metrics and was used to derive a mathematical equation that separates areas with UHI effects from areas without them, providing insights into UHI occurrences based on buildings and urban features. The derived formula highlights the importance of building volume, height, area, and shape length in creating an urban environment with UHI impact. The data-driven approach and the derived equation inform mitigation strategies and sustainable urban development in Tallinn and offer valuable guidance for other locations with varying climates.
Keywords: data-driven approach, machine learning transparent models, interpretable machine learning models, urban heat island effect
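A minimal sketch of how such an equation can be read off a fitted logistic regression (the feature names and data below are placeholders, not the study's Tallinn dataset):

```python
# Fit a logistic regression on building/urban-form features labelled UHI / no-UHI,
# then read the fitted coefficients off as a classification equation.
# Feature names and data are placeholders for illustration only.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

features = ["building_volume", "building_height", "building_area", "shape_length"]
X = np.random.rand(500, len(features))      # placeholder geospatial features
y = (X @ np.array([1.5, 1.0, 0.8, 0.6]) + 0.1 * np.random.randn(500) > 2.0).astype(int)

model = make_pipeline(StandardScaler(), LogisticRegression())
model.fit(X, y)

coefs = model.named_steps["logisticregression"].coef_[0]
intercept = model.named_steps["logisticregression"].intercept_[0]
terms = " + ".join(f"{c:.2f}*{f}" for c, f in zip(coefs, features))
print(f"logit(P(UHI)) = {intercept:.2f} + {terms}")   # the 'derived equation'
```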
Procedia PDF Downloads 41
1931 Applications of Evolutionary Optimization Methods in Reinforcement Learning
Authors: Rahul Paul, Kedar Nath Das
Abstract:
The paradigm of Reinforcement Learning (RL) has become prominent for training intelligent agents to make decisions in environments that are both dynamic and uncertain. The primary objective of RL is to optimize the policy of an agent so as to maximize the cumulative reward it receives over a given period. Nevertheless, this optimization presents notable difficulties as a result of the inherent trade-off between exploration and exploitation, the presence of extensive state-action spaces, and the intricate nature of the dynamics involved. Evolutionary Optimization Methods (EOMs) have garnered considerable attention as a complementary approach to tackle these challenges, providing distinct capabilities for optimizing RL policies and value functions. The ongoing advancement of research in both RL and EOMs presents an opportunity for significant progress in autonomous decision-making systems. The convergence of these two fields has the potential to have a transformative impact on various domains of artificial intelligence (AI) applications. This article highlights the considerable influence of EOMs in enhancing the capabilities of RL. Taking advantage of evolutionary principles enables RL algorithms to traverse extensive action spaces effectively and discover optimal solutions within intricate environments. Moreover, this paper emphasizes practical implementations of EOMs in RL, specifically in areas such as robotic control, autonomous systems, inventory problems, and multi-agent scenarios. The article highlights the utilization of EOMs in enabling RL agents to adapt, evolve, and uncover proficient strategies for complex tasks that may pose challenges for conventional RL approaches.
Keywords: machine learning, reinforcement learning, loss function, optimization techniques, evolutionary optimization methods
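A compact sketch of the basic idea of evolutionary policy search (the "environment" here is a stand-in reward function, not an actual RL benchmark; the population size, noise level, and elite fraction are arbitrary choices):

```python
# Toy evolutionary policy search: perturb the policy parameters, evaluate each
# candidate's return, keep and recombine the elite. The reward is a placeholder.
import numpy as np

rng = np.random.default_rng(0)
DIM = 8                                       # parameters of a linear policy

def episode_return(theta):
    """Placeholder for running the policy in an environment and summing rewards."""
    target = np.linspace(-1.0, 1.0, DIM)
    return -np.sum((theta - target) ** 2)     # higher is better

def evolve(generations=100, pop_size=50, elite=10, sigma=0.3):
    theta = np.zeros(DIM)
    for _ in range(generations):
        candidates = theta + sigma * rng.standard_normal((pop_size, DIM))
        returns = np.array([episode_return(c) for c in candidates])
        best = candidates[np.argsort(returns)[-elite:]]   # keep the elite
        theta = best.mean(axis=0)                          # recombine
    return theta

print(episode_return(evolve()))
```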
Procedia PDF Downloads 81
1930 Predictive Modeling of Bridge Conditions Using Random Forest
Authors: Miral Selim, May Haggag, Ibrahim Abotaleb
Abstract:
The aging of transportation infrastructure presents significant challenges, particularly concerning the monitoring and maintenance of bridges. This study investigates the application of Random Forest algorithms for predictive modeling of bridge conditions, utilizing data from the US National Bridge Inventory (NBI). The research is significant as it aims to improve bridge management through data-driven insights that can enhance maintenance strategies and contribute to overall safety. Random Forest is chosen for its robustness, ability to handle complex, non-linear relationships among variables, and its effectiveness in feature importance evaluation. The study begins with comprehensive data collection and cleaning, followed by the identification of key variables influencing bridge condition ratings, including age, construction materials, environmental factors, and maintenance history. Random Forest is utilized to examine the relationships between these variables and the predicted bridge conditions. The dataset is divided into training and testing subsets to evaluate the model's performance. The findings demonstrate that the Random Forest model effectively enhances the understanding of factors affecting bridge conditions. By identifying bridges at greater risk of deterioration, the model facilitates proactive maintenance strategies, which can help avoid costly repairs and minimize service disruptions. Additionally, this research underscores the value of data-driven decision-making, enabling better resource allocation to prioritize maintenance efforts where they are most necessary. In summary, this study highlights the efficiency and applicability of Random Forest in predictive modeling for bridge management. Ultimately, these findings pave the way for more resilient and proactive management of bridge systems, ensuring their longevity and reliability for future use.
Keywords: data analysis, random forest, predictive modeling, bridge management
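A minimal sketch of this kind of workflow (the file name and column names are assumptions for illustration, not the exact NBI field codes used in the study):

```python
# Train a Random Forest on bridge attributes to predict condition ratings and
# rank feature importances. File and column names are hypothetical.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

df = pd.read_csv("nbi_bridges_clean.csv")            # hypothetical cleaned NBI extract
features = ["age", "material", "adt", "region", "reconstruction_year"]
X = pd.get_dummies(df[features], columns=["material", "region"])
y = df["condition_rating"]

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)
rf = RandomForestClassifier(n_estimators=300, random_state=42)
rf.fit(X_train, y_train)

print(classification_report(y_test, rf.predict(X_test)))
importances = pd.Series(rf.feature_importances_, index=X.columns).sort_values(ascending=False)
print(importances.head(10))                          # factors driving bridge condition
```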
Procedia PDF Downloads 24
1929 Classification System for Soft Tissue Injuries of Face: Bringing Objectiveness to Injury Severity
Authors: Garg Ramneesh, Uppal Sanjeev, Mittal Rajinder, Shah Sheerin, Jain Vikas, Singla Bhupinder
Abstract:
Introduction: Despite advances in trauma care, a classification system for soft tissue injuries of the face still needs to be objectively defined. Aim: To develop a classification system for soft tissue injuries of the face that is objective, easy to remember, reproducible, universally applicable, aids surgical management, and helps to build structured data that can be used in the future. Material and Methods: This classification system includes patients who need surgical management of facial injuries. Associated underlying bony fractures have been intentionally excluded. Depending upon the severity of the soft tissue injury, injuries can be graded from 0 to IV (0 - abrasions, I - lacerations, II - avulsion injuries with no skin loss, III - avulsion injuries with skin loss that would need graft or flap cover, and IV - complex injuries). Anatomically, the face has been divided into three zones (Zones 1/2/3), as per aesthetic subunits. Zone 1e stands for injury of the eyebrows; Zones 2a/b/c stand for the nose, upper eyelid, and lower eyelid, respectively; Zones 3a/b/c stand for the upper lip, lower lip, and cheek, respectively. The suffixes R and L stand for the right or left involved side, B for the presence of a foreign body such as glass or pellets, C for extensive contamination, and D for depth, which can be graded as D1/2/3 if the depth reaches fat, muscle, or bone, respectively. I stands for damage to the facial nerve or parotid duct. Results and conclusions: This classification system is easy to remember, clinically applicable, and would help in the standardization of surgical management of soft tissue injuries of the face. Certain inherent limitations of this classification system are the inability to classify sutured wounds, hematomas, and injuries along or against Langer's lines.
Keywords: soft tissue injuries, face, avulsion, classification
Procedia PDF Downloads 383
1928 Corrosion Resistance of 17-4 Precipitation Hardenable Stainless Steel Fabricated by Selective Laser Melting
Authors: Michella Alnajjar, Frederic Christien, Krzysztof Wolski, Cedric Bosch
Abstract:
Additive manufacturing (AM) has gained increasing interest in the past few years because it allows 3D parts, often with complex geometry, to be fabricated directly, layer by layer, according to a CAD model. One AM technique is selective laser melting (SLM), which is based on powder bed fusion. In this work, the corrosion resistance of 17-4 PH steel obtained by SLM is investigated. Wrought 17-4 PH steel is a martensitic precipitation-hardenable stainless steel. It is widely used in a variety of applications such as the aerospace, medical, and food industries, due to its high strength and relatively good corrosion resistance. However, the combined findings of X-ray diffraction and electron backscatter diffraction (EBSD) proved that SLM-ed 17-4 PH steel has a fully ferritic microstructure, more specifically δ ferrite. The microstructure consists of coarse ferritic grains elongated along the build direction, with a pronounced solidification crystallographic texture. These results were associated with the high cooling and heating rates experienced throughout the SLM process (10⁵-10⁶ K/s), which suppressed austenite formation and produced a 'by-passing' of this phase during the numerous thermal cycles. Furthermore, EDS measurements revealed a uniform distribution of elements without any dendritic structure. The extremely high cooling kinetics induced diffusionless solidification, resulting in a homogeneous elemental composition. Consequently, the corrosion properties of this steel differ from those of the conventional material. Using electrochemical methods, it was found that SLM-ed 17-4 PH is more resistant to general corrosion than the wrought steel. However, the SLM-ed material exhibits metastable pitting due to its high porosity density. In addition, the hydrogen embrittlement of SLM-ed 17-4 PH steel is investigated, and a correlation between its behavior and the observed microstructure is made.
Keywords: corrosion resistance, 17-4 PH stainless steel, selective laser melting, hydrogen embrittlement
Procedia PDF Downloads 141
1927 Trend Analysis of Annual Total Precipitation Data in Konya
Authors: Naci Büyükkaracığan
Abstract:
Hydroclimatic observations are used in the planning of water resources projects, and climate variables are among the first values considered in such planning. The climate system is a complex and interactive system involving the atmosphere, land surfaces, snow and ice, the oceans, and other bodies of water. The amount and distribution of precipitation, an important climate parameter, is a limiting environmental factor for living things. Trend analysis is applied to detect the presence of a pattern or trend in a data set. Many trend studies in different parts of the world are carried out, usually to detect climate change. The detection and attribution of past trends and variability in climatic variables is essential for explaining potential future alterations resulting from anthropogenic activities. Parametric and non-parametric tests are used for determining trends in climatic variables. In this study, trend tests were applied to annual total precipitation data obtained in the period 1972-2012 in the Konya Basin. Non-parametric trend tests (Sen's T, Spearman's Rho, Mann-Kendall, Sen's T trend, Wald-Wolfowitz) and a parametric test (mean square) were applied to the annual total precipitation of 15 stations for trend analysis. The linear slopes (change per unit time) of the trends are calculated using a non-parametric estimator developed by Sen. The beginning of trends is determined using the Mann-Kendall rank correlation test. In addition, the homogeneity of precipitation trends is tested using a method developed by Van Belle and Hughes. As a result of the tests, negative linear slopes were found in annual total precipitation in Konya.
Keywords: trend analysis, precipitation, hydroclimatology, Konya
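An illustrative implementation of the Mann-Kendall test statistic and Sen's slope estimator (a simplified sketch without tie correction, not the study's code; the precipitation series is made up):

```python
# Mann-Kendall trend test and Sen's slope for an annual precipitation series.
import numpy as np
from scipy.stats import norm

def mann_kendall(x):
    x = np.asarray(x, dtype=float)
    n = len(x)
    s = sum(np.sign(x[j] - x[i]) for i in range(n - 1) for j in range(i + 1, n))
    var_s = n * (n - 1) * (2 * n + 5) / 18.0      # no tie correction in this sketch
    z = (s - np.sign(s)) / np.sqrt(var_s) if s != 0 else 0.0
    p = 2 * (1 - norm.cdf(abs(z)))                # two-sided p-value
    return s, z, p

def sens_slope(x):
    x = np.asarray(x, dtype=float)
    slopes = [(x[j] - x[i]) / (j - i)
              for i in range(len(x) - 1) for j in range(i + 1, len(x))]
    return np.median(slopes)                      # change per year

annual_precip = [350, 410, 290, 330, 300, 280, 320, 260, 310, 250]  # mm, made-up
print(mann_kendall(annual_precip), sens_slope(annual_precip))
```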
Procedia PDF Downloads 220
1926 Oxidosqualene Cyclase: A Novel Inhibitor
Authors: Devadrita Dey Sarkar
Abstract:
Oxidosqualene cyclase (OSC) is a membrane-bound enzyme that helps form the steroid scaffold in higher organisms. In a highly selective cyclization reaction, oxidosqualene cyclase forms lanosterol, with seven chiral centres, starting from the linear substrate 2,3-oxidosqualene. In human cholesterol biosynthesis, OSC represents a target for the discovery of novel anticholesteraemic drugs that could complement the widely used statins. The enzyme oxidosqualene:lanosterol cyclase represents a novel target for the treatment of hypercholesterolemia. OSC catalyzes the cyclization of the linear 2,3-monoepoxysqualene to lanosterol, the initial four-ringed sterol intermediate in the cholesterol biosynthetic pathway. OSC also catalyzes the formation of 24(S),25-epoxycholesterol, a ligand activator of the liver X receptor. Inhibition of OSC reduces cholesterol biosynthesis and selectively enhances 24(S),25-epoxycholesterol synthesis. Through this dual mechanism, OSC inhibition decreases plasma levels of low-density lipoprotein (LDL) cholesterol and prevents cholesterol deposition within macrophages. The recent crystallization of OSC identifies the mechanism of action of this complex enzyme, setting the stage for the design of OSC inhibitors with improved pharmacological properties for cholesterol lowering and the treatment of atherosclerosis. While studying and designing an inhibitor of oxidosqualene cyclase, I worked on PDB entry 1W6K, the most studied structure for this enzyme, and used several methods, techniques, and software tools to identify and validate the top molecules that could act as inhibitors of oxidosqualene cyclase. Thus, by partial blockage of this enzyme, both an inhibition of lanosterol and subsequently of cholesterol formation, as well as a concomitant effect on HMG-CoA reductase, can be achieved. Both effects complement each other and lead to effective control of cholesterol biosynthesis. It is therefore concluded that 2,3-oxidosqualene cyclase plays a crucial role in the regulation of intracellular cholesterol homeostasis. 2,3-Oxidosqualene cyclase inhibitors offer an attractive approach for novel lipid-lowering agents.
Keywords: anticholesteraemic, crystallization, statins, homeostasis
Procedia PDF Downloads 351
1925 An Experimental Approach to the Influence of Tipping Points and Scientific Uncertainties in the Success of International Fisheries Management
Authors: Jules Selles
Abstract:
The Atlantic and Mediterranean bluefin tuna fishery has been considered the archetype of an overfished and mismanaged fishery. This crisis has demonstrated the role of public awareness and the importance of the interactions between science and management regarding scientific uncertainties. This work aims at investigating the policy-making process associated with a regional fisheries management organization. We propose a contextualized, computer-based experimental approach in order to explore the effects of key factors on the cooperation process in a complex straddling-stock management setting. Specifically, we analyze the effects of the introduction of a socio-economic tipping point and of the uncertainty surrounding the estimation of the resource level. Our approach is based on a Gordon-Schaefer bio-economic model which explicitly represents the decision-making process. Each participant plays the role of an ICCAT stakeholder, represents a coalition of fishing nations involved in the fishery, and unilaterally decides a harvest policy for the coming year. The context of the experiment induces the incentives for exploitation and for collaboration to achieve common sustainable harvest plans at the scale of the Atlantic bluefin tuna stock. Our rigorous framework allows testing how stakeholders who plan the exploitation of a fish stock (a common-pool resource) respond to two kinds of effects: i) the inclusion of a drastic shift in the management constraints (beyond a socio-economic tipping point) and ii) an increasing uncertainty in the scientific estimation of the resource level.
Keywords: economic experiment, fisheries management, game theory, policy making, Atlantic Bluefin tuna
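A compact sketch of the Gordon-Schaefer dynamics such an experiment can be built around (the parameter values, and the assumption that fishing cost is proportional to effort with a Schaefer catch equation, are illustrative choices, not the calibration used for the bluefin tuna stock):

```python
# Gordon-Schaefer sketch: logistic (Schaefer) stock growth, catch H = q*E*B,
# so the effort needed for a planned harvest H at biomass B is E = H / (q*B).
def gordon_schaefer(B0, harvests, r=0.3, K=1.0e6, q=1e-5, price=10.0, cost_per_effort=50.0):
    """B0: initial biomass (t); harvests: planned catch per year (t)."""
    B, biomass, profits = B0, [], []
    for H in harvests:
        H = min(H, B)                                  # cannot catch more than the stock
        effort = H / (q * B) if B > 0 else 0.0
        profits.append(price * H - cost_per_effort * effort)
        B = max(B + r * B * (1.0 - B / K) - H, 0.0)    # logistic growth minus harvest
        biomass.append(B)
    return biomass, profits

biomass, profits = gordon_schaefer(B0=6.0e5, harvests=[3.0e4] * 10)
print(round(biomass[-1]), round(sum(profits)))
```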
Procedia PDF Downloads 257
1924 Satellite-Based Drought Monitoring in Korea: Methodologies and Merits
Authors: Joo-Heon Lee, Seo-Yeon Park, Chanyang Sur, Ho-Won Jang
Abstract:
Satellite-based remote sensing techniques have been widely used in drought and environmental monitoring to overcome the weaknesses of in-situ monitoring. Remote sensing offers many advantages for drought watch in terms of data accessibility, monitoring resolution, and the types of hydro-meteorological and environmental data available. This study focuses on the applicability of satellite-based drought monitoring, applied to historical drought events that had a huge impact in terms of meteorological, agricultural, and hydrological drought. The following satellite-based drought indices were evaluated to assess their capability to analyze the complex topography of the Korean peninsula: the Standardized Precipitation Index (SPI) using Tropical Rainfall Measuring Mission (TRMM) and Global Precipitation Measurement (GPM) data; the Vegetation Health Index (VHI) using MODIS-based Land Surface Temperature (LST) and the Normalized Difference Vegetation Index (NDVI); and the Scaled Drought Condition Index (SDCI). While the VHI accurately captured moderate drought conditions in agricultural drought-damaged areas, the SDCI performed relatively well in hydrological drought-damaged areas. In addition, this study found correlations among the various drought indices and assessed their applicability using the Receiver Operating Characteristic (ROC) method, which will expand our understanding of the relationships between hydro-meteorological variables and drought events at the global scale. The results of this research are expected to assist decision-makers in taking timely and appropriate action in order to save millions of lives in drought-damaged areas.
Keywords: drought monitoring, moderate resolution imaging spectroradiometer (MODIS), remote sensing, receiver operating characteristic (ROC)
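A small sketch of how the VHI is assembled from NDVI- and LST-based condition indices (the 0.5/0.5 weighting and the toy arrays are assumptions for illustration, not the study's processing chain):

```python
# Vegetation Health Index from MODIS-style NDVI and LST composites.
import numpy as np

def vegetation_health_index(ndvi, lst, ndvi_min, ndvi_max, lst_min, lst_max, alpha=0.5):
    """ndvi/lst: current composites; *_min/*_max: multi-year extremes per pixel."""
    vci = 100.0 * (ndvi - ndvi_min) / (ndvi_max - ndvi_min)   # Vegetation Condition Index
    tci = 100.0 * (lst_max - lst) / (lst_max - lst_min)       # Temperature Condition Index
    return alpha * vci + (1.0 - alpha) * tci                  # low VHI -> drought stress

# toy 2x2 example
ndvi = np.array([[0.3, 0.5], [0.6, 0.2]])
lst = np.array([[310.0, 305.0], [300.0, 315.0]])
print(vegetation_health_index(ndvi, lst, 0.1, 0.8, 290.0, 320.0))
```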
Procedia PDF Downloads 329
1923 Questioning the Relationship Between Young People and Fake News Through Their Use of Social Media
Authors: Marion Billard
Abstract:
This paper focuses on the question of the real relationship between young people and fake news. Fake news is one of today's main issues in the world of information and communication, and social media and its democratization have helped to spread false information. According to traditional beliefs, young people are more inclined to believe what they read on social media; the individuals concerned, however, think that they are better able to distinguish between real and fake news, a confidence they attribute to their use of the internet and social media from an early age. During the 2016 American and 2017 French presidential campaigns, the term fake news was on everyone's lips and became a real issue in the field of information. While young people informed themselves through newspapers or television until the beginning of the '90s, Gen Z (people born between 1997 and 2010) has always been immersed in this world of fast communication. They know how to use social media from a young age, and the internet holds no secrets for them. Today, despite sporadic use of traditional media, young people tend to turn to their smartphones and social networks such as Instagram or Twitter to stay abreast of the latest news. The growth of social media information has led to an 'ambient journalism', giving access to an endless quantity of information. As a result, impressionable people are not able to distinguish between real media and 'junk news' or fake news. This massive use of social media is probably explained by the inability of young people to find connections between the communication of traditional media and what they are experiencing. The question arises whether this over-confidence of young people in their ability to distinguish between accurate and fake news does not make it more difficult for them to examine information critically. Their relationship with media and fake news is more complex than popular opinion suggests: today's young people are neither masters of the quest for information nor inherently the most impressionable public on social media.
Keywords: fake news, youngsters, social media, information, generation
Procedia PDF Downloads 163
1922 Effect of Aging Time and Mass Concentration on the Rheological Behavior of Vase of Dam
Authors: Hammadi Larbi
Abstract:
Water erosion, the main cause of dam siltation, is a natural phenomenon governed by physical factors such as rainfall aggressiveness, climate change, topography, lithology, and vegetation cover. Currently, the vase (fine sediment) of certain dams is released downstream of the dikes during desilting by hydraulic means. Vases are characterized by complex rheological behaviors: shear thinning (rheofluidification), yield stress, plasticity, and thixotropy. In this work, we studied the effect of the aging time of the vase in the dam and of its mass concentration on the flow behavior of a vase from the Fergoug dam, located in the Mascara region. In order to test the reproducibility of the results, two replicates were performed for most of the experiments. The flow behavior of the vase, studied as a function of storage time and mass concentration, is analyzed with the Herschel-Bulkley model. The increase in the aging time of the vase in the dam causes an increase in the yield stress and the consistency index of the vase. This phenomenon can be explained by the adsorption of water by the vase and the increase in volume by swelling, which modify the rheological parameters of the vase. The increase in the mass concentration of the vase leads to an increase in the yield stress and the consistency index as a function of the concentration. This behavior could be explained by interactions between the granules of the vase suspension. On the other hand, the increase in the aging time and the mass concentration of the vase in the dam causes a reduction in the flow index of the vase. The study also showed an exponential decrease in apparent viscosity with increasing aging time of the vase in the dam. If a vase is allowed to age long enough for the yield stress to approach infinity, its apparent viscosity also tends towards infinity; this can, for example, subsequently pose problems when dredging dams. For good dam management, it can therefore be deduced that the time before dredging should be reduced as much as possible.
Keywords: vase of dam, aging time, rheological behavior, yield stress, apparent viscosity, thixotropy
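For reference, the Herschel-Bulkley model referred to above relates shear stress to shear rate as tau = tau0 + K * gamma_dot**n. A small sketch of fitting it to a measured flow curve (the data points and initial guesses below are made up, not the Fergoug measurements):

```python
# Fit the Herschel-Bulkley model tau = tau0 + K * gamma_dot**n to a flow curve.
import numpy as np
from scipy.optimize import curve_fit

def herschel_bulkley(gamma_dot, tau0, K, n):
    return tau0 + K * gamma_dot ** n

shear_rate = np.array([1, 5, 10, 50, 100, 300], dtype=float)     # 1/s
shear_stress = np.array([12.5, 15.8, 18.1, 27.9, 34.6, 52.0])    # Pa, illustrative

params, _ = curve_fit(herschel_bulkley, shear_rate, shear_stress, p0=[10.0, 1.0, 0.5])
tau0, K, n = params
print(f"yield stress = {tau0:.1f} Pa, consistency K = {K:.2f} Pa.s^n, flow index n = {n:.2f}")
```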
Procedia PDF Downloads 31
1921 Unraveling the Complexity of Postpartum Distress: Examining the Influence of Alexithymia, Social Support, Partners' Support, and Birth Satisfaction on Postpartum Distress among Bulgarian Mothers
Authors: Stela Doncheva
Abstract:
Postpartum distress, encompassing depressive symptoms, obsessions, and anxiety, remains a subject of significant scientific interest due to its prevalence among individuals giving birth. This critical and transformative period presents a multitude of factors that impact women's health. On the one hand, variables such as social support, satisfaction in romantic relationships, shared newborn care, and birth satisfaction directly affect the mental well-being of new mothers. On the other hand, the interplay of hormonal changes, personality characteristics, emotional difficulties, and the profound life adjustments experienced by mothers can profoundly influence their self-esteem and overall physical and emotional well-being. This paper extensively explores the factors of alexithymia, social support, partners' support, and birth satisfaction to gain deeper insights into their impact on postpartum distress. Utilizing a qualitative survey consisting of six self-reflective questionnaires, this study collects valuable data regarding the individual postpartum experiences of Bulgarian mothers. The primary objective is to enrich our understanding of the complex factors involved in the development of postpartum distress during this crucial period. The results shed light on the intricate nature of the problem and highlight the significant influence of bio-psycho-social elements. By contributing to the existing knowledge in the field, this research provides valuable implications for the development of interventions and support systems tailored to the unique needs of mothers in the postpartum period. Ultimately, this study aims to improve the overall well-being of new mothers and promote optimal maternal health during the postpartum journey.Keywords: maternal mental health, postpartum distress, postpartum depression, postnatal mothers
Procedia PDF Downloads 68
1920 Practicum in Preschool Teacher Education: The Role of Pedagogical Supervision for Students Professional Development
Authors: Dalila Lino
Abstract:
Practicum is a central dimension of teacher education programs. Learning how to teach is, in effect, a complex process that integrates periods of observation, experimentation, reflection, planning, and evaluation in a real context of practice, providing opportunities for prospective teachers to understand the various dimensions of education and to implement the knowledge built over the theoretical courses they have taken. In the pre-service training of early childhood teachers, specialized guidance, and in particular pedagogical supervision, assumes a key role in the professional development of students in training. The main goal of this study is to describe and analyze the supervision process that occurs during the practicum of preschool education master programs in Portugal. The objectives of the study are: (i) to describe the cooperative process of professional development experienced by student teachers during the practicum; (ii) to identify the strengths and weaknesses of the supervision process; (iii) to identify the supervision styles used by university supervisors and cooperating teachers. The study follows a mixed-methods research design, and data were collected through semi-structured interviews and online questionnaires. The participants are newly graduated Portuguese early childhood teachers, university supervisors, and cooperating teachers. The results reveal gaps in the specialized training of cooperating teachers and university supervisors and a large number of trainees per supervisor, which makes it difficult to support students, and show that interpersonal relationships between university supervisors and students and/or between cooperating teachers and students interfere with the development of the supervisory processes. The study highlights the need to invest in the specialized training of university supervisors and cooperating teachers to create better opportunities to support the professional development of prospective teachers.Keywords: mentoring, pedagogical supervision, practicum, preschool teacher education
Procedia PDF Downloads 151
1919 On the Solution of Boundary Value Problems Blended with Hybrid Block Methods
Authors: Kizito Ugochukwu Nwajeri
Abstract:
This paper explores the application of hybrid block methods for solving boundary value problems (BVPs), which are prevalent in various fields such as science, engineering, and applied mathematics. Traditional numerical approaches, such as finite difference and shooting methods, often encounter challenges related to stability and convergence, particularly in the context of complex and nonlinear BVPs. To address these challenges, we propose a hybrid block method that integrates features from both single-step and multi-step techniques. This method allows for the simultaneous computation of multiple solution points while maintaining high accuracy. Specifically, we employ a combination of polynomial interpolation and collocation strategies to derive a system of equations that captures the behavior of the solution across the entire domain. By directly incorporating boundary conditions into the formulation, we enhance the stability and convergence properties of the numerical solution. Furthermore, we introduce an adaptive step-size mechanism to optimize performance based on the local behavior of the solution. This adjustment allows the method to respond effectively to variations in solution behavior, improving both accuracy and computational efficiency. Numerical tests on a variety of boundary value problems demonstrate the effectiveness of the hybrid block methods. These tests showcase significant improvements in accuracy and computational efficiency compared to conventional methods, indicating that our approach is robust and versatile. The results suggest that this hybrid block method is suitable for a wide range of applications in real-world problems, offering a promising alternative to existing numerical techniques.Keywords: hybrid block methods, boundary value problem, polynomial interpolation, adaptive step-size control, collocation methods
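The authors' hybrid block scheme is not reproduced here, but the short sketch below illustrates the two building blocks the abstract describes, polynomial interpolation/collocation and direct incorporation of the boundary conditions into the resulting linear system, on an assumed linear model problem (y'' + y = 0 with Dirichlet data); every numerical choice in it is an illustrative assumption rather than the method of the paper.

```python
import numpy as np

# Illustrative model problem (an assumption, not taken from the paper):
# y'' + y = 0 on [0, pi/2], with y(0) = 0 and y(pi/2) = 1, exact solution y(x) = sin(x).
a, b = 0.0, np.pi / 2
N = 8  # degree of the collocating polynomial y(x) = sum_j c_j * x**j

nodes = np.linspace(a, b, N + 1)   # two endpoints plus N - 1 interior collocation points
A = np.zeros((N + 1, N + 1))
rhs = np.zeros(N + 1)

# Boundary conditions are written directly into the system.
A[0] = [a**j for j in range(N + 1)];  rhs[0] = 0.0    # y(a) = 0
A[-1] = [b**j for j in range(N + 1)]; rhs[-1] = 1.0   # y(b) = 1

# Collocate the ODE y''(x_k) + y(x_k) = 0 at the interior nodes.
for row, xk in enumerate(nodes[1:-1], start=1):
    for j in range(N + 1):
        d2 = j * (j - 1) * xk**(j - 2) if j >= 2 else 0.0
        A[row, j] = d2 + xk**j

c = np.linalg.solve(A, rhs)  # polynomial coefficients

# Check the collocation solution against the exact answer at a few points.
x_test = np.linspace(a, b, 5)
y_approx = sum(c[j] * x_test**j for j in range(N + 1))
print("max error:", np.max(np.abs(y_approx - np.sin(x_test))))
```

A hybrid block method in the sense of the abstract would apply this interpolation-plus-collocation idea block by block, producing several solution points per block simultaneously and adapting the block (step) size to the local behavior of the solution.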
Procedia PDF Downloads 37