Search results for: data-driven models
466 Exploring Professional Development Needs of Mathematics Teachers through Their Reflective Practitioner Experiences
Authors: Sevket Ceyhun Cetin, Mehmet Oren
Abstract:
According to existing educational research, students learn better with high teacher quality. Professional development (PD) has therefore become a crucial way of increasing the quality of both novice and veteran in-service teachers by providing support regarding content and pedagogy. To answer what makes PD effective, researchers have studied different PD models and revealed critical elements that need to be considered, such as the duration of a PD and the manner of delivery (e.g., lecture vs. engaging). It has also been pointed out that one-size-fits-all PDs will most likely be ineffective in addressing teachers' needs for improving instructional quality. Instead, teachers' voices need to be heard, and the foci of PDs should be determined based on their specific needs. Thus, this study was conducted to identify the professional development needs of middle school mathematics teachers based on their self-evaluation of their performance in light of teaching standards. The study also aimed to explore whether these PD needs differ with respect to years of teaching experience (novice vs. veteran). The teachers had participated in a federally funded research grant aimed at improving the competencies of grade 6-9 mathematics teachers in pedagogy and content areas. In the research project, the participants had consistently videoed their lessons throughout a school year and reflected on their performance using the Teacher Advancement Program (TAP™) rubric, which is based on best practices of teaching. In particular, they scored their performance in the following areas and provided evidence as justification for their scores: Standards and Objectives, Presenting Instructional Content, Lesson Structure and Pacing, Activities and Materials, Academic Feedback, Grouping Students, and Questioning. The rating scale of the rubric is 1 through 5 (i.e., 1 = Unsatisfactory [performance], 3 = Proficient, and 5 = Exemplary). 
For each area mentioned above, the numerical scores of 77 written reports (for 77 videoed lessons) by 24 teachers (n = 12 novices and n = 12 veterans) were averaged. Overall, the average score of each area was below 3 (ranging between 2.43 and 2.86); in other words, teachers judged their performance as below the proficient level across all seven areas. In the second step of the data analysis, the three lowest-scoring areas for novice and veteran teachers were selected for further qualitative analysis. According to the preliminary results, the three lowest areas for the novice teachers were Questioning, Grouping Students, and Academic Feedback. Grouping Students was also one of the lowest areas for the veteran teachers, but the other two areas for this group were Lesson Structure & Pacing and Standards & Objectives. Identifying in-service teachers' needs based on their reflective practitioner experiences provides educators with crucial information that can be used to create more effective PD that improves teacher quality.
Keywords: mathematics teacher, professional development, self-reflection, video data
Procedia PDF Downloads 365
465 Life Cycle Assessment of Today's and Future Electricity Grid Mixes of EU27
Authors: Johannes Gantner, Michael Held, Rafael Horn, Matthias Fischer
Abstract:
At the United Nations Climate Change Conference 2015, a global agreement on combating climate change was achieved, stating CO₂ reduction targets for all countries. For instance, the EU targets a reduction of 40 percent in emissions by 2030 compared to 1990. In order to achieve this ambitious goal, the environmental performance of the different European electricity grid mixes is crucial. First, electricity is directly needed in everyone's daily life (e.g., heating, plug loads, mobility), so reducing the environmental impacts of the electricity grid mix reduces the overall environmental impacts of a country. Secondly, the manufacturing of every product depends on electricity, so a reduction of the environmental impacts of the electricity mix results in a further decrease in the environmental impacts of every product. As a result, the implementation of the two-degree goal depends strongly on the decarbonization of the European electricity mixes. Currently, the production of electricity in the EU27 is based largely on fossil fuels and therefore carries a high GWP impact per kWh. Due to the importance of the environmental impacts of the electricity mix, both today and in the future, time-dynamic Life Cycle Assessment models for all EU27 countries were set up within the European research projects CommONEnergy and Senskin. Methodologically, scenario modeling was combined with life cycle assessment according to ISO 14040 and ISO 14044. Based on EU27 trends regarding energy, transport, and buildings, the different national electricity mixes were investigated, taking into account future changes such as the amount of electricity generated in each country, changes in electricity carriers, the COP of the power plants, distribution losses, and imports and exports. As results, time-dynamic environmental profiles for the electricity mixes of each country and for Europe overall were set up. 
For each European country, the decarbonization strategy of the electricity mix is critically examined in order to identify decisions that can lead to negative environmental effects, for instance on the global warming potential of the electricity mix. For example, the withdrawal of the nuclear energy program in Germany, with the missing energy compensated at the same time by non-renewable energy carriers such as lignite and natural gas, results in an increase in the global warming potential of the electricity grid mix. Only after two years is this increase counterbalanced by the higher share of renewable energy carriers such as wind power and photovoltaics. Finally, as an outlook, a first qualitative picture is provided, illustrating from an environmental perspective which countries have the highest potential for low-carbon electricity production and therefore how investments in an interconnected European electricity grid could decrease the environmental impacts of the electricity mix in Europe.
Keywords: electricity grid mixes, EU27 countries, environmental impacts, future trends, life cycle assessment, scenario analysis
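At its core, the environmental profile of a grid mix in a given year is a generation-weighted sum of carrier-specific impact factors. A minimal sketch of this calculation, using hypothetical carrier shares and emission factors (not values from the study):

```python
# Sketch: GWP of an electricity grid mix as a generation-weighted sum of
# carrier-specific emission factors. All shares and factors below are
# illustrative placeholders, not results from the study.

def grid_mix_gwp(shares, factors):
    """Return kg CO2-eq per kWh for a mix, given carrier shares (summing to 1)
    and emission factors in kg CO2-eq per kWh."""
    if abs(sum(shares.values()) - 1.0) > 1e-9:
        raise ValueError("carrier shares must sum to 1")
    return sum(share * factors[carrier] for carrier, share in shares.items())

# Illustrative emission factors (kg CO2-eq/kWh)
factors = {"lignite": 1.1, "natural_gas": 0.49, "nuclear": 0.012,
           "wind": 0.011, "pv": 0.045}

mix_2015 = {"lignite": 0.25, "natural_gas": 0.15, "nuclear": 0.30,
            "wind": 0.20, "pv": 0.10}
# Scenario: nuclear phased out, partly replaced by lignite and gas
mix_phaseout = {"lignite": 0.35, "natural_gas": 0.25, "nuclear": 0.0,
                "wind": 0.28, "pv": 0.12}

print(grid_mix_gwp(mix_2015, factors))      # lower GWP per kWh
print(grid_mix_gwp(mix_phaseout, factors))  # higher GWP despite more renewables
```

A time-dynamic profile then amounts to evaluating this sum for each year's projected shares, which is how a nuclear phase-out can temporarily raise the mix's GWP even as the renewable share grows.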
Procedia PDF Downloads 185
464 Characterization of New Sources of Maize (Zea mays L.) Resistance to Sitophilus zeamais (Coleoptera: Curculionidae) Infestation in Stored Maize
Authors: L. C. Nwosu, C. O. Adedire, M. O. Ashamo, E. O. Ogunwolu
Abstract:
The maize weevil, Sitophilus zeamais Motschulsky, is a notorious pest of stored maize (Zea mays L.). The development of weevil-resistant maize varieties is a major breeding objective. This study investigated the parameters and mechanisms that confer resistance to S. zeamais infestation on a maize variety, using twenty elite maize varieties. Detailed morphological, physical, and chemical studies were conducted on the whole maize grain and the grain pericarp. Resistance was assessed at 33, 56, and 90 days post-infestation using weevil mortality rate, weevil survival rate, percent grain damage, percent grain weight loss, weight of grain powder, oviposition rate, and index of susceptibility, rated on a scale developed in the present study and on Dobie's modified scale. Linear regression models that can predict maize grain damage as a function of storage duration were developed and applied. The resistant varieties identified, particularly 2000 SYNEE-WSTR and TZBRELD3C5, which showed a very high degree of resistance, should be used singly or, preferably, within an integrated pest management system for the control of S. zeamais infestation in stored maize. Although increases in the physical grain properties of hardness, weight, length, and width increased varietal resistance, the bases of resistance were found to be increased chemical attributes of phenolic acid, trypsin inhibitor, and crude fibre, while the bases of susceptibility were increased protein, starch, magnesium, calcium, sodium, phosphorus, manganese, iron, cobalt, and zinc; the role of potassium requires further investigation. The characters that conferred resistance on the test varieties were found to be distributed in the pericarp and the endosperm of the grains. On further assessment, increases in grain phenolic acid, crude fibre, and trypsin inhibitor adversely and significantly affected the bionomics of the weevil. The flat side of a maize grain was significantly preferred by the weevil as the point of penetration. 
Why the south area of the flattened side of a maize grain was significantly preferred by the weevil remains unknown, although grain face type appeared to be a contributing factor in this study. The preference shown for the south area of the grain's flat side has implications for seed viability. The study identified antibiosis, preference, antixenosis, and host evasion as the mechanisms of post-harvest maize resistance to Sitophilus zeamais infestation.
Keywords: maize weevil, resistance, parameters, mechanisms, preference
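The index of susceptibility referred to above is commonly computed with Dobie's formula, S = 100 · ln(F₁) / D, where F₁ is the number of emerging adult progeny and D is the median development period in days; assuming this is the variant used, a minimal sketch with made-up counts (not data from this study):

```python
import math

def dobie_index(f1_adults, median_dev_days):
    """Dobie's index of susceptibility: 100 * ln(F1) / D.
    Higher values indicate a more susceptible variety."""
    if f1_adults < 1:
        return 0.0  # no progeny emerged: fully resistant by this index
    return 100.0 * math.log(f1_adults) / median_dev_days

# Illustrative counts, not from the study
print(dobie_index(60, 35))  # a susceptible variety: many progeny, fast development
print(dobie_index(5, 42))   # a resistant variety: few progeny, slow development
```

Varieties are then ranked (or binned on a scale such as the one developed in the study) by this index, with lower values indicating stronger resistance.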
Procedia PDF Downloads 306
463 Homeless Population Modeling and Trend Prediction Through Identifying Key Factors and Machine Learning
Authors: Shayla He
Abstract:
Background and Purpose: According to Chamie (2017), it is estimated that no less than 150 million people, or about 2 percent of the world's population, are homeless. The homeless population in the United States has grown rapidly in the past four decades. In New York City, the sheltered homeless population increased from 12,830 in 1983 to 62,679 in 2020. Knowing the trend of the homeless population is crucial to helping states and cities make affordable housing plans and other community service plans ahead of time, to better prepare for the situation. This study utilized data from New York City, examined the key factors associated with homelessness, and developed systematic modeling to predict future homeless populations. Using the best model developed, named HP-RNN, an analysis of the homeless population change during the months of 2020 and 2021, which were impacted by the COVID-19 pandemic, was conducted. Moreover, HP-RNN was tested on data from Seattle. Methods: The methodology involves four phases in developing robust prediction methods. Phase 1 gathered and analyzed raw data on homeless populations and demographic conditions from five urban centers. Phase 2 identified the key factors that contribute to the rate of homelessness. In Phase 3, three models were built using Linear Regression, Random Forest, and a Recurrent Neural Network (RNN), respectively, to predict the future trend of the homeless population. Each model was trained and tuned on the dataset from New York City, with accuracy measured by Mean Squared Error (MSE). In Phase 4, the final phase, the best model from Phase 3 was evaluated using the data from Seattle, which was not part of the model training and tuning process in Phase 3. Results: Compared to the Linear Regression based model used by HUD et al. (2019), HP-RNN significantly improved the prediction metrics, raising the Coefficient of Determination (R²) from -11.73 to 0.88 and reducing MSE by 99%. 
HP-RNN was then validated on the data from Seattle, WA, which showed a peak percentage error of 14.5% between the actual and the predicted counts. Finally, the modeling results were used to predict the trend during the COVID-19 pandemic. They show a good correlation between the actual and the predicted homeless population, with a peak percentage error of less than 8.6%. Conclusions and Implications: This is the first work to apply an RNN to model time series of homeless-related data. The model shows a close correlation between the actual and the predicted homeless population. There are two major implications of this result. First, the model can be used to predict the homeless population for the next several years, and the prediction can help states and cities plan ahead on affordable housing allocation and other community services to better prepare for the future. Moreover, this prediction can serve as a reference for policy makers and legislators as they seek to make changes that may impact the factors closely associated with the future homeless population trend.
Keywords: homeless, prediction, model, RNN
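As a minimal illustration of the Phase 3 workflow, the sketch below fits the simplest of the three model families, a linear trend, to a hypothetical monthly shelter-count series and scores it with MSE; the counts and function names are illustrative, not the study's data or code:

```python
# Sketch: fit a least-squares linear trend to a short monthly count series and
# score it with Mean Squared Error (MSE), the metric used to tune the models.
# The counts below are hypothetical, not data from the study.

def fit_linear_trend(y):
    """Ordinary least squares fit of y = a + b*t for t = 0..n-1 (closed form)."""
    n = len(y)
    t_mean = (n - 1) / 2
    y_mean = sum(y) / n
    b = sum((t - t_mean) * (yt - y_mean) for t, yt in enumerate(y)) \
        / sum((t - t_mean) ** 2 for t in range(n))
    a = y_mean - b * t_mean
    return a, b

def mse(actual, predicted):
    """Mean Squared Error between two equal-length series."""
    return sum((a - p) ** 2 for a, p in zip(actual, predicted)) / len(actual)

counts = [60000, 60400, 61100, 61500, 62100, 62700]  # hypothetical shelter counts
a, b = fit_linear_trend(counts)
preds = [a + b * t for t in range(len(counts))]
print(round(mse(counts, preds), 1))
```

The RNN variant replaces the closed-form trend with a learned sequence model, but is trained and compared against exactly this kind of MSE score on held-out data.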
Procedia PDF Downloads 119
462 A Comparative Semantic Network Study between Chinese and Western Festivals
Authors: Jianwei Qian, Rob Law
Abstract:
With the expansion of globalization and intensifying market competition, festivals, especially traditional ones, have demonstrated their vitality in the new context. As a new tourist attraction, festivals play a critically important role in promoting the tourism economy, because a well-organized festival can engage more tourists, generate more revenue, and attract wider media attention. However, at the current stage in China, traditional festivals as a way of disseminating national culture are facing the challenge of foreign festivals and their related culture. Unlike special events created solely for economic development, traditional festivals have their own culture and connotations. Therefore, it is necessary to conduct a study not only on protecting the tradition, but on promoting its development as well. This study compares the development of China's Valentine's Day and Western Valentine's Day in the Chinese context and centers on newspaper reports in China from 2000 to 2016. Based on the literature, two main research foci can be established: one concerns a festival's impact, and the other concerns tourists' motivation to engage in a festival. Newspaper reports serve as the research discourse and can help cover these two focal points. With the assistance of content mining techniques, semantic networks for both Days were constructed separately to depict the status quo of these two festivals in China. Based on the networks, two models were established to show the key component system of traditional festivals, in the hope of strengthening the positive role festival tourism plays in the promotion of economy and culture. According to the semantic networks, newspaper reports on the two festivals have both similarities and differences. The difference is mainly reflected in cultural connotation, because Westerners and Chinese may show their love in different ways. 
Nevertheless, they share more commonalities in terms of economy, tourism, and society. They also have a similar living environment and stakeholders. Thus, they can be promoted together to revitalize some traditions in China. Three strategies are proposed to realize this aim. First, localize international festivals to suit the Chinese context so that they function better. Second, facilitate the internationalization of traditional Chinese festivals to gain more recognition worldwide. Finally, allow traditional festivals to compete with foreign ones so that each can learn from the other and inform the development of other festivals. If all of these can be realized, not only can traditional Chinese festivals attain a more promising future, but so can foreign ones. Accordingly, the paper contributes to the theoretical construction of festival images through the presentation of the semantic networks. Meanwhile, the identified features and issues of festivals from two different cultures can inform the organization and marketing of festivals as a vital tourism activity. In the long run, the study can strengthen the festival as a key attraction supporting the sustainable development of both the economy and society.
Keywords: Chinese context, comparative study, festival tourism, semantic network analysis, Valentine's Day
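A semantic network of the kind described can be built as a word co-occurrence graph extracted from the newspaper corpus; the toy headlines below are illustrative stand-ins for the study's data, and the function name is ours:

```python
from collections import Counter
from itertools import combinations

def cooccurrence_network(documents):
    """Build an undirected word co-occurrence network: nodes are words, and
    edge weights count how many documents contain both words of a pair."""
    edges = Counter()
    for doc in documents:
        words = sorted(set(doc.lower().split()))  # unique words, stable pair order
        for pair in combinations(words, 2):
            edges[pair] += 1
    return edges

# Toy headlines standing in for the 2000-2016 newspaper corpus
docs = [
    "valentine festival boosts tourism economy",
    "valentine roses economy culture",
    "festival tourism culture tradition",
]
net = cooccurrence_network(docs)
print(net[("economy", "valentine")])  # the pair co-occurs in two headlines
```

Heavily weighted edges (e.g., a festival word tied to "economy" or "culture") are what surface as the key components when the two festivals' networks are compared.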
Procedia PDF Downloads 230
461 A Tutorial on Model Predictive Control for Spacecraft Maneuvering Problem with Theory, Experimentation and Applications
Authors: O. B. Iskender, K. V. Ling, V. Dubanchet, L. Simonini
Abstract:
This paper discusses the recent advances and future prospects of spacecraft position and attitude control using Model Predictive Control (MPC). First, the challenges of space missions are summarized, in particular the errors, uncertainties, and constraints imposed by the mission, the spacecraft, and onboard processing capabilities. Space mission errors and uncertainties are summarized in the following categories: initial condition errors, unmodeled disturbances, and sensor and actuator errors. Constraints are classified into two categories: physical and geometric. Real-time implementation capability is then discussed with regard to the required computation time and the impact of sensor and actuator errors, based on Hardware-In-The-Loop (HIL) experiments. The rationale behind the scenarios is also presented in the scope of space applications such as formation flying, attitude control, rendezvous and docking, rover steering, and precision landing. The objectives of these missions are explained, and the generic constrained MPC problem formulations are summarized. Three key design elements used in MPC design are discussed: the prediction model, the constraint formulation, and the objective cost function. The prediction models can be linear time-invariant or time-varying depending on the geometry of the orbit, whether circular or elliptic. The constraints can be given as linear inequalities on inputs or outputs, which can be written in the same form. Moreover, recent convexification techniques for non-convex geometric constraints (e.g., plume impingement, Field-of-View (FOV)) are presented in detail. Next, different objectives are provided in a mathematical framework and explained accordingly. Thirdly, because MPC implementation relies on finding, in real time, the solution to constrained optimization problems, computational aspects are also examined. 
In particular, high-speed implementation capabilities and HIL challenges are presented for representative space avionics. This covers an analysis of future space processors as well as the requirements on sensors and actuators derived from the HIL experiment outputs. The HIL tests are investigated for kinematic and dynamic experiments, where robotic arms and floating robots are used, respectively. Finally, the proposed algorithms and experimental setups are introduced and compared with the authors' previous work and future plans. The paper concludes with the conjecture that the MPC paradigm is a promising framework at the crossroads of space applications and could be further advanced based on the challenges mentioned throughout the paper and the remaining gaps.
Keywords: convex optimization, model predictive control, rendezvous and docking, spacecraft autonomy
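The generic constrained MPC problem summarized above can be written, for a horizon of N steps, in the standard form:

```latex
\begin{aligned}
\min_{u_0,\dots,u_{N-1}} \quad
  & \sum_{k=0}^{N-1} \left( x_k^\top Q x_k + u_k^\top R u_k \right) + x_N^\top P x_N \\
\text{s.t.} \quad
  & x_{k+1} = A_k x_k + B_k u_k, \qquad k = 0,\dots,N-1, \\
  & C_u u_k \le d_u, \qquad C_x x_k \le d_x, \\
  & x_0 = x(t),
\end{aligned}
```

where the prediction model (A_k, B_k) is time-invariant for a circular orbit and time-varying for an elliptic one, Q, R, and P weight the objective cost, and the linear inequalities collect the input and output constraints in the same form; convexified versions of the geometric constraints (plume impingement, FOV) enter as additional inequalities of this type.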
Procedia PDF Downloads 110
460 Exploring Instructional Designs on the Socio-Scientific Issues-Based Learning Method in Respect to STEM Education for Measuring Reasonable Ethics on Electromagnetic Wave through Science Attitudes toward Physics
Authors: Adisorn Banhan, Toansakul Santiboon, Prasong Saihong
Abstract:
The Socio-Scientific Issues-Based Learning (SSIBL) method was compared with blended STEM-education instruction using a sample of 84 students in two classes at the 11th grade level of Sarakham Pittayakhom School. The two instructional models were delivered through five lesson plans in the context of electromagnetic waves. The procedures were designed with two groups: a 40-student experimental group taught with STEM education (STEMe) and a 40-student control group taught with the SSIBL method. Associations between students' learning achievements under each instructional method and their science attitudes, as predicted from their exploration activities in physics, were compared across the STEMe and SSIBL methods. The Measuring Reasonable Ethics Test (MRET) was used to assess students' reasonable ethics under the STEMe and SSIBL instructional designs in each group. A pretest-posttest technique was used to monitor and evaluate students' performance in reasoning ethically about the electromagnetic wave issue in the STEMe and SSIBL classes. Students were observed and gained experience with the phenomena being studied under the SSIBL model. This supports the view that STEM is not just teaching about Science, Technology, Engineering, and Mathematics; it is a culture that needs to be cultivated to help create a problem-solving, creative, critical-thinking workforce for tomorrow in physics. Students' attitudes were assessed with the Test Of Physics-Related Attitudes (TOPRA), modified from the original Test Of Science-Related Attitudes (TOSRA). Students' learning achievements under the two instructional methods, STEMe and SSIBL, were then compared. 
Associations between students' performance under the STEMe and SSIBL instructional designs, their reasonable ethics, and their science attitudes toward physics were examined. The findings show that the efficiency of the SSIBL and STEMe innovations met the 80/80 standard criterion based on IOC values. Students' learning achievements differed significantly between the control and experimental groups under SSIBL and STEMe at the .05 level. Comparing students' reasonable ethics under SSIBL and STEMe, students' responses to the instructional activities were higher under STEMe than under SSIBL. For associations between students' later learning achievements, the predictive efficiency values of R² indicate that 67% and 75% of the variance for SSIBL, and 74% and 81% for STEMe, were attributable to the development of reasonable ethics and science attitudes toward physics, respectively.
Keywords: socio-scientific issues-based learning method, STEM education, science attitudes, measurement, reasonable ethics, physics classes
Procedia PDF Downloads 291
459 Experimental and Numerical Investigation of Fracture Behavior of Foamed Concrete Based on Three-Point Bending Test of Beams with Initial Notch
Authors: M. Kozłowski, M. Kadela
Abstract:
Foamed concrete is known for its low self-weight and excellent thermal and acoustic properties. For many years, it has been used worldwide for insulation of foundations and roof tiles, as backfill to retaining walls, for sound insulation, etc. In recent years, however, it has also become a promising material for structural purposes, e.g., for the stabilization of weak soils. Due to the favorable properties of foamed concrete, many studies have analyzed its strength and its mechanical, thermal, and acoustic properties. However, these studies do not cover fracture energy, the core factor governing damage and fracture mechanisms; only a limited number of publications can be found in the literature. This paper presents the results of an experimental investigation and a numerical campaign on foamed concrete based on three-point bending tests of beams with an initial notch. The first part of the paper presents the results of a series of static loading tests performed to investigate the fracture properties of foamed concrete of varying density. Beam specimens with dimensions of 100×100×840 mm with a central notch were tested in three-point bending. Subsequently, the remaining halves of the specimens, with dimensions of 100×100×420 mm, were tested again as un-notched beams in the same set-up with a reduced distance between supports. The tests were performed in a hydraulic displacement-controlled testing machine with a load capacity of 5 kN. Apart from measuring the load and mid-span displacement, the crack mouth opening displacement (CMOD) was monitored. Based on the load-displacement curves of the notched beams, the values of fracture energy and tensile stress at failure were calculated. The flexural tensile strength was obtained on the un-notched beams with dimensions of 100×100×420 mm. Moreover, cube specimens of 150×150×150 mm were tested in compression to determine the compressive strength. 
The second part of the paper deals with a numerical investigation of the fracture behavior of the beams with an initial notch presented in the first part. The Extended Finite Element Method (XFEM) was used to simulate and analyze the damage and fracture process. The influence of meshing and of variation in mechanical properties on the results was investigated. The numerical models correctly simulate the behavior of the beams observed during three-point bending. The numerical results show that XFEM can be used to simulate different fracture toughnesses of foamed concrete and different fracture types. Using XFEM and computer simulation technology allows a reliable approximation of the load-bearing capacity and damage mechanisms of beams made of foamed concrete, which provides a foundation for realistic structural applications.
Keywords: foamed concrete, fracture energy, three-point bending, XFEM
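In the work-of-fracture method commonly used for notched-beam tests on concrete (e.g., the RILEM recommendation), the fracture energy follows from the area under the load-displacement curve divided by the ligament area. A minimal sketch with made-up data points (not the measured curves from this study):

```python
# Sketch: fracture energy G_F from a three-point bending test on a notched
# beam via the work-of-fracture method: G_F = W0 / (B * (D - a0)), where W0
# is the area under the load-displacement curve, B the beam width, D the
# beam depth, and a0 the notch depth. The data points are illustrative.

def trapezoid_area(displ_m, load_n):
    """Work of fracture W0: area under the load-displacement curve (J)."""
    return sum(0.5 * (load_n[i] + load_n[i + 1]) * (displ_m[i + 1] - displ_m[i])
               for i in range(len(load_n) - 1))

def fracture_energy(displ_m, load_n, width_m, depth_m, notch_m):
    """G_F in J/m^2 for a beam of width B, depth D, and notch depth a0."""
    ligament_area = width_m * (depth_m - notch_m)
    return trapezoid_area(displ_m, load_n) / ligament_area

# Illustrative softening curve for a 100 x 100 mm beam with a 30 mm notch
displ = [0.0, 0.1e-3, 0.2e-3, 0.4e-3, 0.8e-3]  # mid-span displacement (m)
load = [0.0, 400.0, 250.0, 100.0, 0.0]          # load (N)
print(round(fracture_energy(displ, load, 0.1, 0.1, 0.03), 2))  # J/m^2
```

In an XFEM model, this same G_F (together with the tensile strength) parameterizes the cohesive damage law, which is why calibrating it experimentally matters.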
Procedia PDF Downloads 300
458 Computational Homogenization of Thin Walled Structures: On the Influence of the Global vs Local Applied Plane Stress Condition
Authors: M. Beusink, E. W. C. Coenen
Abstract:
The increased application of novel structural materials, such as high-grade asphalt, concrete, and laminated composites, has sparked the need for a better understanding of the often complex, non-linear mechanical behavior of such materials. The effective macroscopic mechanical response is generally dependent on the applied load path. Moreover, it is also significantly influenced by the microstructure of the material, e.g., embedded fibers, voids, and/or grain morphology. At present, multiscale techniques are widely adopted to assess micro-macro interactions in a numerically efficient way. Computational homogenization techniques have been successfully applied over a wide range of engineering cases, e.g., cases involving first-order and second-order continua, thin shells, and cohesive zone models. Most of these homogenization methods rely on Representative Volume Elements (RVEs), which model the relevant microstructural details in a confined volume. Imposed through kinematic constraints or boundary conditions, an RVE can be subjected to a microscopic load sequence. This provides the RVE's effective stress-strain response, which can serve as constitutive input for macroscale analyses. Simultaneously, such a study of an RVE gives insight into fine-scale phenomena such as microstructural damage and its evolution. Several authors have reported that the type of boundary conditions applied to the RVE affects the resulting homogenized stress-strain response. As a consequence, dedicated boundary conditions have been proposed to deal appropriately with this concern. For the specific case of a planar assumption for the analyzed structure, e.g., plane strain, axisymmetry, or plane stress, this assumption needs to be addressed consistently across all considered scales. Although a planar condition has been employed in many multiscale studies, its impact on the multiscale solution has not been explicitly investigated. 
This work therefore focuses on the influence of the planar assumption in multiscale modeling. In particular, the plane stress case is highlighted by proposing three different implementation strategies that are compatible with a first-order computational homogenization framework. The first method consists of applying classical plane stress theory at the microscale, whereas with the second method a generalized plane stress condition is assumed at the RVE level. For the third method, the plane stress condition is applied at the macroscale by requiring that the resulting macroscopic out-of-plane forces equal zero. These strategies are assessed through a numerical study of a thin-walled structure, and the resulting effective macroscale stress-strain responses are compared. It is shown that there is a clear influence of the length scale at which the planar condition is applied.
Keywords: first-order computational homogenization, planar analysis, multiscale, microstructures
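For an isotropic linear-elastic material, applying the classical plane stress condition (as in the first strategy) amounts to statically condensing the out-of-plane direction out of the 3D stiffness. A minimal sketch, with illustrative material parameters, showing that the condensed in-plane modulus recovers the textbook plane-stress value E/(1 - ν²):

```python
import math

def lame_constants(E, nu):
    """Lame constants of an isotropic linear-elastic material."""
    lam = E * nu / ((1 + nu) * (1 - 2 * nu))
    mu = E / (2 * (1 + nu))
    return lam, mu

def condensed_inplane_stiffness(E, nu):
    """Statically condense sigma_33 = 0 out of the 3D isotropic stiffness:
    C11_ps = C11 - C13^2 / C33, with C11 = C33 = lam + 2*mu and C13 = lam."""
    lam, mu = lame_constants(E, nu)
    c11 = lam + 2 * mu
    c13 = lam
    return c11 - c13 ** 2 / c11

E, nu = 2.0e9, 0.2  # illustrative stiffness (Pa) and Poisson ratio
plane_stress_modulus = E / (1 - nu ** 2)  # textbook plane stress result
print(math.isclose(condensed_inplane_stiffness(E, nu), plane_stress_modulus))
```

The second and third strategies differ in where this condensation effectively happens: per RVE (generalized plane stress) or only at the macroscale via the zero out-of-plane force requirement, which is exactly the length-scale dependence the study investigates.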
Procedia PDF Downloads 233
457 Development of Intellectual Property Information Services in Zimbabwe’s University Libraries: Assessing the Current Status and Mapping the Future Direction
Authors: Jonathan Munyoro, Takawira Machimbidza, Stephen Mutula
Abstract:
The study investigates the current status of Intellectual Property (IP) information services in Zimbabwe's university libraries. Specifically, the study assesses the current IP information services offered in Zimbabwe’s university libraries, identifies challenges to the development of comprehensive IP information services in Zimbabwe’s university libraries, and suggests solutions for the development of IP information services in Zimbabwe’s university libraries. The study is born out of a realisation that research on IP information services in university libraries has received little attention, especially in developing country contexts, despite the fact that there are calls for heightened participation of university libraries in IP information services. In Zimbabwe, the launch of the National Intellectual Property Policy and Implementation Strategy 2018-2022 and the introduction of the Education 5.0 concept are set to significantly change the IP landscape in the country. Education 5.0 places more emphasis on innovation and industrialisation (in addition to teaching, community service, and research), and has the potential to shift the focus and level of IP output produced in higher and tertiary education institutions beyond copyrights and more towards commercially exploited patents, utility models, and industrial designs. The growing importance of IP commercialisation in universities creates a need for appropriate IP information services to assist students, academics, researchers, administrators, start-ups, entrepreneurs, and inventors. The critical challenge for university libraries is to reposition themselves and remain relevant in the new trajectory. Designing specialised information services to support increased IP generation and commercialisation appears to be an opportunity for university libraries to stay relevant in the knowledge economy. 
However, IP information services in Zimbabwe’s universities appear to be incomplete and focused mostly on assisting with research publications and copyright-related activities. Research on the existing status of IP services in university libraries in Zimbabwe is therefore necessary to help identify gaps and provide solutions in order to stimulate the growth of new forms of such services. The study employed a quantitative approach. An online questionnaire was administered to 57 academic librarians from 15 university libraries. Findings show that the current focus of the surveyed institutions is on providing scientific research support services (15); disseminating/sharing university research output (14); and copyright activities (12). More specialised IP information services such as IP education and training, patent information services, IP consulting services, IP online service platforms, and web-based IP information services are largely unavailable in Zimbabwean university libraries. Results reveal that the underlying challenge in the development of IP information services in Zimbabwe's university libraries is insufficient IP knowledge among academic librarians, which is exacerbated by inadequate IP management frameworks in university institutions. The study proposes a framework for the entrenchment of IP information services in Zimbabwe's university libraries.
Keywords: academic libraries, information services, intellectual property, IP knowledge, university libraries, Zimbabwe
Procedia PDF Downloads 154
456 Care Experience of a Female Breast Cancer Patient Undergoing Modified Radical Mastectomy
Authors: Ting-I Lin
Abstract:
Purpose: This article explores the care experience of a 34-year-old female breast cancer patient who was admitted to the intensive care unit after undergoing a modified radical mastectomy. The patient discovered a lump in her right breast during a self-examination and, after mammography and ultrasound-guided biopsy, was diagnosed with a malignant tumor in the right breast. The tumor measured 1.5 x 1.4 x 2 cm, and the patient underwent a modified radical mastectomy. Postoperatively, she exhibited feelings of inferiority due to changes in her appearance. Method: During the care period, we engaged in conversations, observations, and active listening, using Gordon's Eleven Functional Health Patterns for a comprehensive assessment. In collaboration with the critical care team, a psychologist, and an oncology case manager, we conducted an interdisciplinary discussion and reached a consensus on key nursing issues. These included pain related to postoperative tumor excision and disturbed body image due to changes in appearance after surgery. Result: During the care period, a private space was provided to encourage the patient to express her feelings about her altered body image. Communication was conducted through active listening and a non-judgmental approach. The patient's anxiety level, as measured by the depression and anxiety scale, decreased from moderate to mild, and she was able to sleep for 6-8 hours at night. The oncology case manager was invited to provide education on breast reconstruction using breast models and videos to both the patient and her husband. This helped rebuild the patient's confidence. With the patient's consent, a support group was arranged where a peer with a similar experience shared her journey, offering emotional support and encouragement. This helped alleviate the psychological stress and shock caused by the cancer diagnosis. 
Additionally, pain management was achieved by adjusting the dosage of analgesics (Ultracet 37.5 mg/325 mg, 1# Q6H PO), along with distraction techniques and acupressure therapy. These interventions helped the patient relax and alleviated discomfort, maintaining her pain score at a manageable level of 3, indicating mild pain. Conclusion: Disturbance in body image can cause significant psychological stress for patients. Through support group discussions, encouraging the patient to express her feelings, and providing appropriate education on breast reconstruction and dressing techniques, the patient's self-concept was positively reinforced and her emotions were stabilized. This led to renewed self-worth and confidence.
Keywords: breast cancer, modified radical mastectomy, acupressure therapy, Gordon's 11 functional health patterns
Procedia PDF Downloads 27
455 A Theragnostic Approach for Alzheimer’s Disease Focused on Phosphorylated Tau
Authors: Tomás Sobrino, Lara García-Varela, Marta Aramburu-Núñez, Mónica Castro, Noemí Gómez-Lado, Mariña Rodríguez-Arrizabalaga, Antía Custodia, Juan Manuel Pías-Peleteiro, José Manuel Aldrey, Daniel Romaus-Sanjurjo, Ángeles Almeida, Pablo Aguiar, Alberto Ouro
Abstract:
Introduction: Alzheimer’s disease (AD) and other tauopathies are primary causes of dementia, causing progressive cognitive deterioration that entails serious repercussions for the patients' performance of daily tasks. Currently, there is no effective approach for the early diagnosis and treatment of AD and tauopathies. This study suggests a theragnostic approach based on the importance of phosphorylated tau protein (p-Tau) in the early pathophysiological processes of AD. We have developed a novel theragnostic monoclonal antibody (mAb) to provide both diagnostic and therapeutic effects. Methods/Results: We have developed a p-Tau mAb, which was conjugated with deferoxamine for radiolabeling with Zirconium-89 (89Zr) for PET imaging, as well as with fluorescent dyes for immunofluorescence assays. The p-Tau mAb was evaluated in vitro for toxicity by MTT assay, LDH activity, propidium iodide/Annexin V assay, caspase-3, and mitochondrial membrane potential (MMP) assays in both a mouse endothelial cell line (bEnd.3) and cortical primary neuron cultures. Importantly, no toxic effects were detected up to p-Tau mAb concentrations greater than 100 µg/mL. In vivo experiments in the tauopathy mouse model (PS19) show that the 89Zr-pTau-mAb and 89Zr-Fragments-pTau-mAb are stable in circulation for up to 10 days without toxic effects. However, less than 0.2% reached the brain, so further strategies have to be designed for crossing the blood-brain barrier (BBB). Moreover, an intraparenchymal treatment strategy was carried out. The PS19 mice were implanted with osmotic pumps (Alzet 1004) at two different time points, at 4 and 7 months, to provide controlled release, for one month each, of the B6 antibody or the IgG1 control antibody. We demonstrated that B6-treated mice maintained their motor and memory abilities significantly better than IgG1-treated mice. In addition, we observed a significant reduction in p-Tau deposits in the brain.
Conclusions/Discussion: A theragnostic pTau-mAb was developed. Moreover, we demonstrated that our p-Tau mAb recognizes very early pathological forms of p-Tau by non-invasive techniques, such as PET. In addition, the p-Tau mAb has no toxic effects, either in vitro or in vivo. Although the p-Tau mAb is stable in circulation, only 0.2% reaches the brain. However, direct intraventricular treatment significantly reduces cognitive impairment in Alzheimer's animal models, as well as the accumulation of toxic p-Tau species.
Keywords: Alzheimer's disease, theragnosis, tau, PET, immunotherapy, tauopathies
Procedia PDF Downloads 68
454 On Panel Data Analysis of Factors on Economic Advances in Some African Countries
Authors: Ayoola Femi J., Kayode Balogun
Abstract:
In some African countries, increases in Gross Domestic Product (GDP) have not translated into the real development expected by the common man in his household. For decades, the debate over economic growth and development has been a nagging issue. The focus of this study is to analyse the effects of economic determinants/factors on economic advances in some African countries by employing panel data analysis. Yearly (1990-2013) data were obtained from the World Economic Outlook database of the International Monetary Fund (IMF) to probe the effects of these variables on growth rates in selected African countries: Nigeria, Algeria, Angola, Benin, Botswana, Burundi, Cape Verde, Cameroon, Central African Republic, Chad, Republic of Congo, Côte d'Ivoire, Egypt, Equatorial Guinea, Ethiopia, Gabon, Ghana, Guinea-Bissau, Kenya, Lesotho, Madagascar, Mali, Mauritius, Morocco, Mozambique, Niger, Rwanda, Senegal, Seychelles, Sierra Leone, South Africa, Sudan, Swaziland, Tanzania, Togo, Tunisia, and Uganda. The effects of six macroeconomic variables on GDP were critically examined. We used the GDP of the 37 countries as our dependent variable, and the six independent variables were: total investment (totinv), inflation (inf), population (popl), current account balance (cab), volume of imports of goods and services (vimgs), and volume of exports of goods and services (vexgs). The results of our analysis show that total investment, population, and volume of exports of goods and services strongly affect economic growth. We noticed that the population of the selected countries positively affects GDP, while total investment and volume of exports negatively affect GDP. By contrast, the contributions of inflation, current account balance, and volume of imports of goods and services to GDP are insignificant. The results of this study would be useful for individual African governments in developing suitable and appropriate economic policies and strategies. They will also help investors understand the economic nature and viability of Africa as a continent as well as of its individual countries.
Keywords: African countries, economic growth and development, gross domestic products, static panel data models
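As a rough illustration of the static panel estimation the abstract relies on, a fixed-effects (within) estimator can be sketched in a few lines of NumPy. The data, country effects, and slope values below are invented for the sketch, not the study's figures.

```python
import numpy as np

def within_ols(y, X, entities):
    """Fixed-effects (within) estimator: demean y and X within each
    country, then run pooled OLS on the demeaned data."""
    y = np.array(y, dtype=float)
    X = np.array(X, dtype=float)
    entities = np.asarray(entities)
    for e in np.unique(entities):
        m = entities == e
        y[m] -= y[m].mean()
        X[m] -= X[m].mean(axis=0)
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta

# Hypothetical panel: 3 countries x 8 years, two regressors
# (say, total investment and population), known slopes (2, -1).
rng = np.random.default_rng(0)
entities = np.repeat([0, 1, 2], 8)
X = rng.normal(size=(24, 2))
country_effect = np.array([10.0, -5.0, 3.0])[entities]
gdp = country_effect + X @ np.array([2.0, -1.0])

beta = within_ols(gdp, X, entities)
print(beta)
```

Because the country effects are constant within each entity, demeaning removes them exactly and the slopes are recovered.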
Procedia PDF Downloads 475
453 Modeling Engagement with Multimodal Multisensor Data: The Continuous Performance Test as an Objective Tool to Track Flow
Authors: Mohammad H. Taheri, David J. Brown, Nasser Sherkat
Abstract:
Engagement is one of the most important factors in determining successful outcomes and deep learning in students. Existing approaches to detecting student engagement involve periodic human observations that are subject to inter-rater reliability issues. Our solution uses real-time multimodal multisensor data, labeled by objective performance outcomes, to infer the engagement of students. The study involves four students with a combined diagnosis of cerebral palsy and a learning disability who took part in a 3-month trial over 59 sessions. Multimodal multisensor data were collected while they participated in a continuous performance test. Eye gaze, electroencephalogram, body pose, and interaction data were used to create a model of student engagement through objective labeling from the continuous performance test outcomes. In order to achieve this, a type of continuous performance test is introduced, the Seek-X type. Nine features were extracted, including high-level handpicked compound features. Using leave-one-out cross-validation, a series of different machine learning approaches were evaluated. Overall, the random forest classification approach achieved the best classification results. Using random forest, 93.3% classification accuracy for engagement and 42.9% accuracy for disengagement were achieved. We compared these results to outcomes from different models: AdaBoost, decision tree, k-Nearest Neighbor, naïve Bayes, neural network, and support vector machine. We showed that using a multisensor approach achieved higher accuracy than using features from any reduced set of sensors. We found that using high-level handpicked features can improve the classification accuracy in every sensor mode. Our approach is robust to both sensor fallout and occlusions. The single most important sensor feature for the classification of engagement and distraction was shown to be eye gaze.
It has been shown that we can accurately predict the level of engagement of students with learning disabilities in a real-time approach that is not subject to inter-rater reliability issues or human observation, and is not reliant on a single mode of sensor input. This will help teachers design interventions for a heterogeneous group of students, where teachers cannot possibly attend to each of their individual needs. Our approach can be used to identify those with the greatest learning challenges so that all students are supported to reach their full potential.
Keywords: affective computing in education, affect detection, continuous performance test, engagement, flow, HCI, interaction, learning disabilities, machine learning, multimodal, multisensor, physiological sensors, student engagement
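The evaluation pipeline the abstract describes (objective labels, a random forest classifier, leave-one-out cross-validation) can be sketched with scikit-learn. The feature values below are invented stand-ins for the gaze/EEG/pose/interaction features, not the study's dataset.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import LeaveOneOut, cross_val_score

rng = np.random.default_rng(0)
# Invented features standing in for gaze, EEG, pose, and interaction measures;
# label 1 = engaged window, 0 = disengaged, as judged by CPT performance.
X = np.vstack([rng.normal(0.0, 1.0, (30, 4)),
               rng.normal(2.0, 1.0, (30, 4))])
y = np.array([0] * 30 + [1] * 30)

clf = RandomForestClassifier(n_estimators=100, random_state=0)
# Leave-one-out: each of the 60 windows is held out once as the test case.
scores = cross_val_score(clf, X, y, cv=LeaveOneOut())
print(f"LOO accuracy: {scores.mean():.3f}")
```

Leave-one-out is a natural choice here because the per-student sample sizes are small; each prediction is made on a window the model never saw during training.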
Procedia PDF Downloads 93
452 Predicting Mortality among Acute Burn Patients Using BOBI Score vs. FLAMES Score
Authors: S. Moustafa El Shanawany, I. Labib Salem, F. Mohamed Magdy Badr El Dine, H. Tag El Deen Abd Allah
Abstract:
Thermal injuries remain a global health problem and a common issue encountered in forensic pathology. They are a devastating cause of morbidity and mortality in children and adults, especially in developing countries, causing permanent disfigurement, scarring, and grievous hurt. Burns have always been a matter of legal concern in cases of suicidal burns, self-inflicted burns for false accusation, and homicidal attempts. Assessment of burn injuries, as well as rating permanent disabilities and disfigurement following thermal injuries for the benefit of compensation claims, represents a challenging problem. This necessitates the development of reliable scoring systems to yield an expected likelihood of permanent disability or fatal outcome following burn injuries. The study was designed to identify the risk factors of mortality in acute burn patients and to evaluate the applicability of the FLAMES (Fatality by Longevity, APACHE II score, Measured Extent of burn, and Sex) and BOBI (Belgian Outcome in Burn Injury) model scores in predicting the outcome. The study was conducted on 100 adult patients with acute burn injuries admitted to the Burn Unit of Alexandria Main University Hospital, Egypt, from October 2014 to October 2015. Victims were examined after obtaining informed consent, and the data were collected in specially designed sheets including demographic data, burn details, and any associated inhalation injury. Each burn patient was assessed using both the BOBI and FLAMES scoring systems. The results of the study show that the mean age of patients was 35.54±12.32 years. Males outnumbered females (55% and 45%, respectively). Most patients were accidentally burnt (95%), whereas suicidal burns accounted for the remaining 5%. Flame burn was recorded in 82% of cases.
In addition, 8% of patients sustained burns to more than 60% of the total body surface area (TBSA), 19% of patients needed mechanical ventilation, and 19% of burnt patients died, either from wound sepsis, multi-organ failure, or pulmonary embolism. The mean length of hospital stay was 24.91±25.08 days. The mean BOBI score was 1.07±1.27, and that of the FLAMES score was -4.76±2.92. The FLAMES score demonstrated an area under the receiver operating characteristic (ROC) curve of 0.95, which was significantly higher than that of the BOBI score (0.883). A statistically significant association was revealed between both predictive models and the outcome. The study concluded that both scoring systems were beneficial in predicting mortality in acutely burnt patients. However, the FLAMES score could be applied with a higher level of accuracy.
Keywords: BOBI, burns, FLAMES, scoring systems, outcome
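The ROC comparison behind the FLAMES-vs-BOBI result can be illustrated with a tiny hand-rolled AUC via the Mann-Whitney formulation: the probability that a randomly chosen fatal case scores higher than a randomly chosen survivor. The ten patients and their scores below are fabricated for illustration, not study data.

```python
def auc(scores, labels):
    """ROC AUC via the Mann-Whitney U statistic: the fraction of
    (positive, negative) pairs where the positive scores higher
    (ties count half)."""
    pos = [s for s, l in zip(scores, labels) if l == 1]
    neg = [s for s, l in zip(scores, labels) if l == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Fabricated severity scores for 10 patients (label 1 = died)
labels = [0, 0, 0, 0, 0, 1, 1, 1, 0, 1]
flames = [-8, -7, -6, -5, -4, -1, 0, 1, -3, 2]  # higher = worse prognosis
bobi   = [0, 0, 1, 1, 2, 2, 3, 3, 1, 4]

print(auc(flames, labels), auc(bobi, labels))
```

In this toy data the FLAMES-style score separates the fatal cases perfectly (AUC 1.0), while the coarser integer BOBI-style score loses a little to ties, mirroring the direction of the reported 0.95 vs. 0.883 comparison.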
Procedia PDF Downloads 334
451 Quantifying the Effects of Canopy Cover and Cover Crop Species on Water Use Partitioning in Micro-Sprinkler Irrigated Orchards in South Africa
Authors: Zanele Ntshidi, Sebinasi Dzikiti, Dominic Mazvimavi
Abstract:
South Africa is a dry country, and yet it is ranked as the 8th largest exporter of fresh apples (Malus domestica) globally. Prime apple producing regions are in the Eastern and Western Cape Provinces of the country, where all the fruit is grown under irrigation. Climate change models predict increasingly drier future conditions in these regions, and the frequency and severity of droughts are expected to increase. For the sustainability and growth of the fruit industry, it is important to minimize non-beneficial water losses from the orchard floor. The aims of this study were, firstly, to compare the water use of cover crop species used in South African orchards, for which there is currently no information. The second aim was to investigate how orchard water use (evapotranspiration) was partitioned into beneficial (tree transpiration) and non-beneficial (orchard floor evaporation) water uses for micro-sprinkler irrigated orchards with different canopy covers. This information is important in order to explore opportunities to minimize non-beneficial water losses. Six cover crop species (four exotic and two indigenous) were grown in 2 L pots in a greenhouse. Cover crop transpiration was measured using the gravimetric method on clear days. To establish how water use was partitioned in orchards, evapotranspiration (ET) was measured using an open path eddy covariance system, while tree transpiration was measured hourly throughout the season (October to June) on six trees per orchard using the heat ratio sap flow method. On selected clear days, soil evaporation was measured hourly from sunrise to sunset using six micro-lysimeters situated at different wet/dry and sun/shade positions on the orchard floor. Transpiration of cover crops was measured using miniature (2 mm Ø) stem heat balance sap flow gauges. The greenhouse study showed that exotic cover crops had significantly higher (p < 0.01) average transpiration rates (~3.7 L/m²/d) than the indigenous species (~2.2 L/m²/d).
In young non-bearing orchards, orchard floor evaporative fluxes accounted for more than 60% of orchard ET, while this ranged from 10 to 30% in mature orchards with a high canopy cover. While exotic cover crops are preferred by most farmers, this study shows that they use larger quantities of water than indigenous species. This in turn contributes to a larger orchard floor evaporation flux. In young orchards, non-beneficial losses can be minimized by adopting drip or short-range micro-sprinkler methods that reduce the wetted soil fraction, thereby conserving water.
Keywords: evapotranspiration, sap flow, soil evaporation, transpiration
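The partitioning arithmetic behind those percentages is simple: whatever part of eddy-covariance ET is not accounted for by sap-flow transpiration is attributed to the orchard floor. The daily values below are hypothetical, chosen only to reproduce the reported young/mature contrast.

```python
# Hypothetical daily water balance for two orchards (mm/day), illustrating how
# evapotranspiration (ET) splits into tree transpiration and floor evaporation.
et_young, t_young = 3.0, 1.1      # young orchard: eddy covariance ET, sap-flow T
et_mature, t_mature = 4.5, 3.6    # mature orchard with high canopy cover

def floor_fraction(et, transpiration):
    """Non-beneficial share of ET: orchard floor evaporation / total ET."""
    return (et - transpiration) / et

print(f"young:  {floor_fraction(et_young, t_young):.0%} of ET from orchard floor")
print(f"mature: {floor_fraction(et_mature, t_mature):.0%} of ET from orchard floor")
```

With these invented numbers the young orchard loses about 63% of ET from the floor and the mature one about 20%, consistent with the >60% and 10-30% ranges reported.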
Procedia PDF Downloads 387
450 From Preoccupied Attachment Pattern to Depression: Serial Mediation Model on the Female Sample
Authors: Tatjana Stefanovic Stanojevic, Milica Tosic Radev, Aleksandra Bogdanovic
Abstract:
Depression is considered to be a leading cause of death and disability in the female population, which is why understanding the dynamics of the onset of depressive symptomatology is important. A review of the literature indicates a relationship between depressive symptoms and insecure attachment patterns, but very few studies have examined the mechanism underlying this relation. The aim of the study was to examine the pathway from the preoccupied attachment pattern to depressive symptomatology, as well as to test the mediating effects of mentalization, social anxiety, and rumination in this relationship using a serial mediation model. The research was carried out on a geographical cluster sample from the general population of Serbia within the project ‘Indicators and models of family and work roles harmonization’, funded by the Ministry of Education, Science and Technological Development of the Republic of Serbia. This research was carried out on a subsample of 791 working-age female adults from 37 urban and rural locations distributed through 20 administrative districts of Serbia. The respondents filled in a battery of instruments, including the Relationship Questionnaire - Clinical Version (RQ-CV), the Mentalization Scale (MentS), the Scale of Social Anxiety (SA), the Ruminative Thought Style Questionnaire (RTSQ), and the Patient Health Questionnaire (PHQ-9). The results confirm our assumption that the total indirect effect of the preoccupied attachment pattern on depressive symptoms is significant across all mediators separately. More importantly, this effect is still present in a model with a sequential mediator relationship, where social anxiety, rumination, and mentalization were treated as serial mediators of the relationship between preoccupied attachment and depressive symptoms (estimated indirect effect=0.004, bootstrapped 95% CI=0.002 to 0.007).
Our findings suggest that there is a significant specific indirect effect of the preoccupied attachment pattern on depressive symptoms, occurring through mentalization, social anxiety, and rumination: preoccupied attachment decreases self-related mentalization, which in turn increases social anxiety and rumination, culminating in depressive symptoms as a final consequence. The finding that the path from the preoccupied attachment pattern to depressive symptoms is typical in women is understandable from the perspective of both evolutionary and culturally conditioned gender differences. The practical implications of the study are reflected in the recommendations for prevention and timely psychotherapeutic response among preoccupied women with depressive symptomatology. Treatment of this specific group of depressed patients should focus on strengthening mentalization, helping patients learn to accept and understand themselves better, reducing anxiety in situations where mistakes are visible to others, and replacing the rumination strategy with more constructive coping strategies.
Keywords: preoccupied attachment, depression, serial mediation model, mentalization, rumination
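The bootstrapped serial indirect effect reported above can be illustrated in miniature: simulate the four-step chain, estimate each successive path, multiply the slopes, and bootstrap a percentile confidence interval. Everything below (sample size, path coefficients, the simple-regression shortcut for each path) is an invented sketch, not the study's model or data.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 500
# Simulated chain (coefficients invented): preoccupied attachment lowers
# self-related mentalization, which raises social anxiety, then rumination,
# then depressive symptoms.
X  = rng.normal(size=n)                        # preoccupied attachment
M1 = -0.4 * X + rng.normal(size=n)             # mentalization
M2 = -0.5 * M1 + rng.normal(size=n)            # social anxiety
M3 =  0.5 * M2 + rng.normal(size=n)            # rumination
Y  =  0.4 * M3 + 0.1 * X + rng.normal(size=n)  # depressive symptoms

def slope(x, y):
    return np.cov(x, y)[0, 1] / np.var(x, ddof=1)

def serial_indirect(idx):
    """Product of the four successive path slopes (simple-regression sketch,
    not the full covariate-adjusted serial mediation model)."""
    return (slope(X[idx], M1[idx]) * slope(M1[idx], M2[idx]) *
            slope(M2[idx], M3[idx]) * slope(M3[idx], Y[idx]))

boot = [serial_indirect(rng.integers(0, n, n)) for _ in range(1000)]
lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"bootstrapped 95% CI for the serial indirect effect: [{lo:.3f}, {hi:.3f}]")
```

Because two of the simulated paths are negative, their product is positive, echoing how a decrease in mentalization can still produce a positive overall indirect effect on depressive symptoms.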
Procedia PDF Downloads 140
449 Geoinformation Technology of Agricultural Monitoring Using Multi-Temporal Satellite Imagery
Authors: Olena Kavats, Dmitry Khramov, Kateryna Sergieieva, Vladimir Vasyliev, Iurii Kavats
Abstract:
Geoinformation technologies for space agromonitoring support operative decision making in managing the agricultural sector of the economy. Existing technologies use satellite images in the optical range of the electromagnetic spectrum. Time series of optical images often contain gaps due to the presence of clouds and haze. A geoinformation technology has been created that fills gaps in time series of optical images (Sentinel-2, Landsat-8, PROBA-V, MODIS) with radar survey data (Sentinel-1) and uses information about the agrometeorological conditions of the growing season for individual monitoring years. The technology supports crop classification and mapping for the spring-summer (winter and spring crops) and autumn-winter (winter crops) vegetation periods, monitoring of the seasonal dynamics of crop state, and crop yield forecasting. Crop classification is based on supervised classification algorithms and takes into account the peculiarities of crop growth at different vegetation stages (dates of sowing, emergence, active vegetation, and harvesting) and agricultural land state characteristics (row spacing, seedling density, etc.). A catalog of samples of the main agricultural crops (Ukraine) was created, and crop spectral signatures were calculated, with preliminary removal of row spacing, cloud cover, and cloud shadows, in order to construct time series of crop growth characteristics. The obtained data are used in tracking grain crop growth and in timely detection of deviations of growth trends from reference samples of a given crop for a selected date. Statistical models of crop yield forecasting are created as linear and nonlinear relationships between crop yield indicators and crop state characteristics (temperature, precipitation, vegetation indices, etc.). Predicted values of grain crop yield are estimated with an accuracy of up to 95%.
The developed technology was used for monitoring agricultural areas in a number of regions of Great Britain and Ukraine using the EOS Crop Monitoring Platform (https://crop-monitoring.eos.com). The obtained results show that the joint use of Sentinel-1 and Sentinel-2 images improves the separation of winter crops (rapeseed, wheat, barley) in the early stages of vegetation (October-December). It also successfully separates soybean, corn, and sunflower sowing areas, which are quite similar in their spectral characteristics.
Keywords: geoinformation technology, crop classification, crop yield prediction, agricultural monitoring, EOS Crop Monitoring Platform
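The core gap-filling idea (regress cloud-free optical observations against all-weather radar observations, then predict the optical value on cloudy dates) can be sketched with a toy season. The growth curve, the linear SAR-NDVI relation, and the noise level below are all invented assumptions, not the technology's actual model.

```python
import numpy as np

rng = np.random.default_rng(0)
# Invented season: NDVI every 10 days from optical imagery (some dates cloudy),
# with SAR backscatter (dB) available on every date regardless of cloud.
days = np.arange(0, 240, 10)
ndvi_true = 0.2 + 0.6 * np.exp(-((days - 120) / 60.0) ** 2)  # crop growth curve
sar = -15.0 + 10.0 * ndvi_true + rng.normal(0, 0.3, days.size)
ndvi = ndvi_true.copy()
cloudy = np.zeros(days.size, dtype=bool)
cloudy[[3, 4, 10, 11, 12]] = True
ndvi[cloudy] = np.nan  # optical gaps

# Fit NDVI ~ SAR on clear dates, then predict NDVI for the cloudy dates.
A = np.vstack([sar[~cloudy], np.ones((~cloudy).sum())]).T
coef, *_ = np.linalg.lstsq(A, ndvi[~cloudy], rcond=None)
ndvi[cloudy] = coef[0] * sar[cloudy] + coef[1]

err = np.abs(ndvi[cloudy] - ndvi_true[cloudy]).max()
print(f"max gap-fill error in NDVI units: {err:.3f}")
```

In practice the relation between backscatter and vegetation indices is crop- and stage-dependent, which is presumably why the technology also conditions on agrometeorological data; the linear fit here is only the simplest possible stand-in.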
Procedia PDF Downloads 453
448 Predictive Pathogen Biology: Genome-Based Prediction of Pathogenic Potential and Countermeasures Targets
Authors: Debjit Ray
Abstract:
Horizontal gene transfer (HGT) and recombination lead to the emergence of bacterial antibiotic resistance and pathogenic traits. HGT events can be identified by comparing a large number of fully sequenced genomes across a species or genus to define the phylogenetic range of HGT and find potential sources of new resistance genes. In-depth comparative phylogenomics can also identify subtle genome or plasmid structural changes or mutations associated with phenotypic changes. Comparative phylogenomics requires accurately sequenced, complete, and properly annotated genomes of the organism. Assembling closed genomes requires additional mate-pair reads or “long read” sequencing data to accompany short-read paired-end data. To bring down the cost and time required to produce assembled genomes and annotate genome features that inform drug resistance and pathogenicity, we are analyzing genome-assembly performance for data from the Illumina NextSeq, which has faster throughput than the Illumina HiSeq (~1-2 days versus ~1 week) and, compared to the Illumina MiSeq, shorter reads (150bp paired-end versus 300bp paired-end) but higher capacity (150-400M reads per run versus ~5-15M). Bioinformatics improvements are also needed to make rapid, routine production of complete genomes a reality. Modern assemblers such as SPAdes 3.6.0, running on a standard Linux blade, are capable of converting mixes of reads from different library preps into high-quality assemblies with only a few gaps in a few hours. Remaining breaks in scaffolds, generally due to repeats (e.g., rRNA genes), are addressed by our software for gap closure, which avoids custom PCR or targeted sequencing. Our goal is to improve the understanding of the emergence of pathogenesis using sequencing, comparative genomics, and machine learning analysis of ~1000 pathogen genomes.
Machine learning algorithms will be used to digest the diverse features (changes in virulence genes, recombination, horizontal gene transfer, patient diagnostics). Temporal data and evolutionary models can thus determine whether the origin of a particular isolate is likely to have been the environment (could it have evolved from previous isolates?). This can be useful for comparing differences in virulence along or across the tree. More intriguingly, it can test whether there is a direction to virulence strength. This would open new avenues in the prediction of uncharacterized clinical bugs and of multidrug resistance evolution and pathogen emergence.
Keywords: genomics, pathogens, genome assembly, superbugs
Procedia PDF Downloads 196
447 Predicting Blockchain Technology Installation Cost in Supply Chain System through Supervised Learning
Authors: Hossein Havaeji, Tony Wong, Thien-My Dao
Abstract:
1. Research Problems and Research Objectives: A Blockchain Technology-enabled Supply Chain System (BT-enabled SCS) uses BT to drive SCS transparency, security, durability, and process integrity, as SCS data is not always visible, available, or trusted. The costs of operating BT in the SCS are a common problem in several organizations. These costs must be estimated, as they can impact existing cost control strategies. Accounting for system and deployment costs requires overcoming a basic hurdle: the costs of developing and running BT in an SCS are not yet clear in most cases. Many industries aiming to use BT pay special attention to BT installation cost, which has a direct impact on the total costs of the SCS. Predicting BT installation cost in the SCS may help managers decide whether BT offers an economic advantage. The first purpose of the research is to identify the main BT installation cost components in the SCS needed for deeper cost analysis; we then identify and categorize the main groups of cost components in more detail to utilize them in the prediction process. The second objective is to determine a suitable Supervised Learning technique for predicting the costs of developing and running BT in the SCS in a particular case study. The last aim is to investigate how the running BT cost can be incorporated into the total cost of the SCS. 2. Work Performed: Supervised Learning, applied successfully in various fields, is a method of framing the data, preprocessing it, and training the chosen model type. It is a learning model directed at predicting an outcome measurement from a set of unseen input data. The following steps were conducted to pursue the objectives of our study. The first step is a literature review to identify the different cost components of BT installation in the SCS.
Based on the literature review, we choose Supervised Learning methods suitable for BT installation cost prediction in the SCS. According to the literature review, Supervised Learning algorithms that provide a powerful tool to classify BT installation components and predict BT installation cost include the Support Vector Regression (SVR) algorithm, the Back Propagation (BP) neural network, and the Artificial Neural Network (ANN). Choosing a case study to feed data into the models constitutes the third step. Finally, we will propose the best predictive performance to find the minimum BT installation costs in the SCS. 3. Expected Results and Conclusion: This study aims to propose a cost prediction of BT installation in the SCS with the help of Supervised Learning algorithms. In a first attempt, we will select a case study in the field of BT-enabled SCS and then use Supervised Learning algorithms to predict BT installation cost in the SCS. We continue to find the best predictive performance for developing and running BT in the SCS. Finally, the paper will be presented at the conference.
Keywords: blockchain technology, blockchain technology-enabled supply chain system, installation cost, supervised learning
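One of the candidate methods named above, SVR, can be sketched with scikit-learn on a made-up cost dataset. The four cost components, their weights, and the cost units are hypothetical placeholders for whatever components the literature review identifies, not results from the paper's case study.

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR

rng = np.random.default_rng(0)
# Invented installation-cost components, scaled to [0, 1]: hardware nodes,
# developer effort, licensing/integration, and staff training.
X = rng.uniform(0.0, 1.0, (200, 4))
cost = (5.0 + 8.0 * X[:, 0] + 12.0 * X[:, 1] + 4.0 * X[:, 2] + 2.0 * X[:, 3]
        + rng.normal(0, 0.5, 200))  # hypothetical cost units

model = make_pipeline(StandardScaler(), SVR(C=10.0, epsilon=0.1))
model.fit(X[:150], cost[:150])   # train on 150 projects
pred = model.predict(X[150:])    # predict the held-out 50
mae = np.abs(pred - cost[150:]).mean()
print(f"hold-out MAE (hypothetical cost units): {mae:.2f}")
```

Standardizing the inputs before SVR matters because the RBF kernel is distance-based; the hold-out mean absolute error is one way to compare SVR against the BP and ANN alternatives on the same split.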
Procedia PDF Downloads 119
446 Evolutionary Advantages of Loneliness with an Agent-Based Model
Authors: David Gottlieb, Jason Yoder
Abstract:
The feeling of loneliness is not uncommon in modern society, and yet, there is a fundamental lack of understanding in its origins and purpose in nature. One interpretation of loneliness is that it is a subjective experience that punishes a lack of social behavior, and thus its emergence in human evolution is seemingly tied to the survival of early human tribes. Still, a common counterintuitive response to loneliness is a state of hypervigilance, resulting in social withdrawal, which may appear maladaptive to modern society. So far, no computational model of loneliness’ effect during evolution yet exists; however, agent-based models (ABM) can be used to investigate social behavior, and applying evolution to agents’ behaviors can demonstrate selective advantages for particular behaviors. We propose an ABM where each agent contains four social behaviors, and one goal-seeking behavior, letting evolution select the best behavioral patterns for resource allocation. In our paper, we use an algorithm similar to the boid model to guide the behavior of agents, but expand the set of rules that govern their behavior. While we use cohesion, separation, and alignment for simple social movement, our expanded model adds goal-oriented behavior, which is inspired by particle swarm optimization, such that agents move relative to their personal best position. Since agents are given the ability to form connections by interacting with each other, our final behavior guides agent movement toward its social connections. Finally, we introduce a mechanism to represent a state of loneliness, which engages when an agent's perceived social involvement does not meet its expected social involvement. This enables us to investigate a minimal model of loneliness, and using evolution we attempt to elucidate its value in human survival. Agents are placed in an environment in which they must acquire resources, as their fitness is based on the total resource collected. 
With these rules in place, we are able to run evolution under various conditions, including resource-rich environments and when disease is present. Our simulations indicate that there is strong selection pressure for social behavior under circumstances where there is a clear discrepancy between initial resource locations, and against social behavior when disease is present, mirroring hypervigilance. This not only provides an explanation for the emergence of loneliness but also reflects the diversity of responses to loneliness in the real world. In addition, there was evidence of a richness of social behavior when loneliness was present. By introducing just two resource locations, we observed a divergence in social motivation after agents became lonely, where one agent learned to move to the other, which was in a better resource position. The results and ongoing work from this project show that it is possible to glean insight into the evolutionary advantages of even simple mechanisms of loneliness. The model we developed has produced unexpected results and has led to more questions, such as the impact loneliness would have at a larger scale, or the effect of creating a set of rules governing interaction beyond adjacency.
Keywords: agent-based, behavior, evolution, loneliness, social
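The loneliness trigger described (perceived social involvement falling short of expected involvement) can be sketched as a minimal agent-based model. The movement rule, neighborhood radius, and resource position below are invented simplifications of the paper's boid-style rules, not a reimplementation of them.

```python
import random

random.seed(0)
GOAL = (5.0, 5.0)  # single resource location (hypothetical)

class Agent:
    def __init__(self):
        self.x = random.uniform(0, 10)
        self.y = random.uniform(0, 10)
        self.expected = 2   # expected social involvement (nearby peers)
        self.lonely = False

    def step(self, agents):
        near = [a for a in agents
                if a is not self and abs(a.x - self.x) + abs(a.y - self.y) < 3.0]
        # Loneliness engages when perceived involvement < expected involvement.
        self.lonely = len(near) < self.expected
        gx, gy = GOAL
        dx, dy = 0.2 * (gx - self.x), 0.2 * (gy - self.y)  # goal-seeking pull
        if self.lonely and near:
            # Social pull: a lonely agent also moves toward its nearest peer.
            peer = min(near, key=lambda a: abs(a.x - self.x) + abs(a.y - self.y))
            dx += 0.2 * (peer.x - self.x)
            dy += 0.2 * (peer.y - self.y)
        self.x += dx
        self.y += dy

agents = [Agent() for _ in range(12)]
for _ in range(50):
    for a in agents:
        a.step(agents)
mean_dist = sum(abs(a.x - 5) + abs(a.y - 5) for a in agents) / len(agents)
print(f"mean distance to resource after 50 steps: {mean_dist:.2f}")
```

With one resource the population ends up clustered at it, and loneliness only transiently redirects isolated agents toward peers; richer dynamics, like the divergence seen with two resources, need the full evolved behavior set.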
Procedia PDF Downloads 94

445 Mathematical Modeling of Nonlinear Process of Assimilation
Authors: Temur Chilachava
Abstract:
This work proposes a new nonlinear mathematical model describing the assimilation of a population speaking a less widespread language by two states with two different widespread languages, taking the demographic factor into account. The model considers three subjects: the population and government institutions of the state with the first widespread language, which use state and administrative resources to assimilate the third population speaking a less widespread language; the population and government institutions of the state with the second widespread language, acting likewise; and the third population (possibly a small state formation or an autonomy), subject to bilateral assimilation by the two more powerful states. We showed earlier that when the demographic factor of all three subjects is zero, the population with the less widespread language is completely assimilated by the states with the two widespread languages, and the result of assimilation (the redistribution of the assimilated population) depends on the initial population sizes and on the technological and economic capabilities of the assimilating states. The present model, which takes the demographic factor into account, assumes a natural decrease in the populations of the assimilating states and a natural increase in the population undergoing bilateral assimilation. For certain ratios between the coefficients of natural population change of the assimilating states and the assimilation coefficients, two first integrals are obtained for the nonlinear system of three differential equations. The cases of two powerful states assimilating the population of a small state formation (autonomy) are considered, for states with different population sizes and with both identical and different economic and technological capabilities.
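The abstract does not give the equations explicitly; one plausible form of the three-equation system described above, in our own notation (with $u_1, u_2$ the populations of the assimilating states, $u_3$ the population undergoing assimilation, $\alpha_i$ the natural-decrease coefficients, $\gamma$ the natural-increase coefficient, and $\beta_i$ the assimilation coefficients), is:

```latex
\begin{aligned}
\dot{u}_1 &= -\alpha_1 u_1 + \beta_1 u_1 u_3,\\
\dot{u}_2 &= -\alpha_2 u_2 + \beta_2 u_2 u_3,\\
\dot{u}_3 &= \gamma u_3 - \beta_1 u_1 u_3 - \beta_2 u_2 u_3.
\end{aligned}
```

With $u_2$ proportional to $u_1$, the first and third equations form a classical Lotka-Volterra predator-prey pair, consistent with the reduction the authors describe.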
It is shown that in the first case the problem reduces to a nonlinear system of two differential equations describing the classical predator-prey model, in which the role of prey is naturally played by the population undergoing assimilation and the role of predator by the population of one of the assimilating states. In this case the population of the second assimilating state changes in proportion to that of the first (the coefficient of proportionality equals the ratio of the assimilators' populations at the initial time). In the second case the problem reduces to a nonlinear system of two differential equations of predator-prey type, with closed integral curves in the phase plane. In neither case is the population with the less widespread language fully assimilated. The intervals of variation of the population sizes of all three subjects of the model are found. These mathematical models, which to some approximation can describe real situations involving real assimilating countries and state formations (autonomies or entities with unrecognized status) subject to bilateral assimilation, show that the only way for such a population to avoid assimilation is natural demographic growth, combined with the hope of a natural decrease in the populations of the assimilating states.
Keywords: nonlinear mathematical model, bilateral assimilation, demographic factor, first integrals, result of assimilation, intervals of change of number of the population
Procedia PDF Downloads 469

444 Biotechnological Methods for the Grouting of the Tunneling Space
Authors: V. Ivanov, J. Chu, V. Stabnikov
Abstract:
Different biotechnological methods for the production of construction materials and for the performance of construction processes in situ are being developed within a new scientific discipline, Construction Biotechnology. The aim of this research was to develop and test new biotechnologies and biotechnological grouts for minimizing the hydraulic conductivity of fractured rocks and porous soil. Solving this problem is essential for minimizing the flow of groundwater into construction sites and into the tunneling space before and after excavation, and inside levees, as well as for stopping water seepage from aquaculture ponds, agricultural channels, storage sites for radioactive waste or toxic chemicals, landfills, and polluted soils. Conventional fine or ultrafine cement grouts and chemical grouts have such restrictions as high cost, high viscosity, and sometimes toxicity, whereas biogrouts, which are based on microbial or enzymatic activities and inexpensive inorganic reagents, may be more suitable in many cases because of their lower cost and low or zero toxicity. Due to these advantages, the development of biotechnologies for biogrouting is growing exponentially. However, the currently most popular biogrout, based on the activity of urease-producing bacteria that initiate the crystallization of calcium carbonate from a calcium salt, has such disadvantages as the production of toxic ammonium/ammonia and the development of high pH. Therefore, the aim of our studies was the development and testing of new biogrouts that are environmentally friendly and inexpensive enough for large-scale geotechnical, construction, and environmental applications. New microbial biotechnologies have been studied and tested in sand columns, fissured rock samples, a 1 m³ tank with sand, and a pack of stone sheets, serving as models of porous soil and fractured rocks.
Several biotechnological methods showed positive results: 1) biogrouting using sequential desaturation of sand by injection of denitrifying bacteria and medium, followed by biocementation using urease-producing bacteria, urea, and a calcium salt, decreased the hydraulic conductivity of sand to 2×10⁻⁷ m s⁻¹ after 17 days of treatment and consumed almost three times fewer reagents than conventional calcium- and urea-based biogrouting; 2) biogrouting using slime-producing bacteria decreased the hydraulic conductivity of sand to 1×10⁻⁶ m s⁻¹ after 15 days of treatment; 3) biogrouting of rocks with a fissure width of 65×10⁻⁶ m using a calcium bicarbonate solution, produced from CaCO₃ and CO₂ under 30 bar pressure, decreased the hydraulic conductivity of the fissured rocks to 2×10⁻⁷ m s⁻¹ after 5 days of treatment. These bioclogging technologies could have many advantages over conventional construction materials and processes and can be used in geotechnical engineering, agriculture and aquaculture, and environmental protection.
Keywords: biocementation, bioclogging, biogrouting, fractured rocks, porous soil, tunneling space
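Hydraulic conductivities like those reported above are commonly obtained from falling-head permeameter tests. As a hedged illustration (the standard textbook formula, not a procedure described in this work; the numbers are made up):

```python
import math

def falling_head_k(a, A, L, t, h1, h2):
    """Hydraulic conductivity K (m/s) from a falling-head permeameter test.

    a  -- standpipe cross-sectional area (m^2)
    A  -- sample cross-sectional area (m^2)
    L  -- sample length (m)
    t  -- elapsed time (s)
    h1 -- initial head (m); h2 -- final head (m)
    """
    return (a * L) / (A * t) * math.log(h1 / h2)

# Example: head halves in one hour through a 10 cm sand column
k = falling_head_k(a=1e-4, A=1e-2, L=0.1, t=3600.0, h1=1.0, h2=0.5)
```

With these illustrative numbers, `k` comes out near 2×10⁻⁷ m s⁻¹, the order of magnitude reported for the treated sand.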
Procedia PDF Downloads 207

443 Recognition by the Voice and Speech Features of the Emotional State of Children by Adults and Automatically
Authors: Elena E. Lyakso, Olga V. Frolova, Yuri N. Matveev, Aleksey S. Grigorev, Alexander S. Nikolaev, Viktor A. Gorodnyi
Abstract:
The study of children’s emotional sphere depending on age and psychoneurological state is of great importance for the design of educational programs for children and for their social adaptation. Atypical development may be accompanied by impairments or specificities of the emotional sphere. To study how the emotional state is reflected in children’s voice and speech features, a perceptual study with adult participants and automatic recognition from speech were conducted. Speech of children with typical development (TD), with Down syndrome (DS), and with autism spectrum disorders (ASD) aged 6-12 years was recorded. To obtain emotional speech from children, model situations were created, including a dialogue between the child and the experimenter containing questions that can cause various emotional states in the child, and playing with a standard set of toys. The questions and toys were selected taking into account the child’s age, developmental characteristics, and speech skills. For the perceptual experiment, test sequences containing speech material of 30 children (TD, DS, and ASD) were created. The listeners were 100 adults (age 19.3 ± 2.3 years), tasked with determining the children’s emotional state as “comfort – neutral – discomfort” while listening to the test material. Spectrographic analysis of the speech signals was conducted. For automatic recognition of the emotional state, 6594 speech files containing children’s speech material were prepared. Automatic recognition of the three states “comfort – neutral – discomfort” was performed using acoustic features automatically extracted from the Geneva Minimalistic Acoustic Parameter Set (GeMAPS) and the extended Geneva Minimalistic Acoustic Parameter Set (eGeMAPS). The results showed that the emotional state is determined worst from the speech of TD children (comfort – 58% of correct answers, discomfort – 56%).
Listeners recognized discomfort in children with ASD and DS (78% of answers) better than comfort (70% and 67%, respectively, for children with DS and ASD). The neutral state is recognized better from the speech of children with ASD (67%) than from the speech of children with DS (52%) and TD children (54%). According to the automatic recognition data using the acoustic feature set GeMAPSv01b, the accuracy of automatic recognition of emotional states is 0.687 for children with ASD, 0.725 for children with DS, and 0.641 for TD children. When using the acoustic feature set eGeMAPSv01b, the accuracy is 0.671 for children with ASD, 0.717 for children with DS, and 0.631 for TD children. The use of different models showed similar results, with better recognition of emotional states from the speech of children with DS than from the speech of children with ASD. The state of comfort is determined automatically best from the speech of TD children (precision – 0.546) and children with ASD (0.523), and discomfort from children with DS (0.504). The data on the specificities of adults’ recognition of children’s emotional state from their speech may be used in recruitment for working with children with atypical development. The automatic recognition data can be used to create alternative communication systems and automatic human-computer interfaces for social-emotional learning.
Acknowledgment: This work was financially supported by the Russian Science Foundation (project 18-18-00063).
Keywords: autism spectrum disorders, automatic recognition of speech, child’s emotional speech, Down syndrome, perceptual experiment
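The automatic-recognition pipeline can be sketched in a hedged form. The code below uses synthetic stand-in features and a generic scikit-learn classifier; the authors' actual models, and the real GeMAPS/eGeMAPS features (typically extracted with a toolkit such as openSMILE), are not reproduced here.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Synthetic stand-in for eGeMAPS-style functionals (88 features per utterance)
rng = np.random.default_rng(0)
n, d = 300, 88
X = rng.normal(size=(n, d))
y = rng.integers(0, 3, size=n)        # 0 = discomfort, 1 = neutral, 2 = comfort
X[y == 0, 0] += 1.5                   # inject weak class-dependent shifts so the
X[y == 2, 1] -= 1.5                   # classes are separable above chance

clf = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
acc = cross_val_score(clf, X, y, cv=5).mean()   # mean cross-validated accuracy
```

On such weakly separable synthetic data, the mean accuracy should land well above the 1/3 chance level, analogous in spirit to the 0.63-0.73 accuracies reported above.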
Procedia PDF Downloads 186

442 Diagenesis of the Permian Ecca Sandstones and Mudstones, in the Eastern Cape Province, South Africa: Implications for the Shale Gas Potential of the Karoo Basin
Authors: Temitope L. Baiyegunhi, Christopher Baiyegunhi, Kuiwu Liu, Oswald Gwavava
Abstract:
Diagenesis is one of the most important factors affecting reservoir properties. Although published data provide a vast amount of information on the geology, sedimentology, and lithostratigraphy of the Ecca Group in the Karoo Basin of South Africa, little is known about the diagenesis of its potentially viable shales and sandstones. This study aims to provide a general account of the diagenesis of the sandstones and mudstones of the Ecca Group. Twenty-five diagenetic textures and structures are identified and grouped into three regimes or stages: eogenesis, mesogenesis, and telogenesis. Clay minerals are the most common cementing materials in the Ecca sandstones and mudstones. Smectite, kaolinite, and illite are the major clay minerals, acting as pore-lining rims and pore-filling cement. Most of the clay minerals and detrital grains were strongly attacked and replaced by calcite. Calcite precipitated locally in pore spaces and partly or completely replaced feldspar and quartz grains, mostly at their margins. Precipitation of cements, formation of pyrite and authigenic minerals, and slight lithification occurred during eogenesis. This regime was followed by mesogenesis, which brought about an increase in the tightness of grain packing, loss of pore space, and thinning of beds due to the weight of overlying sediments and the selective dissolution of framework grains. Compaction, mineral overgrowths, mineral replacement, clay-mineral authigenesis, deformation, and pressure-solution structures formed during mesogenesis. During telogenesis, the rocks were uplifted, weathered, and unroofed by erosion, resulting in additional grain fracturing, decementation, and oxidation of iron-rich volcanic fragments and ferromagnesian minerals. The rocks of the Ecca Group were subjected to moderate to intense mechanical and chemical compaction during their progressive burial.
The observed pore types are intergranular pores, matrix micropores, secondary intragranular pores, dissolution pores, and fractured pores. The presence of fractured and dissolution pores tends to enhance reservoir quality. However, the isolated nature of the pores makes them unfavourable producers of hydrocarbons, which at best would require stimulation. Understanding the distribution of diagenetic processes in these rocks in space and time will allow the development of predictive models of their quality, which may contribute to reducing the risks involved in their exploration.
Keywords: diagenesis, reservoir quality, Ecca Group, Karoo Supergroup
Procedia PDF Downloads 147

441 Accuracy of Computed Tomography Dose Monitor Values: A Multicentric Study in India
Authors: Adhimoolam Saravana Kumar, K. N. Govindarajan, B. Devanand, R. Rajakumar
Abstract:
The quality of Computed Tomography (CT) procedures has improved in recent years due to technological developments and the increased diagnostic ability of CT scanners. Because CT doses are among the highest in diagnostic radiology practice, it is of great significance to be aware of the patient’s CT radiation dose whenever a CT examination is performed. The CT radiation dose delivered to patients, in the form of the volume CT dose index (CTDIvol), is displayed on scanner monitors at the end of each examination, and it is important to ensure that this information is accurate. The objective of this study was to estimate CTDIvol values for a great number of patients during the most frequent CT examinations, to compare CT dose monitor values with measured ones, and to highlight the variation of CTDIvol values for the same CT examination at different centres and scanner models. The output CT dose index measurements were carried out on single- and multislice scanners for the available kV, a 5 mm slice thickness, 100 mA, and the FOV combination used. In total, 100 CT scanners were involved in this study. Data on 15,000 examinations of patients who underwent routine head, chest, and abdomen CT were collected using a questionnaire sent to a large number of hospitals. Of the 15,000 examinations, 5000 were head, 5000 were chest, and 5000 were abdominal CT examinations. Comprehensive quality assurance (QA) was performed for all the machines involved in this work. Following QA, CT phantom dose measurements were carried out in South India using the actual scanning parameters used clinically by the hospitals. From this study, the mean divergence between the measured and displayed CTDIvol values was 5.2, 8.4, and −5.7 for the selected head, chest, and abdomen protocols, respectively.
Thus, this investigation revealed an observable change in CT practices, with a much wider range of studies being performed currently in South India. This reflects the improved capacity of CT scanners to scan longer lengths and at finer resolutions, as permitted by helical and multislice technology. Also, some of the CT scanners used smaller slice thicknesses for routine CT procedures to achieve better resolution and image quality. This leads to an increase in the patient radiation dose as well as in the measured CTDIvol, so it is suggested that such CT scanners select an appropriate slice thickness and scanning parameters in order to reduce the patient dose. If the routine scan parameters for head, chest, and abdomen procedures are optimized, the dose indices will be optimal, leading to lower CT doses. In the South Indian region, all CT machines are routinely tested for QA once a year as per AERB requirements.
Keywords: CT dose index, weighted CTDI, volumetric CTDI, radiation dose
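The divergence figures reported in this study reduce to simple arithmetic. The sketch below is illustrative; the sign and normalization conventions are our assumptions, and the ±20% limit is the tolerance commonly applied to displayed-CTDI accuracy, not a value stated in the abstract.

```python
def ctdi_divergence(measured, displayed):
    """Percentage divergence of the displayed CTDIvol from the measured value
    (positive when the measured dose exceeds the displayed one)."""
    return 100.0 * (measured - displayed) / measured

def within_tolerance(measured, displayed, limit=20.0):
    """True if the console value agrees with the measurement within +/- limit percent."""
    return abs(ctdi_divergence(measured, displayed)) <= limit

# Example: a scanner displaying 9.5 mGy where the phantom measurement is 10.0 mGy
dev = ctdi_divergence(10.0, 9.5)   # 5.0 percent, within tolerance
```

QA programs would flag scanners whose divergence falls outside the chosen tolerance for recalibration.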
Procedia PDF Downloads 255

440 An Exploratory Study in Nursing Education: Factors Influencing Nursing Students’ Acceptance of Mobile Learning
Authors: R. Abdulrahman, A. Eardley, A. Soliman
Abstract:
The proliferation of mobile learning (m-learning) has played a vital role in the rapidly growing electronic learning market. This relatively new technology can help to encourage engagement in learning and to aid knowledge transfer in a number of areas by familiarizing students with innovative information and communications technologies (ICT). M-learning plays a substantial role in the deployment of learning methods for nursing students by using the Internet and portable devices to access learning resources ‘anytime and anywhere’. However, acceptance of m-learning by students is critical to the successful use of m-learning systems; thus, there is a need to study the factors that influence students’ intention to use m-learning. This paper addresses this issue. It outlines the outcomes of a study that evaluates the unified theory of acceptance and use of technology (UTAUT) model as applied to user acceptance of m-learning activity in nurse education. The model integrates the significant components of eight prominent user acceptance models and thereby provides a standard measure with core determinants of users’ behavioural intention. The research model extends the UTAUT in the context of m-learning acceptance by adding individual innovativeness (II) and quality of service (QoS) to the original structure of the UTAUT. It also adds the factors of previous experience (of using mobile devices in similar applications) and the nursing students’ readiness (to use the technology) as influences on their behavioural intention to use m-learning. This study uses convenience sampling, with student volunteers as participants, to collect numerical data. A quantitative method of data collection was selected, involving an online survey questionnaire. The questionnaire contains 33 questions measuring the six constructs on a 5-point Likert scale.
A total of 42 respondents participated, all from the Nursing Institute at the Armed Forces Hospital in Saudi Arabia. The gathered data were then tested against the research model using structural equation modelling (SEM), including confirmatory factor analysis (CFA). The results of the CFA show that the UTAUT model has the ability to predict student behavioural intention and to adapt m-learning activity to specific learning activities. They also demonstrate satisfactory, dependable, and valid scales for the model constructs. Further analysis is suggested to confirm the model as a valuable instrument for evaluating the user acceptance of m-learning activity.
Keywords: mobile learning, nursing institute students’ acceptance of m-learning activity in Saudi Arabia, unified theory of acceptance and use of technology model (UTAUT), structural equation modelling (SEM)
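Before fitting a CFA/SEM, the internal consistency of each construct's Likert items is typically screened with Cronbach's alpha. The sketch below is a generic illustration of that check, not the authors' analysis code.

```python
import numpy as np

def cronbach_alpha(scores):
    """Cronbach's alpha for an (n_respondents, n_items) array of Likert responses."""
    scores = np.asarray(scores, dtype=float)
    k = scores.shape[1]
    item_variances = scores.var(axis=0, ddof=1).sum()   # sum of per-item variances
    total_variance = scores.sum(axis=1).var(ddof=1)     # variance of the scale total
    return k / (k - 1) * (1.0 - item_variances / total_variance)

# Example: three items answered identically by five respondents -> alpha = 1
perfect = [[1, 1, 1], [2, 2, 2], [3, 3, 3], [4, 4, 4], [5, 5, 5]]
```

An alpha of roughly 0.7 or above is conventionally taken as acceptable internal consistency for a construct before it enters the measurement model.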
Procedia PDF Downloads 183

439 Post-Soviet LULC Analysis of Tbilisi, Batumi and Kutaisi Using Remote Sensing and Geo Information System
Authors: Lela Gadrani, Mariam Tsitsagi
Abstract:
Humans are part of the urban landscape and responsible for it. Urbanization is a long-term process, and no environment undergoes such anthropogenic impact as the area of large cities. The post-Soviet period is very interesting in terms of scientific research: the changes that have occurred in cities since the collapse of the Soviet Union have not yet been analyzed, to the best of our knowledge. In this context, the aim of this paper is to analyze the changes in the land use of three large cities of Georgia: Tbilisi, the capital; Batumi, a port city; and Kutaisi, a former industrial center. The data used during the research are conventionally divided into satellite imagery and supporting materials. For this purpose, large-scale topographic maps (1:10,000) of all three cities were analyzed, along with the Tbilisi general plans (1896, 1924) and historical maps of Tbilisi and Kutaisi. The main emphasis was placed on the classification of Landsat images: we classified land use/land cover (LULC) in images of all three cities taken in 1987 and 2016, using both supervised and unsupervised methods. All procedures were performed in ArcGIS 10.3.1 and ENVI 5.0. In each classification we singled out the following classes: built-up area, water bodies, agricultural lands, green cover, and bare soil, and calculated the areas occupied by them. In order to check the validity of the obtained results, we additionally used higher-resolution CORONA and Sentinel images. Ultimately, we identified the changes that took place in land use in the post-Soviet period in these cities. According to the results, a large wave of changes touched Tbilisi and Batumi, though in different periods.
It turned out that in the case of Tbilisi, the area of developed territory has increased by 13.9% compared with the 1987 data, certainly at the expense of agricultural land and green cover; in particular, the area of agricultural land has decreased by 4.97% and the green cover by 5.67%. It should be noted that Batumi has clearly overtaken the country's capital in terms of development. It is plain to see that, in comparison with other regions of Georgia, everything is different in Batumi; in fact, Batumi is the unofficial summer capital of Georgia. Undoubtedly, Batumi's development is very important in both economic and social terms. However, there is a danger that under such uneven conditions of urban development we will eventually get a developed center, Batumi, with multiple underdeveloped peripheries around it. Analysis of changes in land use is of utmost importance not only for quantitative evaluation of the changes already implemented but also for future modeling and prognosis of urban development. Raster data containing the land-use classes are an integral part of the city's prognostic models.
Keywords: analysis, geo information system, remote sensing, LULC
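The per-class area accounting behind percentages like those above can be sketched as follows. The rasters and class codes here are synthetic stand-ins of our own; real inputs would be the classified Landsat scenes.

```python
import numpy as np

CLASSES = {1: "built-up", 2: "water", 3: "agriculture", 4: "green cover", 5: "bare soil"}
PIXEL_AREA_HA = 30 * 30 / 10_000   # one 30 m Landsat pixel in hectares

def class_areas(raster):
    """Area per LULC class (hectares) of a classified raster."""
    values, counts = np.unique(raster, return_counts=True)
    return {CLASSES[v]: c * PIXEL_AREA_HA for v, c in zip(values, counts)}

# Synthetic 1987 scene, and a 2016 scene where ~30% of agriculture became built-up
rng = np.random.default_rng(1)
lulc_1987 = rng.integers(1, 6, size=(100, 100))
lulc_2016 = lulc_1987.copy()
agri = lulc_2016 == 3
lulc_2016[agri] = np.where(rng.random(agri.sum()) < 0.3, 1, 3)

areas_1987, areas_2016 = class_areas(lulc_1987), class_areas(lulc_2016)
change = {name: areas_2016[name] - areas_1987[name] for name in areas_1987}
```

Percentage change per class then follows as `100 * change[name] / areas_1987[name]`, which is how figures such as the 13.9% growth of built-up area would be derived.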
Procedia PDF Downloads 449

438 Computerized Adaptive Testing for Ipsative Tests with Multidimensional Pairwise-Comparison Items
Authors: Wen-Chung Wang, Xue-Lan Qiu
Abstract:
Ipsative tests have been widely used in vocational and career counseling (e.g., the Jackson Vocational Interest Survey). Pairwise-comparison items are a typical item format of ipsative tests. When the two statements in a pairwise-comparison item measure two different constructs, the item is referred to as a multidimensional pairwise-comparison (MPC) item. A typical MPC item would be: Which activity do you prefer? (A) playing with young children, or (B) working with tools and machines. These two statements aim at the constructs of social interest and investigative interest, respectively. Recently, new item response theory (IRT) models for ipsative tests with MPC items have been developed. Among them, the Rasch ipsative model (RIM) deserves special attention because of its good measurement properties: the log-odds of preferring statement A to statement B are defined as a competition between two parts, namely the sum of the person's latent trait on the dimension that statement A measures and statement A's utility, versus the sum of the person's latent trait on the dimension that statement B measures and statement B's utility. The RIM has been extended to polytomous responses, such as strongly preferring statement A, preferring statement A, preferring statement B, and strongly preferring statement B. To promote these new initiatives, in this study we developed computerized adaptive testing algorithms for MPC items and evaluated their performance using simulations and two real tests. Both the RIM and its polytomous extension are multidimensional, which calls for multidimensional computerized adaptive testing (MCAT). A particular issue in MCAT for MPC items is within-person statement exposure (WPSE); that is, a respondent may keep seeing the same statement (e.g., “my life is empty”) many times, which is certainly annoying. In this study, we implemented two methods to control the WPSE rate.
In the first control method, items were frozen when their statements had been administered more than a prespecified number of times. In the second control method, a random component was added to control the contribution of the information at different stages of MCAT. The second control method was found to outperform the first in our simulation studies. In addition, we investigated four item selection methods: (a) random selection (as a baseline), (b) the maximum Fisher information method without WPSE control, (c) the maximum Fisher information method with the first control method, and (d) the maximum Fisher information method with the second control method. These four methods were applied to two real tests: one was a work survey with dichotomous MPC items, and the other was a career interest survey with polytomous MPC items. There were three dependent variables: the bias and root mean square error across person measures, and measurement efficiency, defined as the number of items needed to achieve the same degree of test reliability. Both applications indicated that the proposed MCAT algorithms were successful, that there was no loss in measurement efficiency when the control methods were implemented, and that, among the four methods, the last performed best.
Keywords: computerized adaptive testing, ipsative tests, item response theory, pairwise comparison
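In symbols (the notation is ours), the RIM log-odds described verbally above, for a person with latent traits $\theta$ on the dimensions $d(A)$ and $d(B)$ measured by the two statements, with statement utilities $\delta$, reads:

```latex
\log \frac{P(A \succ B)}{P(B \succ A)}
  = \bigl(\theta_{d(A)} + \delta_A\bigr) - \bigl(\theta_{d(B)} + \delta_B\bigr)
```

The Fisher information that drives the selection methods above is derived from this response function; the formula itself merely restates the "competition between two parts" definition in the abstract.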
Procedia PDF Downloads 245

437 Metacognitive Processing in Early Readers: The Role of Metacognition in Monitoring Linguistic and Non-Linguistic Performance and Regulating Students’ Learning
Authors: Ioanna Taouki, Marie Lallier, David Soto
Abstract:
Metacognition refers to the capacity to reflect upon our own cognitive processes. Although there is an ongoing discussion in the literature on the role of metacognition in learning and academic achievement, little is known about its neurodevelopmental trajectories in early childhood, when children begin to receive formal education in reading. Here, we evaluate the metacognitive ability, estimated under a recently developed Signal Detection Theory model, of a cohort of children aged between 6 and 7 (N=60), who performed three two-alternative forced-choice tasks (two linguistic: a lexical decision task and a visual attention span task; one non-linguistic: an emotion recognition task) including trial-by-trial confidence judgements. Our study has three aims. First, we investigated how metacognitive ability (i.e., how well confidence ratings track accuracy in the task) relates to performance on standardized tasks assessing students’ reading and general cognitive abilities, using Spearman's and Bayesian correlation analyses. Second, we assessed whether young children recruit common mechanisms supporting metacognition across the different task domains or whether there is evidence for domain-specific metacognition at this early stage of development. This was done by examining correlations in metacognitive measures across different task domains and evaluating cross-task covariance by applying a hierarchical Bayesian model. Third, using robust linear regression and Bayesian regression models, we assessed whether metacognitive ability at this early stage is related to children's longitudinal learning in a linguistic and a non-linguistic task. Notably, we did not observe any association between students’ reading skills and metacognitive processing at this early stage of reading acquisition.
Some evidence consistent with domain-general metacognition was found, with significant positive correlations of metacognitive efficiency between the lexical and emotion recognition tasks and substantial covariance indicated by the Bayesian model. However, no reliable correlations were found between metacognitive performance in the visual attention span task and the remaining tasks. Remarkably, metacognitive ability significantly predicted children's learning in the linguistic and non-linguistic domains a year later. These results suggest that metacognitive skill may be dissociated to some extent from general (i.e., language and attention) abilities, and they further stress the importance of creating educational programs that foster students’ metacognitive ability as a tool for long-term learning. More research is crucial to understand whether such programs can enhance metacognitive ability as a transferable skill across distinct domains or whether unique domains should be targeted separately.
Keywords: confidence ratings, development, metacognitive efficiency, reading acquisition
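The core notion of metacognitive sensitivity, confidence tracking one's own accuracy, can be illustrated with a much simpler, non-Bayesian proxy: the type-2 ROC area. This is an illustrative stand-in of our own, not the hierarchical Signal Detection Theory model used in the study.

```python
import numpy as np

def type2_auroc(correct, confidence):
    """Area under the type-2 ROC: how well trial-by-trial confidence discriminates
    a subject's own correct from incorrect trials. 0.5 means confidence carries no
    information about accuracy; 1.0 means perfect metacognitive sensitivity.
    Computed as a rank-based (Mann-Whitney) estimate over all (correct, error) pairs."""
    correct = np.asarray(correct, dtype=bool)
    conf = np.asarray(confidence, dtype=float)
    c, e = conf[correct], conf[~correct]
    if len(c) == 0 or len(e) == 0:
        return np.nan
    greater = (c[:, None] > e[None, :]).mean()
    ties = (c[:, None] == e[None, :]).mean()
    return greater + 0.5 * ties
```

A child whose high-confidence trials are reliably the correct ones scores near 1.0; a child whose confidence is unrelated to accuracy scores near 0.5, regardless of first-order task performance.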
Procedia PDF Downloads 150