Search results for: task and ego orientation
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 3182


92 A Corpus-based Study of Adjuncts in Colombian English as a Second Language (ESL) Argumentative Essays

Authors: E. Velasco

Abstract:

Meeting high standards of writing in a Second Language (L2) is extremely important for many students who wish to undertake studies at universities in both English and non-English speaking countries. University lecturers in English speaking countries continue to express dissatisfaction with the apparent poor quality of essay writing skills displayed by English as a Second Language (ESL) students, whose essays are often criticised for their lack of cohesion and coherence. These critiques have extended to contexts such as Colombia, where many ESL students are criticised for their inability to write high-quality academic texts in L2-English, particularly at the tertiary level. If Colombian ESL students are expected to meet high standards of writing when studying locally and abroad, it makes sense to carry out specific research that can perhaps lead to recommendations to support their quest for improving argumentative strategies. Employing Corpus Linguistics methods within a Learner Corpus Research framework, and a combination of Log-Likelihood and Bayes Factor measures, this paper investigated argumentative essays written by Colombian ESL students. The study specifically aimed to analyse conjunctive adjuncts in argumentative essays to find out how Colombian ESL students connect their ideas in discourse. 
Results suggest that a) Colombian ESL learners need explicit instruction on specific areas of conjunctive adjuncts to counteract overuse, underuse and misuse; b) underuse of endophoric and evidential adjuncts highlights gaps between IELTS-like essays and good-quality tertiary-level essays and published papers, and these gaps are linked to prior knowledge brought into the writing task, rhetorical functions in writing, and research processes before writing takes place; c) both Colombian ESL learners and L1-English writers (in a reference corpus) overuse some adjuncts and underuse endophoric and evidential adjuncts when compared to skilled L1-English and L2-English writers, so differences in the frequencies of adjuncts have little to do with the writers’ L1 and are rather linked to the types of essays writers produce (e.g. ESL vs. university essays). The pedagogical recommendations deriving from the study are that: a) Colombian ESL learners need to be shown that conjunctive adjuncts are not the only way of giving cohesion to argumentative essays and that there are alternatives (e.g., implicit adjuncts, lexical chains and collocations); b) syllabi and classroom input need to raise awareness of gaps in writing skills between IELTS-like and tertiary-level argumentative essays, and of how endophoric and evidential adjuncts are used to refer to anaphoric and cataphoric sections of essays and to other people’s work or ideas; c) syllabi and classroom input need to include essay-writing tasks based on previous research/reading which learners need to incorporate into their arguments, and tasks that raise awareness of referencing systems (e.g., APA); d) classroom input needs to include explicit instruction on punctuation, functions and/or syntax with specific conjunctive adjuncts such as for example, for that reason, although, despite and nevertheless.
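The log-likelihood and Bayes factor measures mentioned above can be sketched in a few lines, following the commonly used Rayson/Garside formulation; this is an illustration, not the study's actual scripts, and the counts below are invented:

```python
import math

def log_likelihood(freq_a, size_a, freq_b, size_b):
    """Log-likelihood keyness for one item occurring freq_a times in a
    corpus of size_a tokens versus freq_b times in size_b tokens."""
    total_freq, total_size = freq_a + freq_b, size_a + size_b
    expected_a = size_a * total_freq / total_size
    expected_b = size_b * total_freq / total_size
    ll = 0.0
    if freq_a:
        ll += freq_a * math.log(freq_a / expected_a)
    if freq_b:
        ll += freq_b * math.log(freq_b / expected_b)
    return 2.0 * ll

def bic_bayes_factor(ll, size_a, size_b):
    """BIC approximation to the Bayes factor: LL minus ln(combined size)."""
    return ll - math.log(size_a + size_b)

# Invented counts: an adjunct used 120 times per 10,000 learner tokens
# versus 40 times per 20,000 reference tokens.
ll = log_likelihood(120, 10_000, 40, 20_000)
bic = bic_bayes_factor(ll, 10_000, 20_000)
```

With these invented counts, LL ≈ 116.2 and BIC ≈ 105.8, which on the conventional scale would count as very strong evidence that the adjunct is overused relative to the reference corpus.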

Keywords: argumentative essays, Colombian English as a Second Language (ESL) learners, conjunctive adjuncts, corpus linguistics

Procedia PDF Downloads 85
91 Assessment of Neurodevelopmental Needs in Duchenne Muscular Dystrophy

Authors: Mathula Thangarajh

Abstract:

Duchenne muscular dystrophy (DMD) is a severe form of X-linked muscular dystrophy caused by mutations in the dystrophin gene resulting in progressive skeletal muscle weakness. Boys with DMD also have significant cognitive disabilities. The intelligence quotient of boys with DMD, compared to peers, is approximately one standard deviation below average. Detailed neuropsychological testing has demonstrated that boys with DMD have a global developmental impairment, with verbal memory and visuospatial skills most significantly affected. Furthermore, the total brain volume and gray matter volume are lower in children with DMD compared to age-matched controls. These results are suggestive of a significant structural and functional compromise to the developing brain as a result of absent dystrophin protein expression. There is also some genetic evidence to suggest that mutations in the 3’ end of the DMD gene are associated with more severe neurocognitive problems. Our working hypothesis is that (i) boys with DMD do not make gains in neurodevelopmental skills compared to typically developing children and (ii) women carriers of DMD mutations may have subclinical cognitive deficits. We also hypothesize that there may be an intergenerational vulnerability of cognition, with boys of DMD-carrier mothers being more affected cognitively than boys of non-DMD-carrier mothers. The objectives of this study are: 1. Assess the neurodevelopment of boys with DMD at four time points and perform a baseline neuroradiological assessment, 2. Assess cognition in biological mothers of DMD participants at baseline, 3. Assess possible correlations between DMD mutations and cognitive measures. This study also explores functional brain abnormalities in people with DMD by examining how regional and global connectivity of the brain underlies executive function deficits in DMD.
Such research can contribute to a better holistic understanding of the cognition alterations due to DMD and could potentially allow clinicians to create better-tailored treatment plans for the DMD population. There are four study visits for each participant (baseline, 2-4 weeks, 1 year, 18 months). At each visit, the participant completes the NIH Toolbox Cognition Battery, a validated psychometric measure that is recommended by NIH Common Data Elements for use in DMD. Visits 1, 3, and 4 also involve the administration of the BRIEF-2, ABAS-3, PROMIS/NeuroQoL, PedsQL Neuromuscular module 3.0, Draw a Clock Test, and an optional fMRI scan with the N-back matching task. We expect to enroll 52 children with DMD, 52 mothers of children with DMD, and 30 healthy control boys. This study began in 2020 during the height of the COVID-19 pandemic. Due to this, there were subsequent delays in recruitment because of travel restrictions. However, we have persevered and continued to recruit new participants for the study. We partnered with the Muscular Dystrophy Association (MDA) and helped advertise the study to interested families. Since then, we have had families from across the country contact us about their interest in the study. We plan to continue to enroll a diverse population of DMD participants to contribute toward a better understanding of Duchenne Muscular Dystrophy.

Keywords: neurology, Duchenne muscular dystrophy, muscular dystrophy, cognition, neurodevelopment, x-linked disorder, DMD, DMD gene

Procedia PDF Downloads 99
90 Solutions for Food-Safe 3D Printing

Authors: Geremew Geidare Kailo, Igor Gáspár, András Koris, Ivana Pajčin, Flóra Vitális, Vanja Vlajkov

Abstract:

Three-dimensional (3D) printing, a very popular additive manufacturing technology, has recently undergone rapid growth, moving from prototyping to producing end-user parts and products and replacing conventional technologies. 3D printing involves a digital manufacturing machine that produces three-dimensional objects according to designs created by the user via 3D modeling or computer-aided design/manufacturing (CAD/CAM) software. The most popular 3D printing system is Fused Deposition Modeling (FDM), also called Fused Filament Fabrication (FFF). A 3D-printed object is considered food safe if it can have direct contact with food without any toxic effects, even after cleaning, storing, and reusing the object. This work analyzes the processing timeline of the filament (the material for 3D printing) from unboxing to extrusion through the nozzle. Analyzing the growth of bacteria on the 3D-printed surface and in the gaps between the layers is an important task. By default, a 3D-printed object is not food safe after longer usage and direct contact with food (even when food-safe filaments are used), but there are solutions to this problem. The aim of this work was to evaluate 3D-printed objects from different food-safety perspectives. The first was to test antimicrobial 3D printing filaments, since 3D-printed objects in the food industry may have direct contact with food; the main purpose here was to reduce the microbial load on the surface of a 3D-printed part. Coating with epoxy resin was also investigated, to see its effect on mechanical strength, thermal resistance, surface smoothness and food safety (cleanability). A further aim was to test new temperature-resistant filaments and the effect of high temperature on 3D-printed materials, to see whether they can be cleaned with boiling or a similar high-temperature treatment.
This work showed that all three methods can improve the food safety of a 3D-printed object, but the size of the effect varies. The best result was obtained by coating with epoxy resin: the coated object was as cleanable as any injection-molded plastic object with a smooth surface. Boiling the objects also gave very good results, and a growing number of special filaments now carry a food-safe certificate and can withstand boiling temperatures. Using antibacterial filaments reduced bacterial colonies to one fifth, and the biggest advantage of this method is that it requires no post-processing: the object is ready straight out of the 3D printer. Acknowledgements: The research was supported by the Hungarian and Serbian bilateral scientific and technological cooperation project funded by the Hungarian National Office for Research, Development and Innovation (NKFI, 2019-2.1.11-TÉT-2020-00249) and the Ministry of Education, Science and Technological Development of the Republic of Serbia. The authors acknowledge the support of the Doctoral School of Food Science at the Hungarian University of Agriculture and Life Sciences.

Keywords: food safety, 3D printing, filaments, microbial, temperature

Procedia PDF Downloads 142
89 Strategic Planning Practice in a Global Perspective: The Case of Guangzhou, China

Authors: Shuyi Xie

Abstract:

A vital city in south China since ancient times, Guangzhou had been losing its leading role to rising neighboring cities, especially Hong Kong and Shenzhen, since the late 1980s, with overloaded infrastructure and a deteriorating urban environment in its old inner city. With the expansion of its administrative area in 2000, the local municipality saw a great opportunity to solve a series of alarming urban problems, and strategic planning was introduced to China for the first time to provide a more convincing and scientific basis for a better urban future. Unlike traditional Chinese planning practices, which rigidly and dogmatically focused on future blueprints, the strategic planning of Guangzhou proceeded from analyzing practical challenges and opportunities to establishing reasonable development objectives and proposing corresponding strategies. It was also pioneering in that the municipality invited five planning institutions to submit proposals. This paper focuses on the proposal by the China Academy of Urban Planning & Design, from its theoretical basis through its definition and analysis of problems to its planning results, since it was closest to the subsequent municipal decisions and had the most far-reaching influence on later practice in other Chinese cities. In particular, it explored innovatively the role of the urban development rate in deciding urban growth patterns (‘spillover-reverberation’ or ‘leapfrog’), and it ultimately established an unprecedented paradigm for deciding an appropriate future urban spatial structure, including its specific location, function and scale.
Besides the proposal itself, this article highlights the interactions among actors, as well as among proposals, subsequent discussions, summaries and municipal decisions, especially the establishment of a rolling, dynamic evaluation system for periodic reviews of implementation, the first such attempt in China. The strategic planning of Guangzhou has undoubtedly brought considerable benefits, above all by opening the strategic mind of many Chinese cities in the following years through a flexible and dynamic planning mechanism that highlighted interactions among multiple actors and employed innovative and effective tools, methodologies and perspectives for regional, objective-approach and comparative analysis. However, compared with some developed countries, strategic planning in China has only just started and has relied heavily on empirical studies rather than scientific analysis. It has also faced some controversy, for instance the gap among institutional proposals, final municipal decisions and implemented results, owing to the lack of legal constraints. How to improve public involvement within China's strongly top-down administrative system is another urgent task. In the future, despite these weaknesses, experience and lessons from previous international practice, combined with specific Chinese conditions and domestic practice, should promote the further advance of strategic planning in China.

Keywords: evaluation system, global perspective, Guangzhou, interactions, strategic planning, urban growth patterns

Procedia PDF Downloads 390
88 Assessing the Environmental Efficiency of China’s Power System: A Spatial Network Data Envelopment Analysis Approach

Authors: Jianli Jiang, Bai-Chen Xie

Abstract:

The climate issue has aroused global concern. Achieving sustainable development is a good path for countries to mitigate environmental and climatic pressures, although there are many difficulties. The first step towards sustainable development is to evaluate the environmental efficiency of the energy industry with proper methods. The power sector is a major source of CO2, SO2, and NOx emissions, and evaluating the environmental efficiency (EE) of power systems is a prerequisite for alleviating energy and environmental pressures. Data Envelopment Analysis (DEA) has been widely used in efficiency studies. However, measuring the efficiency of a system (be it a nation, region, sector, or business) is a challenging task. Classic DEA treats the decision-making units (DMUs) as independent, which neglects the interaction between DMUs, and ignoring these inter-regional links may introduce a systematic bias into the efficiency analysis; for instance, the renewable power generated in a certain region may benefit the adjacent regions, while its SO2 and CO2 emissions act in the opposite direction. This study proposes a spatial network DEA (SNDEA) with a slack measure that can capture the spatial spillover effects of inputs/outputs among DMUs when measuring efficiency. This approach is used to study the EE of China's power system, which consists of generation, transmission, and distribution departments, using a panel dataset from 2014 to 2020. In the empirical example, the energy and patent inputs, the undesirable CO2 output, and the renewable energy (RE) power variables are tested for a significant spatial spillover effect. Compared with the classic network DEA, the SNDEA result shows an obvious difference, as tested by the global Moran's I index. From a dynamic perspective, the EE of the power system experiences a visible surge from 2015 and then a sharp downtrend from 2019, following the same trend as the power transmission department.
The surge reflects the market-oriented reform of the Chinese power grid enacted in 2015. The rapid decline in the environmental efficiency of the transmission department in 2020 was mainly due to the COVID-19 epidemic, which seriously hindered economic development. The EE of the power generation department shows an overall declining trend, which is reasonable once RE power is taken into consideration. The installed capacity of RE power in 2020 was 4.40 times that in 2014, while power generation was only 3.97 times as high; in other words, power generation per unit of installed capacity shrank. In addition, the consumption cost of renewable power rises rapidly as RE power generation increases. These two aspects explain the declining trend in the EE of the power generation department. By incorporating the interactions among inputs/outputs into the DEA model, this paper proposes an efficiency evaluation method within the DEA framework that sheds some light on efficiency evaluation in regional studies. Furthermore, the SNDEA model and the spatial DEA concept can be extended to other fields, such as industries and countries.
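The global Moran's I statistic used above to compare the SNDEA and classic network DEA results can be illustrated with a minimal sketch; the weights matrix and scores below are invented for illustration and are not the study's data:

```python
import numpy as np

def global_morans_i(values, weights):
    """Global Moran's I for `values` observed on n spatial units, given an
    n x n spatial weights matrix `weights` with a zero diagonal."""
    x = np.asarray(values, dtype=float)
    w = np.asarray(weights, dtype=float)
    z = x - x.mean()                      # deviations from the mean
    return (x.size / w.sum()) * (z @ w @ z) / (z @ z)

# Invented example: six regions on a chain (rook contiguity) whose
# efficiency scores cluster spatially at one end.
w = np.zeros((6, 6))
for i in range(5):
    w[i, i + 1] = w[i + 1, i] = 1.0
scores = [1.0, 1.0, 1.0, 0.0, 0.0, 0.0]
i_stat = global_morans_i(scores, w)       # positive: clustered pattern
```

A value near +1 indicates that similar scores cluster in neighbouring regions, a value near -1 a checkerboard pattern, and a value near -1/(n-1) spatial randomness.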

Keywords: spatial network DEA, environmental efficiency, sustainable development, power system

Procedia PDF Downloads 109
87 Physical Activity Based on Daily Step-Count in Inpatient Setting in Stroke and Traumatic Brain Injury Patients in Subacute Stage Follow Up: A Cross-Sectional Observational Study

Authors: Brigitte Mischler, Marget Hund, Hilfiker Roger, Clare Maguire

Abstract:

Background: Brain injury is one of the main causes of permanent physical disability, and improving walking ability is one of the most important goals for patients. After inpatient rehabilitation, most patients do not receive long-term rehabilitation services. Physical activity is important for the health of the musculoskeletal system, the circulatory system and the psyche. Objective: This follow-up study measured physical activity in subacute patients after traumatic brain injury and stroke. The daily step count in the inpatient setting was compared with the step count one year after the event in the outpatient setting. Methods: This follow-up study is a cross-sectional observational study with 29 participants. Daily step count over a seven-day period one year after the event was measured with the StepWatch™ ankle sensor. The step counts one year after the event in the outpatient setting were compared with those taken during the inpatient stay, and it was evaluated whether participants reached the recommended target value. Correlations between step count and exit domain, FAC level, walking speed, light touch, joint position sense, cognition, and fear of falling were calculated. Results: The median (IQR) daily step count of all patients was 2512 (568.5, 4070.5). At follow-up, the number of steps improved to 3656 (1710, 5900); the average difference was 1159 (-2825, 6840) steps per day. Participants who were unable to walk independently (FAC 1) improved from 336 (5, 705) to 1808 (92, 5354) steps per day. Participants able to walk with assistance (FAC 2-3) walked 700 (31, 3080) steps and, at follow-up, 3528 (243, 6871). Independent walkers (FAC 4-5) walked 4093 (2327, 5868) steps and achieved 3878 (777, 7418) daily steps at follow-up. This value is significantly below the recommended guideline.
Step count at follow-up showed moderate to high, statistically significant correlations: positive with FAC score, FIM total score and walking speed, and negative with fear of falling. Conclusions: Only 17% of all participants achieved the recommended daily step count one year after the event. We need better inpatient and outpatient strategies to improve physical activity. In everyday clinical practice, pedometers and diaries with objectives should be used. A concrete weekly schedule should be drawn up together with the patient, relatives, or nursing staff after discharge. This should include daily self-training, as instructed during the inpatient stay. A good connection to social life (a professional connection or a daily task/activity) can be an important part of improving daily activity. Further research should evaluate strategies to increase daily step counts in inpatient as well as outpatient settings.
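The kind of correlation analysis reported above can be sketched as follows; the pairing of step counts with walking speeds below is invented for illustration and does not reproduce the study's data:

```python
import numpy as np

# Invented illustration (not study data): Pearson correlation between
# daily step count and walking speed for six hypothetical participants.
steps = np.array([336.0, 700.0, 1808.0, 2512.0, 3528.0, 4093.0])
speed = np.array([0.2, 0.4, 0.6, 0.8, 1.0, 1.1])   # m/s, invented
r = float(np.corrcoef(steps, speed)[0, 1])          # close to +1 here
```

With fabricated data this monotone, r comes out close to +1; real clinical data would show weaker, noisier associations, which is why the study reports "moderate to high" correlations.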

Keywords: neurorehabilitation, stroke, traumatic brain injury, steps, step count

Procedia PDF Downloads 15
86 Analytical Technique for Definition of Internal Forces in Links of Robotic Systems and Mechanisms with Statically Indeterminate and Determinate Structures Taking into Account the Distributed Dynamical Loads and Concentrated Forces

Authors: Saltanat Zhilkibayeva, Muratulla Utenov, Nurzhan Utenov

Abstract:

Distributed inertia forces of a complex nature appear in the links of rod mechanisms during motion. Such loads raise a number of problems, such as failure caused by large inertia forces; the elastic deformation of the mechanism can also be considerable and can put the mechanism out of action. In this work, a new analytical approach is proposed for the definition of internal forces in the links of robotic systems and mechanisms with statically indeterminate and determinate structures, taking into account distributed inertial and concentrated forces. The relations between the intensity of the distributed inertia forces and the link weight, on the one hand, and the geometrical, physical and kinematic characteristics, on the other, are determined. The distribution laws of the inertia forces and dead weight make it possible, at each position of the links, to deduce the laws of distribution of internal forces along the axis of the link, so that the loads can be found at any point of the link. The approximation matrices of the forces of an element under the action of distributed inertia loads of trapezoidal intensity are defined. The obtained approximation matrices establish the dependence between the force vector in any cross-section of the element and the force vectors in the calculated cross-sections, and also allow the physical characteristics of the element, i.e., the compliance matrices of discrete elements, to be defined. Hence, the compliance matrices of an element under the action of distributed inertial loads of trapezoidal shape along the axis of the element are determined. The internal loads of each continual link are unambiguously determined by a set of internal loads in its separate cross-sections and by the approximation matrices. Therefore, the task is reduced to the calculation of internal forces in a finite number of cross-sections of the elements, which leads to a discrete model of the elastic calculation of the links of rod mechanisms.
The discrete models of the elements of mechanisms and robotic systems, and their discrete model as a whole, are constructed. The dynamic equilibrium equations for the discrete model of the elements are also derived in this work, as are the equilibrium equations of the pin and rigid joints expressed through the required parameters of internal forces. The obtained systems of dynamic equilibrium equations are sufficient for the definition of internal forces in the links of mechanisms whose structure is statically determinate. For the determination of internal forces in statically indeterminate mechanisms, it is necessary to build a compliance matrix for the entire discrete model of the rod mechanism, which is achieved in this work. As a result, programs based on the developed technique were written in the Maple 18 system, and animations were obtained of the motion of fourth-class mechanisms of statically determinate and statically indeterminate structure, with the intensity of the transverse and axial distributed inertial loads, the bending moments, and the transverse and axial forces plotted on the links as functions of the kinematic characteristics of the links.
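The core idea of deducing internal-force laws along a link from a distributed load of trapezoidal intensity can be illustrated with a simplified static sketch; this is a cantilever example under stated assumptions, not the authors' dynamic matrix formulation, and all numeric values are invented:

```python
import numpy as np

def _trapz(y, x):
    """Trapezoid-rule integral of samples y over grid x."""
    return float(np.sum((y[1:] + y[:-1]) * np.diff(x)) / 2.0)

def internal_forces(w0, w1, length, n=201):
    """Shear force V(x) and bending moment M(x) along a cantilever link
    fixed at x=0, carrying a trapezoidal distributed load whose intensity
    varies linearly from w0 at the root to w1 at the free end. Each value
    comes from integrating the load acting beyond the cross-section."""
    x = np.linspace(0.0, length, n)
    w = w0 + (w1 - w0) * x / length              # trapezoidal intensity
    V = np.empty(n)
    M = np.empty(n)
    for i in range(n):
        s, ws = x[i:], w[i:]
        V[i] = _trapz(ws, s)                     # resultant beyond x_i
        M[i] = _trapz(ws * (s - x[i]), s)        # its moment about x_i
    return x, V, M
```

For a uniform load (w0 = w1 = w) this reproduces the textbook root values V(0) = wL and M(0) = wL²/2, and both quantities vanish at the free end, so the sketch can be checked against closed-form statics before moving to more complex intensities.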

Keywords: distributed inertial forces, internal forces, statically determinate mechanisms, statically indeterminate mechanisms

Procedia PDF Downloads 217
85 Automatic Adult Age Estimation Using Deep Learning of the ResNeXt Model Based on CT Reconstruction Images of the Costal Cartilage

Authors: Ting Lu, Ya-Ru Diao, Fei Fan, Ye Xue, Lei Shi, Xian-e Tang, Meng-jun Zhan, Zhen-hua Deng

Abstract:

Accurate adult age estimation (AAE) is a significant and challenging task in the forensic and archeology fields. Attempts have been made to explore optimal adult age metrics, and the rib is considered a potential age marker. The traditional approach is to extract age-related features designed by experts from macroscopic or radiological images, followed by classification or regression analysis. Those results still have not met the high-level requirements for practice, and the limitation of feature design and manual extraction methods is loss of information, since the features are likely not designed explicitly for extracting information relevant to age. Deep learning (DL) has recently garnered much interest in image analysis and computer vision. It enables learning features that are important without a prior bias or hypothesis and could be supportive of AAE. This study aimed to develop DL models for AAE based on CT images and compare their performance to the manual visual scoring method. Chest CT data were reconstructed using volume rendering (VR). Retrospective data of 2500 patients aged 20.00-69.99 years were obtained between December 2019 and September 2021. Five-fold cross-validation was performed, and datasets were randomly split into training and validation sets in a 4:1 ratio for each fold. Before feeding the inputs into the networks, all images were augmented with random rotation and vertical flip, normalized, and resized to 224×224 pixels. ResNeXt was chosen as the DL baseline due to its advantages of higher efficiency and accuracy in image classification. Mean absolute error (MAE) was the primary performance metric. Independent data from 100 patients acquired between March and April 2022 were used as a test set. The manual method completely followed the prior study, which reported the lowest MAEs (5.31 in males and 6.72 in females) among similar studies. CT data and VR images were used.
The radiation density of the first costal cartilage was recorded using CT data on the workstation. The osseous and calcified projections of the first to seventh costal cartilages were scored based on VR images using an eight-stage staging technique. Following the prior study, the optimal models were the decision tree regression model in males and the stepwise multiple linear regression equation in females. Predicted ages for the test set were calculated separately using different models by sex. A total of 2600 patients (training and validation sets, mean age=45.19 years±14.20 [SD]; test set, mean age=46.57±9.66) were evaluated in this study. In ResNeXt model training, MAEs of 3.95 in males and 3.65 in females were obtained. On the test set, DL achieved MAEs of 4.05 in males and 4.54 in females, far better than the manual method's MAEs of 8.90 and 6.42, respectively. These results show that the ResNeXt DL model outperformed the manual method in AAE based on CT reconstruction of the costal cartilage, and the developed system may be a supportive tool for AAE.
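The evaluation protocol described above, a five-fold cross-validation with a 4:1 train/validation split and MAE as the primary metric, can be sketched as follows; this is an illustrative reimplementation, not the authors' code:

```python
import numpy as np

def five_fold_splits(n_samples, seed=0):
    """Randomly partition sample indices into five folds; each fold serves
    once as the validation set, giving the 4:1 train/validation ratio."""
    rng = np.random.default_rng(seed)
    folds = np.array_split(rng.permutation(n_samples), 5)
    for k in range(5):
        train = np.concatenate([f for j, f in enumerate(folds) if j != k])
        yield train, folds[k]

def mean_absolute_error(y_true, y_pred):
    """MAE in years, the study's primary performance metric."""
    return float(np.mean(np.abs(np.asarray(y_true) - np.asarray(y_pred))))
```

For instance, `mean_absolute_error([20.0, 30.0], [24.0, 27.0])` gives 3.5 years, and for the study's 2500 training/validation patients each fold holds 500 cases out while training on the remaining 2000.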

Keywords: forensic anthropology, age determination by the skeleton, costal cartilage, CT, deep learning

Procedia PDF Downloads 73
84 Field-Testing a Digital Music Notebook

Authors: Rena Upitis, Philip C. Abrami, Karen Boese

Abstract:

The success of one-on-one music study relies heavily on the ability of the teacher to provide sufficient direction to students during weekly lessons so that they can successfully practice from one lesson to the next. Traditionally, these instructions are given in a paper notebook, where the teacher makes notes for the students after describing a task or demonstrating a technique. The ability of students to make sense of these notes varies according to their understanding of the teacher’s directions, their motivation to practice, their memory of the lesson, and their abilities to self-regulate. At best, the notes enable the student to progress successfully. At worst, the student is left rudderless until the next lesson takes place. Digital notebooks have the potential to provide a more interactive and effective bridge between music lessons than traditional pen-and-paper notebooks. One such digital notebook, Cadenza, was designed to streamline and improve teachers’ instruction, to enhance student practicing, and to provide the means for teachers and students to communicate between lessons. For example, Cadenza contains a video annotator, where teachers can offer real-time guidance on uploaded student performances. Using the checklist feature, teachers and students negotiate the frequency and type of practice during the lesson, which the student can then access during subsequent practice sessions. Following the tenets of self-regulated learning, goal setting and reflection are also featured. Accordingly, the present paper addressed the following research questions: (1) How does the use of the Cadenza digital music notebook engage students and their teachers?, (2) Which features of Cadenza are most successful?, (3) Which features could be improved?, and (4) Is student learning and motivation enhanced with the use of the Cadenza digital music notebook? 
The paper describes the results of 10 months of field-testing of Cadenza, structured around the four research questions outlined above. Six teachers and 65 students took part in the study. Data were collected through video-recorded lesson observations, digital screen captures, surveys, and interviews. Standard qualitative protocols for coding results and identifying themes were employed to analyze the results. The results consistently indicated that teachers and students embraced the digital platform offered by Cadenza. The practice log and timer, the real-time annotation tool, the checklists, the lesson summaries, and the commenting features were found to be the most valuable functions by students and teachers alike. Teachers also reported that students progressed more quickly with Cadenza and received higher results in examinations than students who were not using Cadenza. Teachers identified modifications to Cadenza that would make it an even more powerful way to support student learning. These modifications, once implemented, will move the tool well past its traditional notebook uses to new ways of motivating students to practise between lessons and to communicate with teachers about their learning. Improvements called for by the teachers included the ability to duplicate archived lessons, allowing for split-screen viewing, and adding goal setting to the teacher window. In the concluding section, the proposed modifications and their implications for self-regulated learning are discussed.

Keywords: digital music technologies, electronic notebooks, self-regulated learning, studio music instruction

Procedia PDF Downloads 254
83 Opportunities and Challenges: Tracing the Evolution of India's First State-led Curriculum-based Media Literacy Intervention

Authors: Ayush Aditya

Abstract:

In today's digitised world, the extent of an individual’s social involvement is largely determined by their interaction over the internet. The internet has emerged as a primary source of information consumption and a reliable medium for receiving updates on everyday activities. Owing to this change in information consumption patterns, the internet has also emerged as a hotbed of misinformation. Experts are of the view that media literacy is one of the most effective strategies for addressing the issue of misinformation. This paper aims to study the evolution of the Kerala government's media literacy policy, its implementation strategy, and its challenges and opportunities. The objective is to create a conceptual framework containing details of the implementation strategy based on the Kerala model. Extensive secondary research of the literature, newspaper articles, and other online sources was carried out to establish the timeline of this policy. This was followed by semi-structured interviews with government officials from Kerala to trace the origin and evolution of the policy. Preliminary findings based on the collected data suggest that this policy is a case of policy by chance, as the officer who headed it during the state-level implementation had already piloted a media literacy program as district collector in a district called Kannur. Through this paper, an attempt is made to trace the history of the media literacy policy starting from the Kannur intervention in 2018, which was started to address vaccine hesitancy around measles-rubella (MR) vaccination; if not for the vaccine hesitancy, the program would not have been rolled out in Kannur. Interviews with government officials suggest that when authorities decided to take up this initiative at the state level in 2020, the trigger was the huge amount of misinformation emerging during the COVID-19 pandemic.
There was misinformation regarding government orders, healthcare facilities, vaccination, and lockdown regulations, which affected everyone, unlike the Kannur case, where only a certain age group of children was affected. As a solution to this problem, the state government decided to create a media literacy curriculum to be taught in all government schools of the state, from standard 8 through graduation. This was a tricky task, as a new course had to be introduced immediately into the school curriculum amid all the disruptions the pandemic had caused in the education system. It was revealed during the interviews that in the case of the state-wide implementation, every step involved multiple checks and balances, unlike the earlier program, where stakeholders were roped in as and when the need emerged. On pedagogy, while training during the pilot could be managed through PowerPoint presentations, designing a state-wide curriculum involved multiple iterations and expert approvals, in part because COVID-19-related misinformation had since lost its significance. In the next phase of the research, an attempt will be made to compare other aspects of the pilot implementation with the state-wide implementation.

Keywords: media literacy, digital media literacy, curriculum based media literacy intervention, misinformation

Procedia PDF Downloads 93
82 Cobb Angle Measurement from Coronal X-Rays Using Artificial Neural Networks

Authors: Andrew N. Saylor, James R. Peters

Abstract:

Scoliosis is a complex 3D deformity of the thoracic and lumbar spine, clinically diagnosed by measurement of a Cobb angle of 10 degrees or more on a coronal X-ray. The Cobb angle is the angle between the lines drawn along the proximal and distal endplates of the respective proximal and distal vertebrae comprising the curve. Traditionally, Cobb angles are measured manually using a marker, straight edge, and protractor, or with image measurement software. The task of measuring the Cobb angle can also be represented as a function taking the spine geometry rendered by X-ray imaging as input and returning the approximate angle. Although the form of such a function may be unknown, it can be approximated using artificial neural networks (ANNs). The performance of ANNs is affected by many factors, including the choice of activation function and network architecture; however, the effects of these parameters on the accuracy of scoliotic deformity measurements are poorly understood. Therefore, the objective of this study was to systematically investigate the effect of ANN architecture and activation function on Cobb angle measurement from the coronal X-rays of scoliotic subjects. The data set for this study consisted of 609 coronal chest X-rays of scoliotic subjects divided into 481 training images and 128 test images. These data, which included labeled Cobb angle measurements, were obtained from the SpineWeb online database. To normalize the input data, each image was resized using bilinear interpolation to 500 × 187 pixels, and the pixel intensities were scaled to lie between 0 and 1. A fully connected (dense) ANN with a fixed cost function (mean squared error), batch size (10), and learning rate (0.01) was developed using Python 3.7.3 and TensorFlow 1.13.1. 
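The geometric definition of the Cobb angle above can be sketched in code. The function below is purely illustrative and not part of the study; its name and the vector-based interface are assumptions. It takes the directions of the two endplate lines and returns the acute angle between them.

```python
import math

def cobb_angle(proximal_dir, distal_dir):
    """Acute angle (degrees) between two endplate lines, each given as a
    direction vector (dx, dy) drawn along the vertebral endplate."""
    (ax, ay), (bx, by) = proximal_dir, distal_dir
    # Angle of each line with the horizontal, then the folded difference.
    theta_a = math.degrees(math.atan2(ay, ax))
    theta_b = math.degrees(math.atan2(by, bx))
    diff = abs(theta_a - theta_b) % 180
    return min(diff, 180 - diff)

# Helper: unit direction vector for an endplate tilted `deg` from horizontal.
def tilt(deg):
    return (math.cos(math.radians(deg)), math.sin(math.radians(deg)))

# Endplates tilted +10 and -15 degrees give a Cobb angle of ~25 degrees.
angle = cobb_angle(tilt(10), tilt(-15))
```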
The activation function (sigmoid, hyperbolic tangent [tanh], or rectified linear units [ReLU]), number of hidden layers (1, 3, 5, or 10), and number of neurons per layer (10, 100, or 1000) were varied systematically to generate a total of 36 network conditions. Stochastic gradient descent with early stopping was used to train each network. Three trials were run per condition, and the final mean squared errors and mean absolute errors were averaged to quantify the network response for each condition. The best-performing network used ReLU neurons, three hidden layers, and 100 neurons per layer. Its average mean squared error was 222.28 ± 30 degrees², and its average mean absolute error was 11.96 ± 0.64 degrees. It is also notable that, while most of the networks performed similarly, the networks using ReLU neurons, 10 hidden layers, and 1000 neurons per layer, and those using tanh neurons, one hidden layer, and 10 neurons per layer, performed markedly worse, with average mean squared errors greater than 400 degrees² and average mean absolute errors greater than 16 degrees. From these results, it can be seen that the choice of ANN architecture and activation function has a clear impact on Cobb angle inference from coronal X-rays of scoliotic subjects.
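The factorial design described above can be made concrete with a short sketch. The enumeration below is illustrative only: the configuration keys are hypothetical names, though the factor levels and the fixed loss, batch size, and learning rate match those reported in the abstract.

```python
from itertools import product

# Factors varied in the study: 3 activations x 4 depths x 3 widths = 36.
ACTIVATIONS = ["sigmoid", "tanh", "relu"]
HIDDEN_LAYERS = [1, 3, 5, 10]
NEURONS_PER_LAYER = [10, 100, 1000]

def network_conditions():
    """One configuration dict per condition; the loss, batch size, and
    learning rate are held fixed across conditions, as in the study."""
    return [
        {"activation": a, "hidden_layers": d, "neurons": w,
         "loss": "mse", "batch_size": 10, "learning_rate": 0.01}
        for a, d, w in product(ACTIVATIONS, HIDDEN_LAYERS, NEURONS_PER_LAYER)
    ]

conditions = network_conditions()
# The reported best performer corresponds to this condition.
best = {"activation": "relu", "hidden_layers": 3, "neurons": 100,
        "loss": "mse", "batch_size": 10, "learning_rate": 0.01}
```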

Keywords: scoliosis, artificial neural networks, cobb angle, medical imaging

Procedia PDF Downloads 129
81 Phonological Processing and Its Role in Pseudo-Word Decoding in Children Learning to Read Kannada Language between 5.6 to 8.6 Years

Authors: Vangmayee. V. Subban, Somashekara H. S, Shwetha Prabhu, Jayashree S. Bhat

Abstract:

Introduction and Need: Phonological processing is critical in learning to read both alphabetic and non-alphabetic languages. However, its role in learning to read Kannada, an alphasyllabary, is equivocal. The literature has focused on the developmental role of phonological awareness in reading. To the best of the authors' knowledge, the role of phonological memory and phonological naming has not been addressed in the alphasyllabary Kannada language. Therefore, there is a need to evaluate the comprehensive role of phonological processing skills in Kannada on word decoding skills during the early years of schooling. Aim and Objectives: The present study aimed to explore phonological processing abilities and their role in learning to decode pseudowords in children learning to read the Kannada language during the initial years of formal schooling, between 5.6 and 8.6 years. Method: In this cross-sectional study, 60 typically developing Kannada-speaking children, 20 each from Grade I, Grade II, and Grade III in the age ranges 5.6 to 6.6 years, 6.7 to 7.6 years, and 7.7 to 8.6 years respectively, were selected from Kannada-medium schools. Phonological processing abilities were assessed using an assessment tool specifically developed to address the objectives of the present research. The tool was content-validated by subject experts and had good inter- and intra-subject reliability. Phonological awareness was assessed at the syllable level using syllable segmentation, blending, and syllable stripping at initial, medial, and final positions. Phonological memory was assessed using a pseudoword repetition task, and phonological naming was assessed using rapid automatized naming of objects. Both the phonological awareness and phonological memory measures were scored for accuracy of response, whereas Rapid Automatized Naming (RAN) was scored for total naming speed. 
Results: Mean score comparisons using one-way ANOVA revealed a significant difference (p ≤ 0.05) between the groups on all measures of phonological awareness, pseudoword repetition, rapid automatized naming, and pseudoword reading. Subsequent post-hoc grade-wise comparison using the Bonferroni test revealed significant differences (p ≤ 0.05) between each of the grades for all tasks, except (p ≥ 0.05) for syllable blending, syllable stripping, and pseudoword repetition between Grade II and Grade III. Pearson correlations revealed highly significant positive correlations (p < 0.001) between all the variables except phonological naming, which had significant negative correlations. However, the correlation coefficients were higher for the phonological awareness measures than for the others. Hence, phonological awareness was entered as the first independent variable in the hierarchical regression equation, followed by rapid automatized naming and, finally, pseudoword repetition. The regression analysis revealed syllable awareness as the single most significant predictor of pseudoword reading, explaining a unique variance of 74%, and there was no significant change in R² when RAN and pseudoword repetition were added subsequently to the regression equation. Conclusion: The present study concluded that syllable awareness matures completely by Grade II, whereas phonological memory and phonological naming continue to develop beyond Grade III. Amongst the phonological processing skills, phonological awareness, especially syllable awareness, is more crucial for word decoding than phonological memory and naming during the initial years of schooling.
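The hierarchical regression strategy described above can be illustrated with a minimal sketch on simulated data. The scores and effect sizes below are invented for illustration, not the study's data. The sketch computes R² as each predictor is entered; on training data, R² can only stay flat or grow as predictors are added, which is why a negligible change in R² at later steps is the informative result.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 60  # matches the study's sample size; the values are simulated

# Hypothetical standardized scores for the three predictors and the outcome.
awareness = rng.normal(size=n)       # syllable awareness
ran = rng.normal(size=n)             # rapid automatized naming speed
repetition = rng.normal(size=n)      # pseudoword repetition
reading = 0.86 * awareness + 0.1 * ran + rng.normal(scale=0.5, size=n)

def r_squared(predictors, y):
    """R-squared of an OLS fit with intercept on the given predictors."""
    X = np.column_stack([np.ones(len(y))] + list(predictors))
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return 1.0 - resid.var() / y.var()

# Hierarchical entry: awareness first, then RAN, then repetition.
r2_step1 = r_squared([awareness], reading)
r2_step2 = r_squared([awareness, ran], reading)
r2_step3 = r_squared([awareness, ran, repetition], reading)
```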

Keywords: phonological awareness, phonological memory, phonological naming, phonological processing, pseudo-word decoding

Procedia PDF Downloads 175
80 Moral Decision-Making in the Criminal Justice System: The Influence of Gruesome Descriptions

Authors: Michel Patiño-Sáenz, Martín Haissiner, Jorge Martínez-Cotrina, Daniel Pastor, Hernando Santamaría-García, Maria-Alejandra Tangarife, Agustin Ibáñez, Sandra Baez

Abstract:

It has been shown that gruesome descriptions of harm can increase the punishment given to a transgressor. This biasing effect is mediated by negative emotions, which are elicited upon the presentation of gruesome descriptions. However, there is a lack of studies inquiring into the influence of such descriptions on moral decision-making in people involved in the criminal justice system. Such populations are of special interest since they have experience dealing with gruesome evidence, but also formal education on how to assess evidence and gauge the appropriate punishment according to the law. Likewise, they are expected to be objective and rational when performing their duty, because their decisions can profoundly impact people's lives. Considering these antecedents, the objective of this study was to explore the influence of gruesome written descriptions on moral decision-making in this group of people. To that end, we recruited attorneys, judges, and public prosecutors (criminal justice group, CJ, n=30) whose field of specialty is criminal law. In addition, we included a control group of people who did not have a formal education in law (n=30) but who were matched in age and years of education with the CJ group. All participants completed an online, Spanish-adapted version of a moral decision-making task, which was previously reported in the literature and also standardized and validated in the Latin-American context. A series of text-based stories describing two characters, one inflicting harm on the other, were presented to participants. The transgressor's intentionality (accidental vs. intentional harm) and the language (gruesome vs. plain) used to describe harm were manipulated employing a within-subjects and a between-subjects design, respectively. After reading each story, participants were asked to rate (a) the harmful action's moral adequacy, (b) the amount of punishment the transgressor deserved, and (c) how damaging his behavior was. 
Results showed main effects of group, intentionality, and type of language on all dependent measures. In both groups, intentional harmful actions were rated as significantly less morally adequate, were punished more severely, and were deemed more damaging. Moreover, control subjects deemed every type of action more damaging, and punished it more severely, than the CJ group did. In addition, there was an interaction between intentionality and group. People in the control group rated harmful actions as less morally adequate than the CJ group, but only when the action was accidental. Also, there was an interaction between intentionality and language on punishment ratings. Controls punished more when harm was described using gruesome language. However, that was not the case for people in the CJ group, who assigned the same amount of punishment in both conditions. In conclusion, participants with work experience in the criminal justice system or criminal law differ in the way they make moral decisions. In particular, they seem to be less sensitive to the biasing effect of gruesome evidence, which is probably explained by their formal education or their experience in dealing with such evidence. Nonetheless, more studies are needed to determine the impact this phenomenon has on the fulfillment of their duty.

Keywords: criminal justice system, emotions, gruesome descriptions, intentionality, moral decision-making

Procedia PDF Downloads 188
79 The Effects of Aging on Visuomotor Behaviors in Reaching

Authors: Mengjiao Fan, Thomson W. L. Wong

Abstract:

It is unavoidable that older adults may have to deal with aging-related motor problems. Aging is highly likely to affect motor learning and control as well. For example, older adults may suffer from poor motor function and quality of life due to age-related eye changes. These adverse changes in vision result in the impairment of movement automaticity. Reaching is a fundamental component of various complex movements and is therefore a beneficial context in which to explore changes and adaptation in visuomotor behaviors. The current study aims to explore how aging affects visuomotor behaviors by comparing motor performance and gaze behaviors between two age groups (i.e., young and older adults). Visuomotor behaviors in reaching, under conditions providing or blocking online visual feedback (simulated visual deficiency), were investigated in 60 healthy young adults (mean age = 24.49 years, SD = 2.12) and 37 older adults (mean age = 70.07 years, SD = 2.37) with normal or corrected-to-normal vision. Participants in each group were randomly allocated into two subgroups. Subgroup 1 was provided with online visual feedback of the hand-controlled mouse cursor, whereas in subgroup 2, visual feedback was blocked to simulate visual deficiency. The experimental task required participants to complete 20 reaches to a target by controlling the mouse cursor on the computer screen. Across all 20 trials, the start position was in the center of the screen and the target appeared at a position randomly selected by a tailor-made computer program. Primary outcomes of motor performance and gaze behavior data were recorded by the EyeLink II (SR Research, Canada). The results suggested that aging significantly affects performance of the reaching task in both visual feedback conditions. 
In both age groups, blocking online visual feedback of the cursor in reaching resulted in longer hand movement times (p < .001), longer reaching distances away from the target center (p < .001), and poorer reaching motor accuracy (p < .001). Concerning gaze behaviors, blocking online visual feedback increased the first fixation duration in young adults (p < .001) but decreased it in older adults (p < .001). Besides, under the condition providing online visual feedback of the cursor, older adults maintained a longer fixation dwell time on the target throughout reaching than the young adults (p < .001), although the effect was not significant under the blocked visual feedback condition (p = .215). Therefore, the results suggested that different levels of visual feedback during movement execution can affect gaze behaviors differently in older and young adults. Differential effects of aging on visuomotor behaviors appear under the two visual feedback patterns (i.e., blocking or providing online visual feedback of the hand-controlled cursor in reaching). Several specific gaze behaviors were found among the older adults, which imply that blocking visual feedback may induce extra perceptual load in movement execution, and that age-related visual degeneration might further deteriorate the situation. This provides insight for the future development of potential rehabilitative training methods (e.g., well-designed errorless training) to enhance visuomotor adaptation in the aging population, improving their movement automaticity by facilitating compensation for visual degeneration.

Keywords: aging effect, movement automaticity, reaching, visuomotor behaviors, visual degeneration

Procedia PDF Downloads 312
78 Modern Technology-Based Methods in Neurorehabilitation for Social Competence Deficit in Children with Acquired Brain Injury

Authors: M. Saard, A. Kolk, K. Sepp, L. Pertens, L. Reinart, C. Kööp

Abstract:

Introduction: Social competence is often impaired in children with acquired brain injury (ABI), but evidence-based rehabilitation for social skills has remained undeveloped. Modern technology-based methods create effective and safe learning environments for pediatric social skills remediation. The aim of the study was to implement our structured model of neurorehabilitation for socio-cognitive deficit using multitouch-multiuser tabletop (MMT) computer-based platforms and virtual reality (VR) technology. Methods: 40 children aged 8-13 years (yrs) participated in the pilot study: 30 with ABI (epilepsy, traumatic brain injury and/or tic disorder) and 10 healthy age-matched controls. Of the patients, 12 have completed the training (M = 11.10 yrs, SD = 1.543) and 20 are still in training or in the waiting-list group (M = 10.69 yrs, SD = 1.704). All children performed the first individual and paired assessments. For patients, second evaluations were performed after the intervention period. Two interactive applications were implemented into the rehabilitation design: Snowflake software on the MMT tabletop and NoProblem on the DiamondTouch Table (DTT), which allowed paired training (2 children at once). Also, in individual training sessions, the HTC Vive VR device was used with VR metaphors of difficult social situations to treat social anxiety and train social skills. Results: At baseline (B) evaluations, patients had higher deficits in executive functions on the BRIEF parents’ questionnaire (M = 117, SD = 23.594) compared to healthy controls (M = 22, SD = 18.385). The most impaired components of social competence were emotion recognition, Theory of Mind (ToM) skills, cooperation, verbal/non-verbal communication, and pragmatics (Friendship Observation Scale scores of only 25-50% out of 100% for patients). In the Sentence Completion Task and Spence Anxiety Scale, the patients reported a lack of friends, behavioral problems, bullying at school, and social anxiety. 
Outcome evaluations: Snowflake on the MMT improved executive and cooperation skills, and the DTT developed communication skills, metacognitive skills, and coping. VR, video modelling, and role-plays improved social attention, emotional attitude, and gestural behaviors, and decreased social anxiety. NEPSY-II showed improvement in Affect Recognition [B = 7, SD = 5.01 vs outcome (O) = 10, SD = 5.85], Verbal ToM (B = 8, SD = 3.06 vs O = 10, SD = 4.08), and Contextual ToM (B = 8, SD = 3.15 vs O = 11, SD = 2.87). The ToM Stories test showed an improved understanding of Intentional Lying (B = 7, SD = 2.20 vs O = 10, SD = 0.50) and Sarcasm (B = 6, SD = 2.20 vs O = 7, SD = 2.50). Conclusion: Neurorehabilitation based on the Structured Model of Neurorehab for Socio-Cognitive Deficit in children with ABI was effective in social skills remediation. The model helps to understand the theoretical connections between components of social competence and modern interactive computerized platforms. We encourage therapists to implement these next-generation devices in the rehabilitation process, as MMT and VR interfaces are motivating for children, thus ensuring good compliance. Improving children’s social skills is important for their and their families’ quality of life and social capital.

Keywords: acquired brain injury, children, social skills deficit, technology-based neurorehabilitation

Procedia PDF Downloads 120
77 Management of Femoral Neck Stress Fractures at a Specialist Centre and Predictive Factors to Return to Activity Time: An Audit

Authors: Charlotte K. Lee, Henrique R. N. Aguiar, Ralph Smith, James Baldock, Sam Botchey

Abstract:

Background: Femoral neck stress fractures (FNSF) are uncommon, making up 1 to 7.2% of stress fractures in healthy subjects. FNSFs are prevalent in young women, military recruits, endurance athletes, and individuals with energy deficiency syndrome or the female athlete triad. Presentation is often non-specific, and the condition is often misdiagnosed following the initial examination. There is limited research addressing return-to-activity time after FNSF. Previous studies have demonstrated prognostic time predictions based on various imaging techniques. Here, (1) OxSport clinic FNSF practice standards are retrospectively reviewed, (2) FNSF cohort demographics are examined, and (3) regression models are used to predict return-to-activity prognosis and consequently determine bone stress risk factors. Methods: Patients with a diagnosis of FNSF attending the OxSport clinic between 01/06/2020 and 01/01/2020 were selected from the Rheumatology Assessment Database Innovation in Oxford (RhADiOn) and the OxSport Stress Fracture Database (n = 14). (1) Clinical practice was audited against five criteria based on local and National Institute for Health and Care Excellence guidance, with a 100% standard. (2) Demographics of the FNSF cohort were examined with Student’s t-test. (3) Lastly, linear regression and random forest regression models were used on this patient cohort to predict return-to-activity time. Consequently, an analysis of feature importance was conducted after fitting each model. Results: OxSport clinical practice met the standard (100%) in 3/5 criteria. The criteria not met were patient waiting times and documentation of all bone stress risk factors. Importantly, analysis of patient demographics showed that, of the population with complete bone stress risk factor assessments, 53% were positive for modifiable bone stress risk factors. Lastly, linear regression analysis was utilized to identify demographic factors that predicted return-to-activity time [R² = 79.172%; average error 0.226]. 
This analysis identified four key variables that predicted return-to-activity time: vitamin D level, total hip DEXA T value, femoral neck DEXA T value, and history of an eating disorder/disordered eating. Furthermore, random forest regression models were employed for this task [R² = 97.805%; average error 0.024]. Analysis of the importance of each feature again identified a set of four variables, three of which matched the linear regression analysis (vitamin D level, total hip DEXA T value, and femoral neck DEXA T value), and a fourth: age. Conclusion: OxSport clinical practice could be improved by evaluating bone stress risk factors more comprehensively. The importance of this evaluation is demonstrated by the proportion of the population found positive for these risk factors. Using this cohort, potential bone stress risk factors that significantly impacted return-to-activity prognosis were identified using regression models.
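As an illustrative sketch of the feature-importance step, and not the audit's actual code or data, a linear model can be fitted on standardized predictors and its absolute coefficients ranked. All values below are simulated, and the random-forest stage is replaced here by this simpler standardized-coefficient proxy.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 14  # cohort size in the audit; every value below is simulated

# Hypothetical predictors (columns): vitamin D, total hip DEXA T score,
# femoral neck DEXA T score, disordered-eating history (0/1), and age.
X = np.column_stack([
    rng.normal(30, 10, n),        # vitamin D (nmol/L)
    rng.normal(-0.5, 1.0, n),     # total hip DEXA T score
    rng.normal(-0.8, 1.0, n),     # femoral neck DEXA T score
    np.tile([0.0, 1.0], n // 2),  # history of disordered eating
    rng.normal(28, 6, n),         # age (years)
])
# Simulated return-to-activity time (weeks), driven by three predictors.
weeks = 10 - 0.1 * X[:, 0] - 2.0 * X[:, 2] + 4.0 * X[:, 3] \
        + rng.normal(0, 1.0, n)

# Standardize, fit with an intercept, and rank absolute coefficients as a
# crude importance proxy (the audit also used random-forest importances).
Xz = (X - X.mean(axis=0)) / X.std(axis=0)
A = np.column_stack([np.ones(n), Xz])
beta, *_ = np.linalg.lstsq(A, weeks, rcond=None)
importance = np.abs(beta[1:])
ranking = np.argsort(importance)[::-1]  # feature indices, most important first
```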

Keywords: eating disorder, bone stress risk factor, femoral neck stress fracture, vitamin D

Procedia PDF Downloads 183
76 Three-Stage Least Squared Models of a Station-Level Subway Ridership: Incorporating an Analysis on Integrated Transit Network Topology Measures

Authors: Jungyeol Hong, Dongjoo Park

Abstract:

The urban transit system is a critical part of a solution to economic, energy, and environmental challenges, and it ultimately contributes to the improvement of people’s quality of life. To secure these advantages, the city of Seoul has constructed an integrated transit system comprising both subway and buses. As a result, approximately 6.9 million citizens use the integrated transit system every day for their trips. Diagnosing the current transit network is a significant task in providing a more convenient and pleasant transit environment. Therefore, the critical objective of this study is to establish a methodological framework for the analysis of an integrated bus-subway network and to examine the relationship between subway ridership and parameters such as network topology measures, bus demand, and a variety of commercial business facilities. Regarding statistical approaches to estimating subway ridership at the station level, many previous studies relied on ordinary least squares regression, but few considered the endogeneity issues that might arise in a subway ridership prediction model. This study focused on both discovering the impacts of integrated transit network topology measures and the endogenous effect of bus demand on subway ridership. It could ultimately contribute to developing more accurate subway ridership estimation that accounts for this statistical bias. The spatial scope of the study covers the city of Seoul in South Korea, including 243 subway stations and 10,120 bus stops, with a temporal scope of twenty-four hours in one-hour panels. Detailed subway and bus ridership information was collected from the Seoul Smart Card data for 2015 and 2016. First, integrated subway-bus network topology measures characterizing connectivity, centrality, transitivity, and reciprocity were estimated based on complex network theory. 
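A minimal sketch of one such topology measure, closeness centrality, on a toy integrated network may clarify the idea. The station and stop labels below are invented; the study used far larger networks and additional measures such as betweenness, transitivity, and reciprocity.

```python
from collections import deque

def closeness(adj, node):
    """Closeness centrality on an unweighted graph: (n - 1) divided by the
    sum of BFS shortest-path distances from `node` to every other node."""
    dist = {node: 0}
    queue = deque([node])
    while queue:
        u = queue.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                queue.append(v)
    total = sum(d for k, d in dist.items() if k != node)
    return (len(adj) - 1) / total if total else 0.0

# Toy integrated network: subway stations S1-S3 in a line, plus one bus
# stop B1 that links S1 and S3 directly.
adj = {
    "S1": ["S2", "B1"],
    "S2": ["S1", "S3"],
    "S3": ["S2", "B1"],
    "B1": ["S1", "S3"],
}
```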
The results of the integrated transit network topology analysis were compared to those for the subway-only network topology. Also, a non-recursive approach, three-stage least squares, was applied to develop the daily subway ridership model, capturing the endogeneity between bus and subway demand. Independent variables included roadway geometry, commercial business characteristics, socio-economic characteristics, a safety index, transit facility attributes, and dummies for seasons and time zones. Consequently, it was found that the network topology measures had significant effects. In particular, for the centrality measures, the elasticity was 4.88% for closeness centrality and 24.48% for betweenness centrality, while the elasticity of bus ridership was 8.85%. Moreover, it was shown that bus demand and subway ridership are endogenous in a non-recursive manner, since predicted bus ridership and predicted subway ridership are statistically significant in OLS regression models. Therefore, the three-stage least squares model appears to be a plausible model for efficient subway ridership estimation. It is expected that the proposed approach provides a reliable guideline that can be used as part of the spectrum of tools for evaluating a city-wide integrated transit network.
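The endogeneity problem and the instrumental-variables idea underlying three-stage least squares can be sketched with simulated data. The sketch shows only a single-equation two-stage estimate, a simplification of the full 3SLS system, and the variable roles (an exogenous topology measure instrumenting bus demand) are illustrative assumptions, not the study's specification.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 20000  # simulated station-hour observations

# An unobserved factor (e.g. nearby commercial activity) drives BOTH bus
# demand and subway ridership, making bus demand endogenous.
unobserved = rng.normal(size=n)
instrument = rng.normal(size=n)  # exogenous, e.g. a bus topology measure
bus = instrument + unobserved + rng.normal(size=n)
subway = 2.0 * bus + 3.0 * unobserved + rng.normal(size=n)  # true effect 2.0

def fit(x, y):
    """OLS with intercept; returns [intercept, slope]."""
    X = np.column_stack([np.ones(len(y)), x])
    b, *_ = np.linalg.lstsq(X, y, rcond=None)
    return b

naive = fit(bus, subway)[1]  # biased upward by the shared unobserved factor

# Stage 1: predict the endogenous regressor from the instrument.
b1 = fit(instrument, bus)
bus_hat = b1[0] + b1[1] * instrument
# Stage 2: regress subway ridership on the predicted (exogenous) bus demand.
two_sls = fit(bus_hat, subway)[1]  # recovers a value near the true effect
```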

Keywords: integrated transit system, network topology measures, three-stage least squared, endogeneity, subway ridership

Procedia PDF Downloads 177
75 Earthquake Preparedness of School Community and E-PreS Project

Authors: A. Kourou, A. Ioakeimidou, S. Hadjiefthymiades, V. Abramea

Abstract:

During the last decades, the task of engaging governments, communities, and citizens to reduce the risk and vulnerability of populations has made variable progress. Experience has demonstrated that a lack of awareness, education, and preparedness may result in significant material and other losses at the onset of a disaster. Schools play a vital role in the community and are important carriers of the values and culture of society. A proper school education not only teaches children but is also a key factor in the promotion of a safety culture in the wider community. In Greece, the School Earthquake Safety Initiative has been undertaken by the Earthquake Planning and Protection Organization (EPPO) with specific actions (seminars, lectures, guidelines, educational material, campaigns, national or EU projects, drills, etc.). The objective of this initiative is to develop disaster-resilient school communities through awareness, self-help, cooperation, and education. School preparedness requires the participation of principals, teachers, students, parents, and the competent authorities. Preparation and earthquake readiness involve: a) learning what should be done before, during, and after an earthquake; b) doing or preparing to do these things now, before the next earthquake; and c) developing teachers’ and students’ skills to cope efficiently in case of an earthquake. In the above framework, this paper presents the results of a survey aimed at identifying the level of education and preparedness of the school community in Greece. More specifically, the survey questionnaire investigates issues regarding earthquake protection actions, appropriate attitudes and behaviors during an earthquake, and the existence of contingency plans at elementary and secondary schools. The questionnaires were administered to principals and teachers from different regions of the country who attend the EPPO national training project 'Earthquake Safety at Schools'. 
A closed-form questionnaire was developed for the survey, which contained questions regarding the following: a) knowledge of self-protective actions, b) existence of emergency planning at home, and c) existence of emergency planning at school (hazard mitigation actions, evacuation plan, and performance of drills). Survey results revealed that a high percentage of teachers have taken the appropriate preparedness measures concerning non-structural hazards at schools, the emergency school plan, and simulation drills every year. In order to improve action-planning for ongoing school disaster risk reduction, the implementation of earthquake drills, the involvement of students with disabilities, and the evaluation of school emergency plans, EPPO participates in the E-PreS project. The main objective of this project is to create smart tools which define, simulate, and evaluate all hazard emergency steps customized to the unique district and school. The project has developed a holistic methodology using real-time evaluation involving different categories of actors, districts, steps, and metrics. The project is supported by the EU Civil Protection Financial Instrument with a duration of two years. The coordinator is the Kapodistrian University of Athens, and the partners are from four countries: Greece, Italy, Romania, and Bulgaria.

Keywords: drills, earthquake, emergency plans, E-PreS project

Procedia PDF Downloads 235
74 A Convolution Neural Network PM-10 Prediction System Based on a Dense Measurement Sensor Network in Poland

Authors: Piotr A. Kowalski, Kasper Sapala, Wiktor Warchalowski

Abstract:

PM10 is suspended dust that primarily has a negative effect on the respiratory system. PM10 is responsible for attacks of coughing and wheezing, asthma, or acute, violent bronchitis. Indirectly, PM10 also negatively affects the rest of the body, including increasing the risk of heart attack and stroke. Unfortunately, Poland is a country that cannot boast of good air quality, in particular due to large PM concentration levels. Therefore, based on the dense network of Airly sensors, it was decided to address the problem of predicting suspended particulate matter concentrations. Due to the very complicated nature of this issue, a machine learning approach was used. For this purpose, Convolutional Neural Networks (CNNs) were adopted, these currently being the leading information processing methods in the field of computational intelligence. The aim of this research is to show the influence of particular CNN network parameters on the quality of the obtained forecast. The forecast itself is made on the basis of parameters measured by Airly sensors and is carried out for the subsequent day, hour after hour. The evaluation of the learning process for the investigated models was mostly based upon the mean squared error criterion; however, during model validation, a number of other methods of quantitative evaluation were taken into account. The presented pollution prediction model has been verified against real weather and air pollution data taken from the Airly sensor network. The dense and distributed network of Airly measurement devices enables access to current and archival data on air pollution, temperature, suspended particulate matter PM1.0, PM2.5, and PM10, CAQI levels, as well as atmospheric pressure and air humidity. In this investigation, PM2.5 and PM10, temperature and wind measurements, as well as external forecasts of temperature and wind for the next 24 hours, served as input data. 
Due to the specificity of the CNN-type network, this data is transformed into tensors and then processed. The network consists of an input layer, an output layer, and many hidden layers. In the hidden layers, convolutional and pooling operations are performed. The output of this system is a vector of 24 elements containing the prediction of PM10 concentration for the upcoming 24-hour period. Over 1000 models based on the CNN methodology were tested during the study. Several that gave the best results were selected, and a comparison was then made with other models based on linear regression. The numerical tests carried out fully confirmed the positive properties of the presented method. These were carried out using real ‘big’ data. Models based on the CNN technique allow prediction of PM10 dust concentration with a much smaller mean square error than currently used methods based on linear regression. Moreover, the use of neural networks increased the R² coefficient (squared Pearson correlation) by about 5 percent compared to the linear model. During the simulation, the R² coefficient was 0.92, 0.76, 0.75, 0.73, and 0.73 for the 1st, 6th, 12th, 18th, and 24th hour of prediction, respectively.
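The two evaluation criteria named above, mean square error and the squared Pearson correlation (R²), can be sketched in a few lines. This is an illustrative Python sketch on synthetic data, not the authors' code; the noise levels, sample size, and "CNN-like"/"linear-like" predictors are arbitrary assumptions standing in for real model outputs.

```python
import numpy as np

def mse(y_true, y_pred):
    """Mean square error, the main training/validation criterion above."""
    return float(np.mean((y_true - y_pred) ** 2))

def r_squared(y_true, y_pred):
    """Squared Pearson correlation between measured and predicted PM10."""
    r = np.corrcoef(y_true, y_pred)[0, 1]
    return float(r ** 2)

# Synthetic hourly PM10 series (ug/m3) and two hypothetical predictors:
# a low-error one standing in for the CNN, a higher-error linear baseline.
rng = np.random.default_rng(0)
measured = 40.0 + 15.0 * rng.standard_normal(500)
cnn_like = measured + 4.0 * rng.standard_normal(500)
linear_like = measured + 10.0 * rng.standard_normal(500)

mse_cnn, mse_lin = mse(measured, cnn_like), mse(measured, linear_like)
r2_cnn, r2_lin = r_squared(measured, cnn_like), r_squared(measured, linear_like)
```

A lower MSE and a higher R² for the CNN-like predictor reproduce, on toy data, the kind of comparison the study reports per forecast hour.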

Keywords: air pollution prediction (forecasting), machine learning, regression task, convolution neural networks

Procedia PDF Downloads 149
73 Cross-Cultural Conflict Management in Transnational Business Relationships: A Qualitative Study with Top Executives in Chinese, German and Middle Eastern Cases

Authors: Sandra Hartl, Meena Chavan

Abstract:

This paper presents the outcome of a four-year Ph.D. research project on cross-cultural conflict management in transnational business relationships. An important and complex problem about managing conflicts that arise across cultures in business relationships is investigated, and conflict resolution strategies are identified. This paper particularly focuses on transnational relationships within a Chinese, German and Middle Eastern framework. Unlike many papers on this issue, which have been built on experiments with international MBA students, this research provides real-life cases of cross-cultural conflicts, which are not easy to capture. Its uniqueness lies in the fact that the real case data were gathered by interviewing top executives in management positions at large multinational corporations through a qualitative case study approach. This paper makes a valuable contribution to the theory of cross-cultural conflicts and, despite the sensitivity of the topic, presents real business data about breaches of contracts between two counterparties engaged in transnationally operating organizations. The overarching aim of this research is to identify the degree of significance of the cultural factors and the communication factors embedded in cross-cultural business conflicts. It asks, from a cultural perspective, what factors lead to the conflicts in each of the cases, what the causes are, and what role culture plays in identifying effective strategies for resolving international disputes in an increasingly globalized business world. The results of 20 face-to-face interviews are outlined, which were conducted, recorded, transcribed and then analyzed using the NVivo qualitative data analysis system. The outcomes make evident that the factors leading to conflicts are broadly organized under seven themes: communication, cultural difference, environmental issues, work structures, knowledge and skills, cultural anxiety and personal characteristics.
When evaluating the causes of conflict, it is notable that these are multidimensional. Irrespective of the conflict type (relationship-based, task-based, or due to individual personal differences), relationships are almost always an element of the conflict. Cultural differences, which are a critical factor in conflicts, result from different cultures placing different levels of importance on relationships. Communication issues, another cause of conflict, also reflect the different relationship styles favored by different cultures. In identifying effective strategies for solving cross-cultural business conflicts, this research finds that solutions need to consider the national cultures (country-specific characteristics), organizational cultures and individual culture of the persons engaged in the conflict, and how these are interlinked. The outcomes identify practical dispute resolution strategies for resolving cross-cultural business conflicts with reference to communication, empathy and training to improve cultural understanding and cultural competence, through the use of mediation. To conclude, the findings of this research will not only add value to academic knowledge of cross-cultural conflict management across transnational businesses but will also add value to numerous cross-border business relationships worldwide. Above all, it identifies the influence of culture, communication and cross-cultural competence in reducing cross-cultural business conflicts in transnational business.

Keywords: business conflict, conflict management, cross-cultural communication, dispute resolution

Procedia PDF Downloads 163
72 Blended Learning in a Mathematics Classroom: A Focus in Khan Academy

Authors: Sibawu Witness Siyepu

Abstract:

This study explores the effects of an instructional design using blended learning on the learning of radian measures among engineering students. Blended learning is an education programme that combines online digital media with traditional classroom methods. It requires the physical presence of both lecturer and student in a mathematics computer laboratory. Blended learning provides an element of student control over time, place, path or pace. The focus was on the use of Khan Academy to supplement traditional classroom interactions. Khan Academy is a non-profit educational organisation created by educator Salman Khan with the goal of creating an accessible place for students to learn through watching videos in a computer-assisted environment. The researcher, who is also a lecturer in a mathematics support programme, collected data by instructing students to watch Khan Academy videos on radian measures and by supplying students with traditional classroom activities. The classroom activities entailed radian measure exercises extracted from the Internet. Students were given an opportunity to engage in class discussions, social interactions and collaborations. These activities required students to write formative assessment tests. The purpose of the formative assessment tests was to find out about the students’ understanding of radian measures, including errors and misconceptions they displayed in their calculations. Identification of errors and misconceptions serves as a pointer to students’ weaknesses and strengths in their learning of radian measures. At the end of data collection, semi-structured interviews were administered to a purposefully sampled group to explore their perceptions and feedback regarding the use of the blended learning approach in the teaching and learning of radian measures. The study employed the Algebraic Insight Framework to analyse the data collected.
The Algebraic Insight Framework is a subset of symbol sense which allows a student to correctly and efficiently enter expressions into computer-assisted systems. This study offered students opportunities to enter topics and subtopics on radian measures into a computer through the lens of Khan Academy. Khan Academy demonstrates the procedures followed to reach solutions of mathematical problems. The researcher performed the task of explaining mathematical concepts and facilitated the process of reinvention of rules and formulae in the learning of radian measures. Lastly, activities that reinforce students’ understanding of radian measures were distributed. Results showed that this study motivated the students in their learning of radian measures. Learning through videos prompted the students to ask questions, which brought clarity and sense-making to the classroom discussions. The data revealed that sense-making through reinvention of rules and formulae assisted the students in enhancing their learning of radian measures. This study recommends that the use of Khan Academy in blended learning be introduced as a socialisation programme to all first-year students. This will prepare students who are computer illiterate to become conversant with the use of Khan Academy as a powerful tool in the learning of mathematics. Khan Academy is a key technological tool that is pivotal for the development of students’ autonomy in the learning of mathematics and that promotes collaboration with lecturers and peers.

Keywords: algebraic insight framework, blended learning, Khan Academy, radian measures

Procedia PDF Downloads 310
71 Deep Learning Based Text to Image Synthesis for Accurate Facial Composites in Criminal Investigations

Authors: Zhao Gao, Eran Edirisinghe

Abstract:

The production of an accurate sketch of a suspect based on a verbal description obtained from a witness is an essential task in most criminal investigations. The criminal investigation system employs specifically trained professional artists to manually draw a facial image of the suspect according to the descriptions of an eyewitness for subsequent identification. With the advancement of Deep Learning, Recurrent Neural Networks (RNNs) have shown great promise in Natural Language Processing (NLP) tasks. Additionally, Generative Adversarial Networks (GANs) have proven to be very effective in image generation. In this study, a trained GAN conditioned on textual features, such as keywords automatically encoded from a verbal description of a human face using an RNN, is used to generate photo-realistic facial images for criminal investigations. The intention of the proposed system is to map the features encoded from verbal descriptions onto corresponding facial features. With this, it becomes possible to generate many reasonably accurate alternatives from which the witness can attempt to identify a suspect. This reduces subjectivity in decision making by both the eyewitness and the artist, while giving the witness an opportunity to evaluate and reconsider decisions. Furthermore, the proposed approach benefits law enforcement agencies by reducing the time taken to physically draw each potential sketch, thus improving response times and mitigating potentially malicious human intervention. With the publicly available 'CelebFaces Attributes Dataset' (CelebA), supplemented with verbal descriptions as training data, the proposed architecture is able to effectively produce facial structures from given text. Word embeddings are learnt by applying the RNN architecture in order to perform semantic parsing, the output of which is fed into the GAN for synthesizing photo-realistic images.
Rather than the grid search method, a metaheuristic search based on genetic algorithms is applied to evolve the network, with the intent of achieving optimal hyperparameters in a fraction of the time of a typical brute-force approach. Beyond the CelebA training database, further novel test cases are supplied to the network for evaluation. Witness reports detailing criminals, obtained from Interpol or other law enforcement agencies, are run through the network. Using the descriptions provided, samples are generated and compared with the ground-truth images of a criminal in order to calculate their similarity. Two metrics are used for performance evaluation: the Structural Similarity Index (SSIM) and the Peak Signal-to-Noise Ratio (PSNR). High scores on these metrics would demonstrate the accuracy of the approach, supporting the case that it can be an effective tool for law enforcement agencies. The proposed approach to criminal facial image generation has the potential to increase the proportion of criminal cases that can ultimately be resolved using eyewitness information.
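The two evaluation metrics, PSNR and SSIM, can be computed directly from pixel arrays. Below is a minimal Python sketch: the PSNR follows the standard definition, while the SSIM here is a simplified global (single-window) variant rather than the sliding-window form usually reported; the test images are synthetic stand-ins, not CelebA samples.

```python
import numpy as np

def psnr(ref, test, max_val=255.0):
    """Peak Signal-to-Noise Ratio in dB between two images."""
    err = np.mean((ref.astype(np.float64) - test.astype(np.float64)) ** 2)
    if err == 0:
        return float("inf")
    return 10.0 * np.log10(max_val ** 2 / err)

def global_ssim(ref, test, max_val=255.0):
    """Simplified SSIM using global statistics (no sliding window)."""
    c1, c2 = (0.01 * max_val) ** 2, (0.03 * max_val) ** 2
    x, y = ref.astype(np.float64), test.astype(np.float64)
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * cov + c2)) / ((mx**2 + my**2 + c1) * (vx + vy + c2))

# Synthetic "ground truth" image and a lightly corrupted version of it.
rng = np.random.default_rng(1)
truth = rng.integers(0, 256, (64, 64)).astype(np.float64)
noisy = np.clip(truth + 5.0 * rng.standard_normal((64, 64)), 0, 255)
```

A high PSNR and an SSIM close to 1 would indicate a generated face close to the ground truth, which is the comparison the study performs.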

Keywords: RNN, GAN, NLP, facial composition, criminal investigation

Procedia PDF Downloads 162
70 Confidence Envelopes for Parametric Model Selection Inference and Post-Model Selection Inference

Authors: I. M. L. Nadeesha Jayaweera, Adao Alex Trindade

Abstract:

In choosing a candidate model in likelihood-based modeling via an information criterion, the practitioner is often faced with the difficult task of deciding just how far up the ranked list to look. Motivated by this pragmatic necessity, we construct an uncertainty band for a generalized (model selection) information criterion (GIC), defined as a criterion whose limit in probability is identical to that of the normalized log-likelihood. This includes common special cases such as the AIC and BIC. The method starts from the asymptotic normality of the GIC for the joint distribution of the candidate models in an independent and identically distributed (IID) data framework and proceeds by deriving the (asymptotically) exact distribution of the minimum. The calculation of an upper quantile for its distribution then involves the computation of multivariate Gaussian integrals, which is amenable to efficient implementation via the R package "mvtnorm". The performance of the methodology is tested on simulated data by checking the coverage probability of nominal upper quantiles and compared to the bootstrap. Both methods give coverages close to nominal for large samples, but the bootstrap is two orders of magnitude slower. The methodology is subsequently extended to two other commonly used model structures: regression and time series. In the regression case, we derive the corresponding asymptotically exact distribution of the minimum GIC by invoking Lindeberg-Feller type conditions for triangular arrays, and are thus able to similarly calculate upper quantiles for its distribution via multivariate Gaussian integration. The bootstrap once again provides a default competing procedure, and we find that similar comparative performance metrics hold as in the IID case. The time series case is complicated by a far more intricate asymptotic regime for the joint distribution of the model GIC statistics.
Under a Gaussian likelihood, the default in most packages, one needs to derive the limiting distribution of a normalized quadratic form for a realization from a stationary series. Under conditions on the process satisfied by ARMA models, a multivariate normal limit is once again achieved. The bootstrap can, however, be employed for its computation, whence we are once again in the multivariate Gaussian integration paradigm for upper quantile evaluation. Comparisons of this bootstrap-aided semi-exact method with the full-blown bootstrap once again reveal similar performance but faster computation speeds. One of the most difficult problems in contemporary statistical methodological research is accounting for the extra variability introduced by model selection uncertainty, the so-called post-model selection inference (PMSI). We explore ways in which the GIC uncertainty band can be inverted to make inferences on the parameters. This is attempted in the IID case by pivoting the CDF of the asymptotically exact distribution of the minimum GIC. For inference on one parameter at a time and a small number of candidate models, this works well, whence the attained PMSI confidence intervals are wider than the MLE-based Wald intervals, as expected.
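The central computational step, an upper quantile of the minimum of jointly Gaussian GIC values, can be illustrated by simulation. The paper evaluates the corresponding multivariate Gaussian integrals with the R package "mvtnorm"; the Python sketch below approximates the same quantity by Monte Carlo, with an entirely hypothetical mean vector and covariance matrix for three candidate models.

```python
import numpy as np

def min_gic_upper_quantile(mean, cov, level=0.95, n_draws=200_000, seed=0):
    """Monte Carlo upper quantile of min_k GIC_k under joint normality.

    Stands in for the exact multivariate Gaussian integration ('mvtnorm')
    used in the paper; accuracy improves with n_draws.
    """
    rng = np.random.default_rng(seed)
    draws = rng.multivariate_normal(mean, cov, size=n_draws)
    return float(np.quantile(draws.min(axis=1), level))

# Hypothetical GIC means and covariance for three candidate models.
mean = np.array([10.0, 10.5, 12.0])
cov = np.array([[1.0, 0.6, 0.3],
                [0.6, 1.0, 0.5],
                [0.3, 0.5, 1.0]])
q95 = min_gic_upper_quantile(mean, cov, level=0.95)
```

Any candidate model whose observed GIC falls below such an upper quantile would remain in contention, which is the "how far up the ranked list to look" question the band addresses.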

Keywords: model selection inference, generalized information criteria, post-model selection, asymptotic theory

Procedia PDF Downloads 89
69 An Exploration of Special Education Teachers’ Practices in a Preschool Intellectual Disability Centre in Saudi Arabia

Authors: Faris Algahtani

Abstract:

Background: In Saudi Arabia, it is essential to know what practices are employed and considered effective by special education teachers working with preschool children with intellectual disabilities, as a prerequisite for identifying areas for improvement. Preschool provision for these children is expanding through a network of Intellectual Disability Centres (IDCs) while, in primary schools, a policy of inclusion is pursued and, in mainstream preschools, pilots have aimed at enhancing learning in readiness for primary schooling. This potentially widens the attainment gap between preschool children with and without intellectual disabilities, and influences the scope for improvement. Goal: The aim of the study was to explore special education teachers’ practices, and their perceptions of those practices, for preschool children with intellectual disabilities in Saudi Arabia. Method: A qualitative interpretive approach was adopted in order to gain a detailed understanding of how special education teachers in an IDC operate in the classroom. Fifteen semi-structured interviews were conducted with experienced and qualified teachers. Data were analysed using thematic analysis, based on themes identified from the literature review together with new themes emerging from the data. Findings: American methods strongly influenced teaching practices, in particular TEACCH (Treatment and Education of Autistic and related Communication-handicapped Children), which emphasises structure, schedules and specific methods of teaching tasks and skills; and ABA (Applied Behaviour Analysis), which aims to improve behaviours and skills by concentrating on detailed breakdown and teaching of task components and rewarding desired behaviours with positive reinforcement. The Islamic concept of education strongly influenced which teaching techniques were used and considered effective, and how they were applied.
Tensions were identified between the Islamic approach to disability, which accepts differences between human beings as created by Allah in order for people to learn to help and love each other, and the continuing stigmatisation of disability in many Arabic cultures, which means that parents who bring their children to an IDC often hope and expect that their children will be ‘cured’. Teaching methods were geared to reducing behavioural problems and social deficits rather than to developing the potential of the individual child, with some teachers recognizing the child’s need for greater freedom. Relationships with parents could in many instances be improved. Teachers considered both initial teacher education and professional development to be inadequate for their needs and the needs of the children they teach. This can be partly attributed to the separation of training and development of special education teachers from that of general teachers. Conclusion: Based on the findings, teachers’ practices could be improved by the inclusion of general teaching strategies, parent-teacher relationships and practical teaching experience in both initial teacher education and professional development. Coaching and mentoring support from carefully chosen special education teachers could assist the process, as could the presence of a second teacher or teaching assistant in the classroom.

Keywords: special education, intellectual disabilities, early intervention, early childhood

Procedia PDF Downloads 137
68 Biomimicked Nano-Structured Coating Elaboration by Soft Chemistry Route for Self-Cleaning and Antibacterial Uses

Authors: Elodie Niemiec, Philippe Champagne, Jean-Francois Blach, Philippe Moreau, Anthony Thuault, Arnaud Tricoteaux

Abstract:

Hygiene of equipment in contact with users is an important issue in the railroad industry. The frequent cleaning required to eliminate bacteria and dirt is costly, and contact parts also undergo daily mechanical stresses. It would therefore be valuable to develop a self-cleaning and antibacterial coating with sufficient adhesion and good resistance against mechanical and chemical stresses. To this end, a Ph.D. thesis co-financed by the Hauts-de-France region and the Maubeuge Val-de-Sambre conurbation authority has been under way since October 2017, building on earlier studies carried out by the Laboratory of Ceramic Materials and Processing. To accomplish this task, a soft chemical route has been implemented to impart a lotus effect to metallic substrates. It involves nanometric zinc oxide synthesis in the liquid phase below 100 °C. The originality here lies in varying the surface texture by modifying the synthesis time of the species in solution, which helps to adjust wettability. Nanostructured zinc oxide has been chosen because of its inherent photocatalytic effect, which can activate the degradation of organic substances. Two methods of heating have been compared: conventional and microwave-assisted. The tested substrates are made of stainless steel, in keeping with transport uses. Substrate preparation was the first step of the protocol: a meticulous cleaning of the samples was applied. The main goal of the elaboration protocol is to fix enough zinc-based seeds to make them grow during the next step as desired (nanorod-shaped). To improve this adhesion, a silica gel has been formulated and optimized to ensure chemical bonding between the substrate and the zinc seeds. The last step consists of depositing a long-chain carbonated organosilane to improve the superhydrophobicity of the coating. The quasi-proportionality between the reaction time and the nanorod length will be demonstrated. Water contact angles (greater than 150°) and roll-off angles at different steps of the process will be presented.
The antibacterial effect has been proved with Escherichia coli, Staphylococcus aureus, and Bacillus subtilis: the bacterial mortality rate is found to be four times higher than on a non-treated substrate. Photocatalytic experiments were carried out with different dyed solutions in contact with treated samples under UV irradiation. Spectroscopic measurements allow determination of the degradation times according to the quantity of zinc available on the surface. The final coating obtained is therefore not a monolayer but rather a stack of amorphous/crystalline/amorphous layers, which have been characterized by spectroscopic ellipsometry. We will show that the thickness of the nanostructured oxide layer depends essentially on the synthesis time set in the hydrothermal growth step. A green, easy-to-process and easy-to-control coating with self-cleaning and antibacterial properties has been synthesized, with satisfying surface structuration.

Keywords: antibacterial, biomimetism, soft-chemistry, zinc oxide

Procedia PDF Downloads 143
67 Novel Numerical Technique for Dusty Plasma Dynamics (Yukawa Liquids): Microfluidic and Role of Heat Transport

Authors: Aamir Shahzad, Mao-Gang He

Abstract:

Dusty plasmas have recently attracted widespread research interest. Over the last two decades, substantial efforts have been made by the scientific and technological community to investigate the transport properties, and their nonlinear behavior, of three-dimensional and two-dimensional nonideal complex (dusty plasma) liquids (NICDPLs). Different calculations have been made to sustain and utilize strongly coupled NICDPLs because of their remarkable scientific and industrial applications. Understanding the thermophysical properties of complex liquids under various conditions is of practical interest in the field of science and technology. The determination of thermal conductivity remains a demanding question for thermophysical researchers, and for several reasons very few results are available for this significant property. The lack of thermal conductivity data for dense and complex liquids at the parameters relevant to industrial developments is a major barrier to quantitative knowledge of the heat flux from one medium to another medium or surface. The exact numerical investigation of transport properties of complex liquids is a fundamental research task in the field of thermophysics, as various transport data are closely related to the setup and confirmation of equations of state. Reliable knowledge of transport data is also important for an optimized design of processes and apparatus in various engineering and science fields (e.g., thermoelectric devices); in particular, the provision of precise data for the parameters of heat, mass, and momentum transport is required. One of the promising computational techniques, homogeneous nonequilibrium molecular dynamics (HNEMD) simulation, is reviewed here with special emphasis on its application to transport problems of complex liquids.
This work is particularly motivated by being the first attempt to adapt the heat conduction problem, whose solution leads to polynomial velocity and temperature profiles, into an algorithm for the investigation of transport properties, and their nonlinear behaviors, in NICDPLs. The aim of the proposed work is to implement a NEMD algorithm (Poiseuille flow) and to deepen the understanding of thermal conductivity behaviors in Yukawa liquids. The Yukawa system is equilibrated through a Gaussian thermostat in order to maintain a constant system temperature (canonical ensemble, NVT). The output steps will be developed between 3.0×10⁵/ωₚ and 1.5×10⁵/ωₚ simulation time steps for the computation of λ data. The HNEMD algorithm shows that the thermal conductivity depends on the plasma parameters, and the position of the minimum value λmin shifts toward higher Γ with an increase in κ, as expected. The new investigations give more reliable simulated data for the plasma conductivity than earlier known simulation data, and generally differ from the earlier plasma λ₀ values by 2%-20%, depending on Γ and κ. It is shown that the results obtained at the normalized force field are in satisfactory agreement with various earlier simulation results. This algorithm shows that the new technique provides more accurate results, with fast convergence and small size effects, over a wide range of plasma states.
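As a small illustration of the ingredients named above, the sketch below (Python, reduced plasma units) evaluates the Yukawa (screened Coulomb) pair force and applies a simple velocity-rescaling stand-in for the Gaussian thermostat used to hold the NVT temperature; it is not the authors' HNEMD code, and all numerical values are arbitrary.

```python
import numpy as np

def yukawa_pair_force(r_vec, kappa):
    """Force on a particle from the Yukawa pair potential
    phi(r) = exp(-kappa * r) / r, in reduced units.

    dphi/dr = -(kappa*r + 1) * exp(-kappa*r) / r**2, so the (repulsive)
    force is F = (kappa*r + 1) * exp(-kappa*r) / r**2 along r_hat."""
    r = np.linalg.norm(r_vec)
    mag = (kappa * r + 1.0) * np.exp(-kappa * r) / r**2
    return mag * r_vec / r

def thermostat_rescale(vel, target_temp):
    """Rescale velocities so the kinetic temperature equals target_temp.

    A crude stand-in for the Gaussian (isokinetic) thermostat, which
    constrains the kinetic energy via a friction term at every step."""
    kinetic_temp = np.mean(np.sum(vel**2, axis=1)) / vel.shape[1]
    return vel * np.sqrt(target_temp / kinetic_temp)

# 100 particles with random velocities, rescaled to unit temperature.
rng = np.random.default_rng(2)
v = thermostat_rescale(rng.standard_normal((100, 3)), target_temp=1.0)
```

The force falls off exponentially with the screening parameter κ, which is what makes Yukawa liquids short-ranged compared with bare Coulomb systems.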

Keywords: molecular dynamics simulation, thermal conductivity, nonideal complex plasma, Poiseuille flow

Procedia PDF Downloads 274
66 Participatory Monitoring Strategy to Address Stakeholder Engagement Impact in Co-creation of NBS Related Project: The OPERANDUM Case

Authors: Teresa Carlone, Matteo Mannocchi

Abstract:

In the last decade, a growing number of international organizations have been pushing toward green solutions for adaptation to climate change. This is particularly true in the fields of Disaster Risk Reduction (DRR) and land planning, where Nature-Based Solutions (NBS) have been sponsored through funding programs and planning tools. Stakeholder engagement and co-creation of NBS are growing as a practice and research field in environmental projects, fostering the consolidation of a multidisciplinary socio-ecological approach to addressing hydro-meteorological risk. Even though research and financial interest are constantly spreading, the NBS mainstreaming process is still at an early stage, as innovative concepts and practices are difficult for a multitude of different actors to fully accept and adopt, which is needed to produce wide-scale societal change. The monitoring and impact evaluation of stakeholders’ participation in these processes represent a crucial aspect and should be seen as a continuous and integral element of the co-creation approach. However, setting up a fit-for-purpose monitoring strategy for different contexts is not an easy task, and multiple challenges emerge. In this scenario, the Horizon 2020 OPERANDUM project, designed to address the major hydro-meteorological risks that negatively affect European rural and natural territories through the co-design, co-deployment, and assessment of Nature-Based Solutions, represents a valid case study for testing a monitoring strategy from which to derive a broader, general and scalable monitoring framework. Applying a participative monitoring methodology based on a selected list of indicators that combines quantitative and qualitative data developed within the activities of the project, the paper proposes an experimental in-depth analysis of the stakeholder engagement impact in the co-creation process of NBS.
The main focus will be to identify and analyze which factors increase knowledge, social acceptance, and mainstreaming of NBS, also offering an experience-based guideline that could be integrated with the stakeholder engagement strategy in current and future environmental projects based on similarly strongly collaborative approaches, such as OPERANDUM. Measurement will be carried out through surveys submitted at different points in time to the same sample (stakeholders: policy makers, businesses, researchers, interest groups). Changes will be recorded and analyzed through focus groups in order to highlight causal explanations and to assess the proposed list of indicators, so as to steer the conduct of similar activities in other projects and/or contexts. The idea of the paper is to contribute to the construction of a more structured and shared corpus of indicators that can support the evaluation of the activities of involvement and participation of various levels of stakeholders in the co-production, planning, and implementation of NBS to address climate change challenges.

Keywords: co-creation and collaborative planning, monitoring, nature-based solution, participation & inclusion, stakeholder engagement

Procedia PDF Downloads 114
65 Comparing Radiographic Detection of Simulated Syndesmosis Instability Using Standard 2D Fluoroscopy Versus 3D Cone-Beam Computed Tomography

Authors: Diane Ghanem, Arjun Gupta, Rohan Vijayan, Ali Uneri, Babar Shafiq

Abstract:

Introduction: Ankle sprains and fractures often result in syndesmosis injuries. Unstable syndesmotic injuries result from relative motion between the distal ends of the tibia and fibula, an anatomic juncture which should otherwise be rigid, and warrant operative management. Clinical and radiological evaluations of intraoperative syndesmosis stability remain a challenging task, as traditional 2D fluoroscopy is limited to uniplanar translational displacement. The purpose of this pilot cadaveric study is to compare the 2D fluoroscopy and 3D cone-beam computed tomography (CBCT) stress-induced syndesmosis displacements. Methods: Three fresh-frozen lower legs underwent 2D fluoroscopy and 3D CIOS CBCT to measure syndesmosis position before dissection. Syndesmotic injury was simulated by resecting (1) the anterior inferior tibiofibular ligament (AITFL), (2) the posterior inferior tibiofibular ligament (PITFL) and the inferior transverse ligament (ITL) simultaneously, followed by (3) the interosseous membrane (IOM). Manual external rotation and the Cotton stress test were performed after each of the three resections, and 2D and 3D images were acquired. Relevant 2D and 3D parameters included the tibiofibular overlap (TFO), tibiofibular clear space (TCS), relative rotation of the fibula, and anterior-posterior (AP) and medial-lateral (ML) translations of the fibula relative to the tibia. Parameters were measured by two independent observers. Inter-rater reliability was assessed by the intraclass correlation coefficient (ICC) to determine measurement precision. Results: Significant mismatches were found in the trends between the 2D and 3D measurements when assessing TFO, TCS and AP translation across the different resection states. Using 3D CBCT, TFO was inversely proportional to the number of resected ligaments, while TCS was directly proportional to it, across all cadavers and ‘resection + stress’ states.
Using 2D fluoroscopy, this trend was not respected under the Cotton stress test. The 3D AP translation did not show a reliable trend, whereas the 2D AP translation of the fibula was positive under the Cotton stress test and negative under external rotation. The 3D relative rotation of the fibula, assessed using the Tang et al. ratio method and the Beisemann et al. angular method, suggested slight overall internal rotation with complete resection of the ligaments, with a change < 2 mm, a threshold corresponding to the buffer commonly used to account for physiologic laxity as per the clinical judgment of the surgeon. Excellent agreement (>0.90) was found between the two independent observers for each of the parameters in both 2D and 3D (overall ICC 0.9968, 95% CI 0.995-0.999). Conclusions: The 3D CIOS CBCT appears to reliably depict the trends in TFO and TCS. This might be due to the additional detection of relevant rotational malpositions of the fibula, in comparison to standard 2D fluoroscopy, which is limited to translation in a single plane. A better understanding of 3D imaging may help surgeons identify the precise measurement planes needed to achieve better syndesmosis repair.
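Inter-rater agreement of the kind reported above is commonly quantified with a two-way random-effects, absolute-agreement, single-measure intraclass correlation coefficient, ICC(2,1). The Python sketch below implements that standard formula; the abstract does not state which ICC variant was used, and the two-observer measurements here are made-up values, not the study's data.

```python
import numpy as np

def icc2_1(ratings):
    """ICC(2,1): two-way random effects, absolute agreement, single rater.

    ratings: (n_subjects, n_raters) array of measurements."""
    n, k = ratings.shape
    grand = ratings.mean()
    row_means = ratings.mean(axis=1)   # per-subject means
    col_means = ratings.mean(axis=0)   # per-rater means
    msr = k * np.sum((row_means - grand) ** 2) / (n - 1)   # between subjects
    msc = n * np.sum((col_means - grand) ** 2) / (k - 1)   # between raters
    resid = ratings - row_means[:, None] - col_means[None, :] + grand
    mse = np.sum(resid ** 2) / ((n - 1) * (k - 1))         # residual
    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)

# Hypothetical tibiofibular clear space (mm) measured by two observers.
obs = np.array([[4.1, 4.2], [5.0, 5.1], [6.3, 6.2], [3.8, 3.9], [5.6, 5.5]])
icc = icc2_1(obs)
```

Values above 0.90, as in the study's overall ICC of 0.9968, are conventionally read as excellent agreement.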

Keywords: 2D fluoroscopy, 3D computed tomography, image processing, syndesmosis injury

Procedia PDF Downloads 70
64 A Comparison Between Different Discretization Techniques for the Doyle-Fuller-Newman Li+ Battery Model

Authors: Davide Gotti, Milan Prodanovic, Sergio Pinilla, David Muñoz-Torrero

Abstract:

Since its proposal, the Doyle-Fuller-Newman (DFN) lithium-ion battery model has gained popularity in the electrochemical field. In fact, this model provides the user with theoretical support for designing lithium-ion battery parameters, such as the particle size of the electrode materials or the direction in which to adjust the diffusion coefficient. However, the model is mathematically complex, as it is composed of several partial differential equations (PDEs), such as Fick's law of diffusion, the MacInnes equation and Ohm's law, among others. Thus, to use the model efficiently in a time-domain simulation environment, the selection of the discretization technique is of pivotal importance. Several numerical methods are available in the literature to carry out this task. In this study, a comparison between the explicit Euler, Crank-Nicolson and Chebyshev discretization methods is proposed. These three methods are compared in terms of accuracy, stability and computational time. Firstly, the explicit Euler discretization technique is analyzed. This method is straightforward to implement and computationally fast. In this work, the accuracy of the method and its stability properties are shown for the electrolyte diffusion partial differential equation. Subsequently, the Crank-Nicolson method is considered. It combines the implicit and explicit Euler methods and has the advantage of being second-order accurate in time and intrinsically stable, thus overcoming the disadvantages of the simpler explicit Euler method. As shown in the full paper, the Crank-Nicolson method provides accurate results when applied to the DFN model. Its stability does not depend on the integration time step, so it is feasible for both short- and long-term tests.
This last remark is particularly important, as this discretization technique allows the user to implement parameter estimation and optimization techniques, such as system identification or genetic parameter identification methods, on top of the model. Finally, the Chebyshev discretization technique is implemented in the DFN model. This discretization method features swift convergence properties and, like other spectral methods used to solve differential equations, achieves the same accuracy with a smaller number of discretization nodes. However, as shown in the literature, these methods are not suitable for handling sharp gradients, which are common during the first instants of the charge and discharge phases of the battery. The numerical results obtained and presented in this study aim to provide guidelines on how to select the adequate discretization technique for the DFN model according to the type of application to be performed, highlighting the pros and cons of the three methods. Specifically, the unsuitability of the explicit Euler method for long-term tests will be presented. Afterwards, the Crank-Nicolson and Chebyshev discretization methods will be compared in terms of accuracy and computational time under a wide range of battery operating scenarios. These include long-term simulations for aging tests as well as short- and mid-term battery charge/discharge cycles, typically relevant in battery applications such as grid primary frequency and inertia control and electric vehicle braking and acceleration.
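The stability contrast described above can be reproduced on a toy version of the electrolyte diffusion equation: 1D Fick diffusion with Dirichlet boundaries and a central-difference Laplacian. This is a minimal sketch, not the authors' implementation; it shows that with a time step above the explicit limit dt <= dx^2 / (2D), explicit Euler diverges while Crank-Nicolson stays bounded for the same step.

```python
import numpy as np

# Toy electrolyte-diffusion problem: u_t = D * u_xx on (0, 1), u = 0 at both
# ends, second-order central differences on N interior nodes.
D, N = 1.0, 50
dx = 1.0 / (N + 1)
A = (np.diag(-2.0 * np.ones(N))
     + np.diag(np.ones(N - 1), 1)
     + np.diag(np.ones(N - 1), -1)) * D / dx**2

def step_euler(u, dt):
    # Explicit Euler: stable only for dt <= dx^2 / (2 * D)
    return u + dt * (A @ u)

def step_crank_nicolson(u, dt):
    # Crank-Nicolson: (I - dt/2 * A) u_new = (I + dt/2 * A) u, stable for any dt
    I = np.eye(N)
    return np.linalg.solve(I - 0.5 * dt * A, (I + 0.5 * dt * A) @ u)

u0 = np.zeros(N)
u0[N // 2] = 1.0                  # sharp initial profile excites all spatial modes
dt = 2.0 * dx**2 / D              # four times the explicit stability limit

u_euler, u_cn = u0.copy(), u0.copy()
for _ in range(100):
    u_euler = step_euler(u_euler, dt)
    u_cn = step_crank_nicolson(u_cn, dt)
# u_euler has blown up; u_cn has smoothly diffused and remained bounded
```

A Chebyshev spectral discretization would reach comparable accuracy with far fewer nodes than the 50 used here, at the cost of handling sharp initial gradients (like the spike above) less robustly, as the abstract notes.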

Keywords: Doyle-Fuller-Newman battery model, partial differential equations, discretization, numerical methods

Procedia PDF Downloads 23
63 Improved Approach to the Treatment of Resistant Breast Cancer

Authors: Lola T. Alimkhodjaeva, Lola T. Zakirova, Soniya S. Ziyavidenova

Abstract:

Background: Breast cancer (BC) remains one of the urgent problems in oncology. A major obstacle to the full implementation of antitumor therapy is the development of drug resistance. Given that chemotherapy is the main antitumor treatment for BC patients, an important task is to improve treatment results. Certain success in overcoming this situation has been associated with methods of extracorporeal blood treatment (ECBT) such as plasmapheresis. Materials and Methods: We examined 129 women with resistant BC of stages 3-4, aged 56 to 62 years, who had previously received 2 courses of CAF chemotherapy. All patients additionally underwent 2 courses of CAF chemotherapy, this time against the background of ECBT with ultrasonic exposure. We studied the following parameters: 1. The main peripheral blood indices before and after therapy. 2. The state of cellular immunity; the activation markers CD23+, CD25+, CD38+ and CD95+ on lymphocytes were identified using monoclonal antibodies, and humoral immunity was evaluated by the serum levels of the main immunoglobulin classes IgG, IgA and IgM. 3. The degree of tumor regression, assessed by the 4 WHO-recommended gradations: complete response (100% regression), partial response (regression of more than 50% of the initial size), stabilization (regression of less than 50% of the initial size) and progression. 4. Therapeutic pathomorphism in the tumor, graded according to Lavnikova. 5. Immediate and long-term results, up to 3 years and beyond. Results and Discussion: After extracorporeal blood treatment, anemia occurred in 38.9% of patients, leukopenia in 36.8%, thrombocytopenia in 34.6% and hypolymphemia in 26.8%. Studies of immunoglobulin fractions in blood serum established a certain relationship between the immunoglobulin classes A, G and M and their functions. The results showed that after treatment the values of the main immunoglobulins in the patients' serum approached normal.
Expression of the activation markers on CD25+ cells bearing the IL-2 receptor (IL-2Rα chain) and on CD95+ lymphocytes, which mediate physiological apoptosis, tended to increase, apparently due to activation of cellular immunity by cytokines released under ultrasonic treatment. Performing ECBT against the background of ultrasonic treatment improved the parameters of the immune system, stimulating cellular immunity and correcting imbalances in humoral immunity. The key indicator of treatment efficiency is the immediate response, measured by the degree of tumor regression. After ECBT, complete regression was observed in 10.3% of patients, partial response in 55.5% and stabilization in 34.5%; no tumor progression was observed. Morphological investigation of the tumors revealed therapeutic pathomorphism grade 2 in 15%, grade 3 in 25% and grade 4 in 60% of patients. Another main criterion of treatment effect is the duration of remission in the postoperative period (up to 3 years or more). Remission of up to 3 years with ECBT was achieved in 34.5% of patients, and 5-year survival was 54%. The research carried out suggests that a comprehensive study of the immunological and clinical course of breast cancer allows a differentiated approach to the choice of effective treatment methods.
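The four WHO response gradations used in this study reduce to a simple rule on the measured regression fraction. The sketch below is purely illustrative (the function name and scalar size measure are hypothetical; the actual WHO criteria involve additional rules, e.g. bidimensional lesion measurements and confirmation at follow-up):

```python
def who_response_category(initial_size, final_size):
    """Classify tumor response by the four gradations described above.

    'Size' is any consistent scalar measure of the tumor; hypothetical
    helper, not a full implementation of the WHO criteria.
    """
    if final_size > initial_size:
        return "progression"
    if final_size == 0:
        return "complete response"      # complete: 100% regression
    regression = (initial_size - final_size) / initial_size
    if regression > 0.5:
        return "partial response"       # more than 50% of initial size
    return "stabilization"              # regression of less than 50%
```

For example, a tumor shrinking from 40 mm to 10 mm (75% regression) would be counted as a partial response, while shrinkage to 30 mm (25% regression) would be counted as stabilization.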

Keywords: breast cancer, immunoglobulins, extracorporeal blood treatment, chemotherapy

Procedia PDF Downloads 275